
Re: Kolab / Complex Distributed Application in OpenShift





> On Nov 5, 2014, at 5:43 AM, Jeroen van Meeuwen (Kolab Systems) <vanmeeuwen kolabsys com> wrote:
> 
> On 2014-11-03 23:57, Clayton Coleman wrote:
>>> On Nov 3, 2014, at 5:59 AM, Jeroen van Meeuwen (Kolab Systems) <vanmeeuwen kolabsys com> wrote:
>>> == LDAP ==
>>> Either service's setup procedure depends on an FQDN that resolves forwards and backwards, yet Docker containers normally have a hostname without a domain name space (the hostname being the container ID, specifying the -h option on the command-line sets the hostname).
>>> As far as I'm aware, it is not possible to set a domain name space in OpenShift, and it's not possible to set a container's FQDN in OpenShift either. From where I'm sitting, having gotten only as far as this, this prevents Directory Services from being deployed as part of an OpenShift v2 cartridge / v3 template.
>> We plan at least on paper to expose internal DNS names to the cluster
>> that would resolve to services (name.namespace.local, perhaps).  Paul
>> Weil can describe more about where kube is on this, but the intent
>> would be that services can resolve inside the cluster.
> 
> While I can define the "ldap" service, connecting to the container providing it seems the lesser problem. For this particular issue I would need no resolving of services between containers: I can just connect to whatever the relevant service's environment variables tell me, through whatever proxy is fronting the service.
> 
> When names for services are exposed through DNS, I suppose I need only specify that yay pod provides yay service, and connect to yay.namespace.local directly -- eliminating the need to specify the "service"? This would be great for "internal services" that require no exposure to the outside world (ldap, sql, internal restful apis, ...).

You would still have to define a yay service - pods aren't "special" and so we aren't planning on having DNS point to them directly (except possibly by having multiple IPs in a service DNS entry).
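
To make that concrete, a service definition for the hypothetical "yay" pods might look roughly like the sketch below. This is illustrative only -- the v1beta1 field names are still in flux, so treat the keys as assumptions rather than a stable API:

```yaml
# Illustrative sketch of a service selecting the "yay" pods;
# actual field names depend on the Kubernetes API version in use.
id: yay
kind: Service
apiVersion: v1beta1
port: 8080          # port the service proxy listens on
containerPort: 8080 # port on the selected pods
selector:
  name: yay         # matches the label carried by the "yay" pods
```

The selector is what ties pods to the service; the pods themselves never get a DNS name of their own.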

Services do not expose you to the outside world - that's TBD, but it will probably be a combination of external IP addresses and routes (entries in a proxy routing table at L3 or L7).

> 
> In the case of the LDAP setup though, where the hostname is otherwise just "989a663bc081", even an (extra) /etc/hosts entry of "989a663bc081.local" would allow the setup to continue.

I believe services would get a reachable DNS entry, but not an /etc/hosts entry.  The reason is that the service is the mechanism by which those links are exposed.  In the absence of an internal DNS server we might still expose this in the container as "servicename.local".
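
In the meantime, the environment-variable route works today. A minimal sketch, assuming a service called "ldap" -- the exact variable names are an assumption here and should be checked against what actually shows up in the container:

```python
import os

# Sketch: service connection details are injected into the container as
# environment variables. The names below assume a service called "ldap";
# the fallback values are hypothetical defaults for illustration.
host = os.environ.get("LDAP_SERVICE_HOST", "ldap.local")
port = os.environ.get("LDAP_SERVICE_PORT", "389")
url = "ldap://%s:%s" % (host, port)
print(url)
```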

> 
> It is only LDAP server replication topology setups that would require "the other" LDAP server's names be available and addresses to resolve (directly, not via a defined service), but I was thinking allowing LDAP replication scenarios would be a completely different milestone for me.

That's been discussed under nominal services and things like ZooKeeper and etcd (which need those addresses).  I would prefer there be an easy way to get at that info, but we aren't there quite yet.  It's an active discussion point.
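
If a service DNS entry did carry one address per pod, replication peers could in principle be discovered by resolving that name. A sketch only -- "ldap.namespace.local" is a hypothetical name, and this multi-address behaviour is not implemented yet:

```python
import socket

def peer_addresses(name):
    """Return the distinct IPv4 addresses behind a DNS name -- one per
    pod, if a service DNS entry ever exposes multiple A records."""
    infos = socket.getaddrinfo(name, None, socket.AF_INET, socket.SOCK_STREAM)
    return sorted({info[4][0] for info in infos})

# Hypothetical usage: each address would be one LDAP replica to peer with.
# peers = peer_addresses("ldap.namespace.local")
```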

> 
>>> == Storage Persistence ==
>>> The life-cycle of a single container being of such a relatively short period of time, I'm looking to manage a level of storage persistence; volumes such as /srv/{dirsrv,mysql,imap} come to mind.
>>> I'm not sure how this translates to OpenShift (v3), as while I have found that the persistence layer is provided through an OPENSHIFT_DATA_DIR environment variable pointing to some persistent storage (right?), it seems to not translate to a mount-point or static container-local filesystem hierarchy I can consistently refer to from within the container (it would be /home/$magic/someplace/ rather than /srv/dirsrv/)?
>> The current plan is that OpenShift will expose lazily provisioned
>> storage volumes as a resource type (I want X gig of unshared high io
>> storage), and make those easy to add to individual pods.  In your pod
>> definition you'd be able to reference a volume by name and your pods
>> would be mounted and started with that volume at the appropriate
>> directory.
> 
> That sounds exactly like what I would need. Are you aware of a timeline on this, and/or whether it is among the "must have" features for OpenShift v3 stable?

The goal for OpenShift 3 would be to have the basic framework in place and be able to back that with NFS, EBS, Cinder, and hopefully Ceph.  I'm hoping active development on that kicks off soon.
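
Referencing a named volume from a pod would look something like the sketch below. Again, purely illustrative -- the volume-source syntax in particular is unsettled, and the names (dirsrv-data, kolab/dirsrv) are hypothetical:

```yaml
# Illustrative sketch: a pod mounting a named volume at a fixed path,
# giving the container a stable location like /srv/dirsrv regardless of
# where the storage actually lives. Field names are subject to change.
id: dirsrv
kind: Pod
apiVersion: v1beta1
desiredState:
  manifest:
    version: v1beta1
    containers:
      - name: dirsrv
        image: kolab/dirsrv
        volumeMounts:
          - name: dirsrv-data
            mountPath: /srv/dirsrv
    volumes:
      - name: dirsrv-data
        # a reference to the provisioned storage resource would go here
        # once the resource type described above exists
```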

> 
>>> == Inclusion of Other Cartridges / Templates ==
>>> (...)
>>> == Run-time Configuration / Dependencies ==
>>> (...)
> 
> Indeed perhaps best delved in to separately.
> 
> Kind regards,
> 
> Jeroen van Meeuwen
> 
> -- 
> Systems Architect, Kolab Systems AG
> 
> e: vanmeeuwen at kolabsys.com
> m: +41 79 951 9003
> w: http://www.kolabsys.com
> 
> pgp: 9342 BF08

