
Re: OpenShift v3 PEP



----- Original Message -----
> From: "Clayton Coleman" <ccoleman redhat com>
> To: "Ben Parees" <bparees redhat com>
> Cc: "Openshift Dev" <dev lists openshift redhat com>, users lists openshift redhat com
> Sent: Friday, July 25, 2014 11:51:27 AM
> Subject: Re: OpenShift v3 PEP
> 
> 
> 
> ----- Original Message -----
> > ----- Original Message -----
> > > From: "Clayton Coleman" <ccoleman redhat com>
> > > To: "Ben Parees" <bparees redhat com>
> > > Cc: "Openshift Dev" <dev lists openshift redhat com>,
> > > users lists openshift redhat com
> > > Sent: Friday, July 25, 2014 10:40:07 AM
> > > Subject: Re: OpenShift v3 PEP
> > > 
> > > 
> > > 
> > > ----- Original Message -----
> > > > ----- Original Message -----
> > > > > From: "Clayton Coleman" <ccoleman redhat com>
> > > > > To: "Openshift Dev" <dev lists openshift redhat com>,
> > > > > users lists openshift redhat com
> > > > > Sent: Thursday, July 24, 2014 8:44:06 PM
> > > > > Subject: OpenShift v3 PEP
> > > > > 
> > > > > A draft version of the OpenShift v3 PEP (v3 being the system-level
> > > > > version, not a release version) is now available at
> > > > > 
> > > > >  https://github.com/openshift/openshift-pep/blob/master/openshift-pep-013-openshift-3.md
> > > > > 
> > > > > In v3 there are a few major transitions being proposed.
> > > > > 
> > > > > First, Docker.  No surprises there - we're committed to evolving
> > > > > our OpenShift container story with gears and cartridges to take
> > > > > advantage of images and the community that has emerged around Linux
> > > > > containers, and to enable execution of containers at the largest
> > > > > scales while giving operators control over how software updates
> > > > > flow through an organization.  At the same time, many developers on
> > > > > Windows or Mac won't know or care about the underlying container
> > > > > runtime, and only the improvements we can offer them for building
> > > > > and customizing their cartridges and application source code will
> > > > > matter.  With Docker, we want any image anyone creates to be usable
> > > > > as a component of your application.
> > > > > 
> > > > > Second, the introduction of Kubernetes [1] as a core component of
> > > > > the OpenShift system.  Kubernetes would be the foundational code
> > > > > for what we call the Broker - major elements of the OpenShift API,
> > > > > plugins and integration with authentication sources, and the
> > > > > scheduling and orchestration code that puts gears onto hosts.  We
> > > > > believe strongly in the model and principles that Google launched
> > > > > the project with, and we think there is a lot of value to bring to
> > > > > developers and operators.  A key component of that change is
> > > > > allowing increasing sophistication of deployment and topology and
> > > > > exposing even more ways of putting applications together.  We also
> > > > > wish to bring in the expertise of the big data communities and
> > > > > their job scheduling experience through the Kubernetes project -
> > > > > letting administrators co-schedule batch jobs and containers
> > > > > together on infrastructure.  And we want to make the underlying
> > > > > infrastructure OpenShift runs on available for small and medium
> > > > > deployments in a painless, easy-to-set-up way, with close
> > > > > integration with systemd and RHEL7.
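
For readers new to Kubernetes, the declarative container description at the heart of the model described above looks roughly like this. The field names approximate the early v1beta1 API and should be read as an assumption - a sketch of the idea, not a reference:

```json
{
  "id": "web-example",
  "kind": "Pod",
  "apiVersion": "v1beta1",
  "labels": { "name": "web-example" },
  "desiredState": {
    "manifest": {
      "version": "v1beta1",
      "id": "web-example",
      "containers": [
        {
          "name": "web",
          "image": "dockerfile/nginx",
          "ports": [ { "containerPort": 80, "hostPort": 8080 } ]
        }
      ]
    }
  }
}
```

The scheduler's job is to place descriptions like this onto hosts - the role the Broker's orchestration code plays for gears today.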
> > > > > 
> > > > > Third, we want to focus on enabling more complex application
> > > > > topologies than the 2.x model currently allows.  That means
> > > > > relaxing restrictions originally put in place to make things easy
> > > > > for new users.  With 3.x, you'll be able to tie tiers of software
> > > > > together organically and manage the relationships between those
> > > > > containers.  Some easy goals: run databases as standalone services,
> > > > > wire them up to multiple web tiers, and assemble multiple web
> > > > > applications into modern service-oriented architectures.  A key
> > > > > goal is enabling clustered software like MongoDB replica sets,
> > > > > MySQL master-slave clusters, Cassandra, and others, and being able
> > > > > to create templates that let you deploy those configurations
> > > > > repeatedly.  At the same time, we want to make it easier to depend
> > > > > on other *aaS solutions like OpenStack and our partners in the
> > > > > OpenShift Marketplace, so wiring your software up to external
> > > > > components and depending on them will be trivial.
> > > > > 
> > > > > Fourth, integrating the core OpenShift technology with the xPaaS
> > > > > and Fabric communities will enable new opportunities for those
> > > > > building on top of Java middleware.  We're very excited about
> > > > > ensuring that the Docker and topology capabilities we're planning
> > > > > match well with the powerful compositional tools for messaging,
> > > > > microservices, and event pipelining that Fabric will bring.
> > > > > 
> > > > > Finally, compatibility between 2.x (oo4) and 3.x (oo-next) will be
> > > > > critically important.  While the introduction of Docker represents
> > > > > a big shift at the host level, we plan on ensuring that existing
> > > > > OpenShift applications can move into Docker containers with minimal
> > > > > effort.  In addition, transitioning across deployments will be even
> > > > > easier given the new routing layer.
> > > > > 
> > > > > The PEP has a lot of new terminology; as OpenShift has evolved,
> > > > > we've refined use cases underneath some of our existing terms and
> > > > > found we've drifted rather far from where we started.  Many of the
> > > > > new names in the PEP are simply restatements of our old concepts,
> > > > > and not all of them may survive the subsequent discussions and
> > > > > implementations.  Here's a quick mapping to the PEP:
> > > > > 
> > > > >  Cartridge -> Image Repository
> > > > >   (a cartridge is a way of providing updated binaries to end users
> > > > >   without disruption, and an image repo should be similar)
> > > > 
> > > > This seems like a slightly mismatched mapping, since the image
> > > > repository will contain (I think?) both fully runnable images
> > > > (equivalent to a binary deploy - no build step) and builder images
> > > > (which need to be combined with source to be run).
> > > > 
> > > > Cartridges today are much more the latter than the former (aside
> > > > from the template code, they have no runnable source associated).
> > > 
> > > Like MySQL, Postgres, Redis, phpMyAdmin, MongoDB, and others?
> > 
> > Examples of carts that don't need source added don't negate the fact
> > that many do.  And even so, I still think there is a difference between
> > a generic postgres image and one a user might have tweaked for a
> > particular application deployment (via config settings, for example).
> > 
> > So I still think Image Repository is a superset of what we'd consider a
> > "cartridge" today; that's all I'm trying to say.  I'm not against the
> > term, though I do think there is room for some sort of a "template"
> > indicator (to indicate whether the image is intended to be customized
> > prior to deployment), along with the "builder" indicator (which
> > indicates the image needs to be sent through a build step before it can
> > be run).
> > 
> > Those would be as compared with a true binary-deployment image, in
> > which the image is runnable as is (either because it was packaged that
> > way in the first place, or because it is the product of an STI build,
> > or something else).
> 
> Let's go a step further:
> 
> - some images can be deployed (in a real system, probably 99.9%)
> - all images can be the basis for a Docker build
> - some images can be the basis for a Source-to-Images build (what you
>   call a "builder" above)
> - some images which are the basis for a Source-to-Images build can be
>   directly deployed
> - all images can be the basis for a Docker build that creates an image
>   that can itself be the basis for a Source-to-Images build
> - it should be possible to turn almost any image repository into a
>   Source-to-Images repository by providing an assemble script
> 
> Therefore, we should:
> 
> 1. Identify repositories that can be the basis for a Source-to-Images
>    build
> 2. Make it easy to enable a repository to be the basis for a
>    Source-to-Images build
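
The assemble-script idea in the list above can be sketched concretely. This is only an illustration of the concept under assumed conventions - the hook name, directory layout, and build step here are not a finalized Source-to-Images contract:

```shell
#!/bin/sh
# Hypothetical "assemble" hook: the single script that, per the proposal
# above, could turn a plain image repository into a Source-to-Images
# builder.  Directory names are illustrative assumptions.
set -e

assemble() {
    src_dir="$1"   # where the platform would drop the application source
    app_dir="$2"   # where this image expects application code to live

    mkdir -p "$app_dir"
    cp -R "$src_dir"/. "$app_dir"/

    # An image-specific build step would run here (dependency install,
    # asset compilation, etc.); this sketch just records that it ran.
    echo "assembled $(ls "$app_dir" | wc -l) item(s)" > "$app_dir/.assembled"
}

# Demonstrate against throwaway directories.
src=$(mktemp -d)
app=$(mktemp -d)/app
echo 'print("hello")' > "$src/app.py"
assemble "$src" "$app"
```

In this model, step 2 above stays cheap: enabling a repository as a builder means adding one such script on top of whatever entrypoint the image already has.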
> 
> Can you define "template" a bit more?

Template was trying to describe images that need some customization (e.g. password settings, certificates, other config) before they should be run, as distinct from images which can be started as is.  But I'm no longer certain it's really a useful concept.

> 
> > 
> > > 
> > > > 
> > > > There may not be an appropriate term/mapping in v3, but I just
> > > > wanted to point out the expansion in meaning associated with a v3
> > > > image repository, as compared with a v2 cartridge.
> > > > 
> > > > 
> > > > On a related note, I'm not entirely clear (after reading the PEP) on
> > > > the difference between an image stream and an image repository.  Are
> > > > they being used interchangeably?  Does an image stream simply wrap
> > > > additional metadata around an image repository?
> > > 
> > > The PEP is being updated to use "image repository" consistently.
> > > Since that's the Docker term for the same concept, using "stream"
> > > introduces unnecessary confusion.  I'm updating the images and PEP
> > > now.
> > > 
> > > > 
> > > > 
> > > > 
> > > > > 
> > > > >  Domain -> Project
> > > > >   (domains today map very closely to projects)
> > > > > 
> > > > >  Application/Gear Group -> Service
> > > > >   (application has a lot of connotations, a service is a bit more
> > > > >   constrained)
> > > > > 
> > > > >  Gear -> Container
> > > > >   (a gear is already a container)
> > > > > 
> > > > > 
> > > > > This is a big chunk of PEP, but it represents a lot of discussions
> > > > > and suggestions from many people across the OpenShift community.
> > > > > Please leave feedback or ask questions as you can - help us sketch
> > > > > out the future of OpenShift!
> > > > > 
> > > > > [1] https://github.com/GoogleCloudPlatform/kubernetes
> > > > > 
> > > > > Clayton Coleman | Lead Engineer, Red Hat OpenShift
> > > > > 
> > > > > _______________________________________________
> > > > > dev mailing list
> > > > > dev lists openshift redhat com
> > > > > http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
> > > > > 
> > > > 
> > > 
> > 
> 

