
Re: Continuous Deployment



The current HAProxy setup is designed as a scaling solution, and we're working through the pieces needed to make it highly available as well.  Doing a traffic bleed with HAProxy isn't out of the realm of possibility (as Mike noted), but HAProxy in our deployment is not itself configured for high availability - a meteor strike targeting the server (node) containing the HAProxy instance for your app (the haproxy gear) would leave your DNS name resolving to a dead IP.  So today, as OpenShift exists at this moment, getting a highly available web tier that could survive the death of the node containing the haproxy gear would require your own load balancer with hot failover, two identical apps, and any custom logic you wanted.  From within the app itself today, it is not possible (I think?) to configure HAProxy arbitrarily or insert scripting logic.
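To make the "bring your own balancer" idea concrete, here's a minimal sketch of the failover decision that balancer would make between the two identical apps (primary plus hot standby).  The hostnames and the health-check predicate are purely illustrative - none of this is an OpenShift API:

```python
def choose_backend(backends, is_healthy):
    """Return the first healthy backend (primary first, hot standby
    second), or None if every backend is down."""
    for backend in backends:
        if is_healthy(backend):
            return backend
    return None  # total outage: both apps (and their haproxy gears) dead

# Hypothetical usage: two identical apps in OpenShift, your own
# balancer resolves which one to send traffic to.
apps = ["myapp-primary.example.com", "myapp-standby.example.com"]
```

Real hot failover adds the hard parts (health-check flapping, DNS TTLs, connection draining), but the routing decision itself is this simple.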

Also, the way git pushes are handled is that you push to the head gear (the one with haproxy), and a post-commit hook then ensures the changes make it to the other gears.  The desirable end goal would certainly be for the push to each gear to correspond with the load balancer taking that gear out of rotation.  We've roughly described two categories of push - full-restart and rolling-restart - with only the former implemented today.  With Jenkins in the picture (Jenkins does a build, then coordinates the change being pushed) you'd want similar behavior.  There are some hard statefulness problems there (what happens if the head gear gets hit by lightning in the middle of a graceful restart?) that I'm sure mod_cluster has at least some answers for.
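For reference, the unimplemented rolling-restart category would amount to something like the plan below: each gear leaves rotation, is updated and health-checked, and rejoins before the next gear is touched.  The action names are invented for illustration - OpenShift exposes none of this today:

```python
def rolling_restart_plan(gears):
    """Return the ordered (action, gear) steps for a rolling restart
    that keeps at most one gear out of rotation at a time."""
    plan = []
    for gear in gears:
        plan += [
            ("disable", gear),       # balancer stops routing to it
            ("deploy", gear),        # apply the pushed change, restart
            ("health_check", gear),  # never re-enable a broken gear
            ("enable", gear),        # back into rotation
        ]
    return plan
```

The statefulness problem mentioned above is exactly what this sketch hides: if the coordinator (the head gear) dies mid-plan, something has to know which step each gear was left in.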

----- Original Message -----
> Having two identical applications is an interesting idea. Do I really
> need that though? It seems like there should be some way that I can
> deploy a new build while the old one is in place, even if it's only
> one application and deployment pipeline, while bleeding traffic
> using either HAProxy or mod_proxy. Not so easy I guess?
> 
> -Brian
> 
> 
> On Jan 15, 2013, at 11:45 AM, Clayton Coleman <ccoleman redhat com>
> wrote:
> 
> > 
> > 
> > ----- Original Message -----
> >> Thanks for all the thoughtful responses. We actually have two
> >> different scenarios where we want to manage a rolling deployment.
> >> One scenario is when we are rolling out a new Tomcat controller
> >> for
> >> REST services. It sounds like we would have to implement a
> >> mod_proxy
> >> cartridge to do that. The second scenario is when we are rolling
> >> out
> >> a new version of a backend service. The services tier is a
> >> message-based SOA. We plan to implement the message bus using
> >> Apache
> >> Camel over ActiveMQ. The services themselves will be simple JMS
> >> applications running on a stripped down AS7. The Camel Dynamic
> >> Router will run in an ActiveMQ cluster and manage the routing of
> >> the
> >> messages to services on the backend. I had hoped I could run all
> >> of
> >> it in OpenShift, although I'm not yet sure what kind of cartridges
> >> I'd have to create to do that.
> > 
> > W.R.T. service-based architectures in OpenShift, the workflow we
> > had envisioned was the following:
> > 
> > A domain contains all of the applications that are related:
> >  Thus domain "mydeployment" contains 1-N applications which may be
> >  the "services" in your larger application.
> >  One of those services is the web front tier, and has a specific
> >  DNS name
> >  The other services can be accessed directly via port and should be
> >  SSL protected
> >  Shared secrets between the applications in the domain allow you to
> >  protect access to individual services
> >  Each HTTP service is just a web app
> >  Applications can share access to resources like DBs or message
> >  queues
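[The "shared secrets between the applications" point above could look roughly like the sketch below: a calling service signs each request body with a domain-wide secret, and the receiving service verifies it.  The secret value and function names are illustrative only - this is not an OpenShift mechanism.]

```python
import hashlib
import hmac

DOMAIN_SECRET = b"example-domain-secret"  # hypothetical shared value

def sign(body: bytes, secret: bytes = DOMAIN_SECRET) -> str:
    """HMAC-SHA256 signature the calling service attaches to a request."""
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def verify(body: bytes, signature: str, secret: bytes = DOMAIN_SECRET) -> bool:
    """Constant-time check performed by the receiving service."""
    return hmac.compare_digest(sign(body, secret), signature)
```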
> > 
> > In the future, we would like to expose (or have discussed
> > exposing):
> >  Per app environment variables which can be set/managed
> >  consistently either on each gear or via the REST API, but which
> >  every gear in the app has access to.
> >    Possibly shared environment variables across a domain (although
> >    this is more of a convenience)
> >  Permissions on applications that allow multiple people to push
> >  code or control the application
> >  Permissions on a domain that allow certain users to have access to
> >  all of the applications (inherited)
> >  Support for multiple domains per account
> >  Support for flexible balancing and bleeding
> > 
> > In order to reach the goals you described, the only missing piece
> > today is a cartridge for MQ (which you could either replace with a
> > SaaS solution or, hypothetically, a DIY cart, although both are
> > tough).  You'd be able to set up a domain load balancer in EC2
> > that would handle your primary domain name and talk to two
> > identical applications in OpenShift for HA.  When you wanted to
> > update and bleed requests, your load balancer would have to do the
> > drain, and then you'd trigger an update on one of the apps.  You'd
> > then scale that app back up by bleeding load back to it.  Once
> > it's online you could then drain and update the other.  I realize
> > it's not ideal - mod_cluster integration or the HAProxy-type
> > changes Mike described could both simplify the bleed (although you
> > would not have the HA described above).  HA is definitely on our
> > radar and is a focus for us.
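[Reduced to its arithmetic, the drain/update/bleed loop above is just a weight schedule that your own EC2 balancer walks through - nothing below is an OpenShift or HAProxy API:]

```python
def bleed_schedule(steps):
    """Yield (old_weight, new_weight) pairs that move traffic from
    the old app to the new one in equal increments summing to 100."""
    for i in range(steps + 1):
        new = round(100 * i / steps)
        yield (100 - new, new)

# e.g. bleed_schedule(4) walks (100,0) -> (75,25) -> ... -> (0,100);
# rolling back is just walking the schedule in reverse.
```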
> > 
> >> 
> >> On Jan 14, 2013, at 6:03 PM,
> >> dev-request lists openshift redhat com
> >> wrote:
> >> 
> >>> 
> >>> Just floating ideas, but there's nothing fundamental that would
> >>> prevent someone putting together a mod_cluster cartridge that
> >>> supplanted haproxy, except that there is some broker work required
> >>> to allow the config (although direct descriptor creation is still
> >>> allowed, I thought)?
> >>> 
> >>> I ask because integrated ecosystem scaling might very well offer
> >>> features haproxy can't - so in a happy ecosystem mod_cluster might
> >>> coexist with haproxy.  I guess you could also deploy a server cart
> >>> that contains just mod_cluster as well, as long as it could
> >>> execute scale-up events.
> >>> 
> >>> On Jan 14, 2013, at 7:02 PM, Mike McGrath <mmcgrath redhat com>
> >>> wrote:
> >>> 
> >>>> On Mon, 14 Jan 2013, Clayton Coleman wrote:
> >>>> 
> >>>>> What are the technical challenges to adding mod_cluster support
> >>>>> to OpenShift?
> >>>> 
> >>>> Just that it only works with JBoss.  There's always been a plan in
> >>>> place to do connection bleeding, but it's not really come up until
> >>>> now.  I think Matt's been a big proponent since the start.
> >>>> 
> >>>> In theory we can do this with haproxy, but we've not spent any
> >>>> time doing so yet.
> >>>> 
> >>>>  -Mike
> >>>> 
> >>>>> On Jan 14, 2013, at 6:45 PM, William DeCoste
> >>>>> <wdecoste redhat com> wrote:
> >>>>> 
> >>>>>    mod_cluster used with the JBoss AS/EAP domain model provides
> >>>>>    exactly this capability, but neither is available in
> >>>>>    OpenShift yet, although they are on the roadmap.
> >>>>> 
> >>>>>    On 1/14/13 3:36 PM, Clayton Coleman wrote:
> >>>>>    And as a side note, Ram's email the other day floated a very
> >>>>>    similar concept (a balancer that can bleed between two
> >>>>>    apps).
> >>>>> 
> >>>>> Being able to gracefully restart very large apps is desirable -
> >>>>> if you can tolerate both being up at the same time, then have a
> >>>>> new copy of your app and an old copy of your app (both scaled),
> >>>>> load balanced by your own balancer.  You can then control the
> >>>>> scale up/down in a scripted fashion.
> >>>>> 
> >>>>> On Jan 14, 2013, at 6:21 PM, William DeCoste
> >>>>> <wdecoste redhat com> wrote:
> >>>>> 
> >>>>>    Hi Brian,
> >>>>> 
> >>>>>    What mechanism would you use to control the bleed from the
> >>>>>    old to the new application? This component would most likely
> >>>>>    be out of the scope of OpenShift. You would be able to
> >>>>>    deploy both versions of the application in OpenShift side by
> >>>>>    side.
> >>>>> 
> >>>>>    What type of cartridge is the application? Would you be
> >>>>>    using
> >>>>>    OpenShift's Jenkins cartridge to manage the builds and
> >>>>>    deployments or an external Jenkins?
> >>>>> 
> >>>>>    Thanks -Bill
> >>>>> 
> >>>>> 
> >>>>>    On 1/14/13 12:57 PM, Brian DeCamp wrote:
> >>>>>    Hi,
> >>>>> I originally posted this on the community forum, but Nam
> >>>>> suggested I post it on the dev mailing list.
> >>>>> 
> >>>>> I'd like to use OpenShift Origin or OpenShift Enterprise to
> >>>>> manage a Continuous Deployment pipeline for a large (30MM+ user)
> >>>>> application.  Our SOA will also A/B test every release as it
> >>>>> goes out, automatically failing a release that does not meet
> >>>>> certain thresholds for KPIs.  To do this, I need to control the
> >>>>> deployment of the new build into production.  When a build
> >>>>> successfully passes all our tests in Jenkins, I'd like to
> >>>>> install the new build into production without removing the old
> >>>>> build.  Then I'll slowly bleed traffic over to the new version
> >>>>> while monitoring our KPIs.  If all goes well, all traffic will
> >>>>> eventually use the new version and I can remove the old version.
> >>>>> If not, I'll redirect all the traffic back to the old version
> >>>>> and remove the new build.  Traffic routing will be through
> >>>>> HAProxy for web applications, and through a custom message
> >>>>> broker for backend services.
> >>>>> 
> >>>>> Is it possible to do this all with the deployment hooks?  If
> >>>>> not, is there someplace in the Origin code I can look to scope
> >>>>> out the amount of work involved to meet these requirements?
> >>>>> 
> >>>>> Brian
> >>>>> 
> >>>>> 
> >>>>> 
> >>>>> _______________________________________________
> >>>>> dev mailing list
> >>>>> dev lists openshift redhat com
> >>>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
> >>>>> 
> >>>>> 
> >>>>> --
> >>>>> Bill DeCoste
> >>>>> Principal Software Engineer, Red Hat
> >>>>> 978-204-0920
> >>>>> wdecoste redhat com
> >>>>> 
> >>>>> 
> >>> 
> >>> 
> >>> 
> >>> ------------------------------
> >>> 
> >>> Message: 2
> >>> Date: Mon, 14 Jan 2013 18:17:48 -0600 (CST)
> >>> From: Mike McGrath <mmcgrath redhat com>
> >>> To: Clayton Coleman <ccoleman redhat com>
> >>> Cc: "dev lists openshift redhat com"
> >>> <dev lists openshift redhat com>
> >>> Subject: Re: Continuous Deployment
> >>> Message-ID: <alpine LFD 2 03 1301141816270 2603 redhat com>
> >>> Content-Type: TEXT/PLAIN; charset=US-ASCII
> >>> 
> >>> On Mon, 14 Jan 2013, Clayton Coleman wrote:
> >>> 
> >>>> Just floating ideas, but there's nothing fundamental that would
> >>>> prevent someone putting together a mod_cluster cartridge that
> >>>> supplanted haproxy, except that there is some broker work
> >>>> required to allow the config (although direct descriptor
> >>>> creation
> >>>> is still allowed, I thought)?
> >>>> 
> >>>> I ask because integrated ecosystem scaling might very well offer
> >>>> features haproxy can't - so in a happy ecosystem mod cluster
> >>>> might coexist with haproxy.  I guess you could also deploy a
> >>>> server cart that contains just mod cluster as well, as long as
> >>>> it
> >>>> could execute scale up events.
> >>>> 
> >>> 
> >>> Correct, this goes back to the balance between a consistent
> >>> experience across cartridges and what the users expect.
> >>> mod_cluster may very well be a better experience for the JBoss
> >>> users.
> >>> 
> >>> 	-Mike
> >>> 
> >>>> 
> >>> 
> >>> 
> >>> 
> >>> ------------------------------
> >>> 
> >>> Message: 3
> >>> Date: Mon, 14 Jan 2013 19:54:32 -0500 (EST)
> >>> From: Clayton Coleman <ccoleman redhat com>
> >>> To: Mike McGrath <mmcgrath redhat com>
> >>> Cc: "dev lists openshift redhat com"
> >>> <dev lists openshift redhat com>
> >>> Subject: Re: Continuous Deployment
> >>> Message-ID: <20DBE284-9F2A-4A7A-8A4F-93E178D8CB26 redhat com>
> >>> Content-Type: text/plain;	charset=us-ascii
> >>> 
> >>> On Jan 14, 2013, at 7:17 PM, Mike McGrath <mmcgrath redhat com>
> >>> wrote:
> >>> 
> >>>> On Mon, 14 Jan 2013, Clayton Coleman wrote:
> >>>> 
> >>>>> Just floating ideas, but there's nothing fundamental that would
> >>>>> prevent someone putting together a mod_cluster cartridge that
> >>>>> supplanted haproxy, except that there is some broker work
> >>>>> required to allow the config (although direct descriptor
> >>>>> creation is still allowed, I thought)?
> >>>>> 
> >>>>> I ask because integrated ecosystem scaling might very well
> >>>>> offer
> >>>>> features haproxy can't - so in a happy ecosystem mod cluster
> >>>>> might coexist with haproxy.  I guess you could also deploy a
> >>>>> server cart that contains just mod cluster as well, as long as
> >>>>> it could execute scale up events.
> >>>> 
> >>>> Correct, this goes back to that balance between a consistent
> >>>> experience
> >>>> between cartridges and what the users expect.  mod_cluster may
> >>>> very well
> >>>> be a better experience for the jboss users.
> >>> 
> >>> Is the hardest part of creating a custom cart for something like
> >>> this telling OpenShift to route the node proxy traffic to the
> >>> cart as the primary?  I know the new broker code looks for the
> >>> git host, but is it truly flexible enough that I could write a
> >>> cart today that, when installed, would get traffic?
> >>> 
> >>> Maybe that's partly a question for Dan or Krishna and the new
> >>> model refactor.
> >>> 
> >>> 
> >>> 
> >>> 
> >>> ------------------------------
> >>> 
> >>> Message: 4
> >>> Date: Mon, 14 Jan 2013 19:03:06 -0600 (CST)
> >>> From: Mike McGrath <mmcgrath redhat com>
> >>> To: Clayton Coleman <ccoleman redhat com>
> >>> Cc: "dev lists openshift redhat com"
> >>> <dev lists openshift redhat com>
> >>> Subject: Re: Continuous Deployment
> >>> Message-ID: <alpine LFD 2 03 1301141901480 2603 redhat com>
> >>> Content-Type: TEXT/PLAIN; charset=US-ASCII
> >>> 
> >>> On Mon, 14 Jan 2013, Clayton Coleman wrote:
> >>> 
> >>>> On Jan 14, 2013, at 7:17 PM, Mike McGrath <mmcgrath redhat com>
> >>>> wrote:
> >>>> 
> >>>>> On Mon, 14 Jan 2013, Clayton Coleman wrote:
> >>>>> 
> >>>>>> Just floating ideas, but there's nothing fundamental that
> >>>>>> would
> >>>>>> prevent someone putting together a mod_cluster cartridge that
> >>>>>> supplanted haproxy, except that there is some broker work
> >>>>>> required to allow the config (although direct descriptor
> >>>>>> creation is still allowed, I thought)?
> >>>>>> 
> >>>>>> I ask because integrated ecosystem scaling might very well
> >>>>>> offer
> >>>>>> features haproxy can't - so in a happy ecosystem mod cluster
> >>>>>> might coexist with haproxy.  I guess you could also deploy a
> >>>>>> server cart that contains just mod cluster as well, as long as
> >>>>>> it could execute scale up events.
> >>>>> 
> >>>>> Correct, this goes back to that balance between a consistent
> >>>>> experience
> >>>>> between cartridges and what the users expect.  mod_cluster may
> >>>>> very well
> >>>>> be a better experience for the jboss users.
> >>>> 
> >>>> Is the hardest part of creating a custom cart for something like
> >>>> this telling Openshift to route the node proxy traffic to the
> >>>> cart as the primary?  I know the new broker code looks for git
> >>>> host, but is it truly flexible enough that I could write a cart
> >>>> today that when installed would get traffic?
> >>>> 
> >>>> Maybe that's partly a question for Dan or Krishna and the new
> >>>> model refactor.
> >>>> 
> >>> 
> >>> If it speaks http, yeah.  It's pretty 'simple/dumb' in that way.
> >>> We've sort of gone the "configure things to work most of the
> >>> time" route, but now that we're there we'll start getting more
> >>> requests like this for better polish (like bleeding of
> >>> connections, not using sticky sessions, etc.).
> >>> 
> >>> 	-Mike
> >>> 
> >>> 
> >>> 
> >>> 
> >>> ------------------------------
> >>> 
> >>> _______________________________________________
> >>> dev mailing list
> >>> dev lists openshift redhat com
> >>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
> >>> 
> >>> 
> >>> End of dev Digest, Vol 10, Issue 21
> >>> ***********************************
> >> 
> >> 
> 
> 
> 

