
Re: Continuous Deployment



On Mon, 14 Jan 2013, Clayton Coleman wrote:

> On Jan 14, 2013, at 7:17 PM, Mike McGrath <mmcgrath redhat com> wrote:
>
> > On Mon, 14 Jan 2013, Clayton Coleman wrote:
> >
> >> Just floating ideas, but there's nothing fundamental that would prevent someone from putting together a mod_cluster cartridge that supplanted haproxy, except that there is some broker work required to allow the config (although direct descriptor creation is still allowed, I thought)?
> >>
> >> I ask because integrated ecosystem scaling might very well offer features haproxy can't - so in a happy ecosystem mod_cluster might coexist with haproxy.  I guess you could also deploy a server cart that contains just mod_cluster as well, as long as it could execute scale-up events.
> >
> > Correct, this goes back to that balance between a consistent experience
> > across cartridges and what users expect.  mod_cluster may very well
> > be a better experience for JBoss users.
>
> Is the hardest part of creating a custom cart for something like this telling OpenShift to route the node proxy traffic to the cart as the primary?  I know the new broker code looks for the git host, but is it truly flexible enough that I could write a cart today that, when installed, would get traffic?
>
> Maybe that's partly a question for Dan or Krishna and the new model refactor.
>

If it speaks http, yeah.  It's pretty 'simple/dumb' in that way.  We've
sort of gone the "configure things to work most of the time" route, but
now that we're there we'll start getting more requests like this for
better polish (like bleeding off connections, not relying on sticky
sessions, etc.).

	-Mike
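
For reference, a rough sketch of what bleeding a backend out of rotation
could look like against HAProxy's admin stats socket.  The socket path and
the "express/gear-2" backend/server names below are made up, and none of
this is wired into OpenShift today:

    import socket

    HAPROXY_SOCKET = "/var/lib/haproxy/stats"  # hypothetical admin socket path

    def haproxy_cmd(command):
        # Send one command to HAProxy's stats socket and return the reply.
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(HAPROXY_SOCKET)
        s.sendall(command.encode() + b"\n")
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
        s.close()
        return b"".join(chunks).decode()

    # Drop the server's weight so no new sessions land on it, then disable
    # it once the existing sessions have drained away.
    print(haproxy_cmd("set weight express/gear-2 0"))
    print(haproxy_cmd("disable server express/gear-2"))

The "set weight" and "disable server" commands require the stats socket to
be opened in admin mode ("stats socket ... level admin") in haproxy.cfg.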

> >
> >    -Mike
> >
> >> On Jan 14, 2013, at 7:02 PM, Mike McGrath <mmcgrath redhat com> wrote:
> >>
> >>> On Mon, 14 Jan 2013, Clayton Coleman wrote:
> >>>
> >>>> What are the technical challenges to adding mod_cluster support to OpenShift?
> >>>
> >>> Just that it only works with JBoss.  There's always been a plan in
> >>> place to do connection bleeding, but it hasn't really come up until
> >>> now.  I think Matt's been a big proponent since the start.
> >>>
> >>> In theory we can do this with haproxy but we've not spent any time doing
> >>> so yet.
> >>>
> >>>   -Mike
> >>>
> >>>> On Jan 14, 2013, at 6:45 PM, William DeCoste <wdecoste redhat com> wrote:
> >>>>
> >>>>     mod_cluster used with the JBoss AS/EAP domain model provides exactly this capability, but neither is available in OpenShift yet, although both are on the roadmap.
> >>>>
> >>>>     On 1/14/13 3:36 PM, Clayton Coleman wrote:
> >>>>     And as a side note, Ram's email the other day floated a very similar concept (a balancer that can bleed between two apps).
> >>>>
> >>>> Being able to gracefully restart very large apps is desirable - if you can tolerate both being up at the same time, you can have a new copy of your app and an old copy of your app (both scaled),
> >>>> load balanced by your own balancer.  You can then control the scale up/down in a scripted fashion.
> >>>>
> >>>> On Jan 14, 2013, at 6:21 PM, William DeCoste <wdecoste redhat com> wrote:
> >>>>
> >>>>     Hi Brian,
> >>>>
> >>>>     What mechanism would you use to control the bleed from the old to the new application? This component would most likely be out of the scope of OpenShift. You would be able to
> >>>>     deploy both versions of the application in OpenShift side by side.
> >>>>
> >>>>     What type of cartridge is the application? Would you be using OpenShift's Jenkins cartridge to manage the builds and deployments or an external Jenkins?
> >>>>
> >>>>     Thanks -Bill
> >>>>
> >>>>
> >>>>     On 1/14/13 12:57 PM, Brian DeCamp wrote:
> >>>>     Hi,
> >>>> I originally posted this on the community forum, but Nam suggested I post it on the dev mailing list.
> >>>>
> >>>> I'd like to use OpenShift Origin or Enterprise to manage a Continuous Deployment pipeline for a large (30MM+ user) application. Our SOA will also A/B test every release as it
> >>>> goes out, automatically failing a release that does not meet certain thresholds for KPIs. To do this, I need to control the deployment of the new build into production. When a
> >>>> build successfully passes all our tests in Jenkins, I'd like to install the new build into production without removing the old build. Then I'll slowly bleed traffic over to the
> >>>> new version while monitoring our KPIs. If all goes well, all traffic will eventually use the new version and I can remove the old version. If not, I'll redirect all the traffic
> >>>> back to the old version and remove the new build. Traffic routing will be through HAProxy for web applications, and through a custom message broker for backend services.
> >>>>
> >>>> Is it possible to do this all with the deployment hooks? If not, is there someplace in the Origins code I can look to scope out the amount of work involved to meet these
> >>>> requirements?
> >>>>
> >>>> Brian
> >>>>
> >>>>
> >>>>
> >>>> _______________________________________________
> >>>> dev mailing list
> >>>> dev lists openshift redhat com
> >>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
> >>>>
> >>>>
> >>>> --
> >>>> Bill DeCoste
> >>>> Principal Software Engineer, Red Hat
> >>>> 978-204-0920
> >>>> wdecoste redhat com
> >>>>
> >>
>
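
Tying the bleed idea above together: a rough, hypothetical controller loop
that slowly shifts the traffic split from the old app to the new one while
watching KPIs.  kpis_healthy() and set_split() are placeholders for your
own monitoring check and your balancer's weight API (e.g. the HAProxy
stats-socket commands sketched earlier) - nothing here is an existing
OpenShift feature:

    import time

    def kpis_healthy():
        # Placeholder: query your monitoring system (error rate, latency, ...)
        return True

    def set_split(old_weight, new_weight):
        # Placeholder: push weights to whatever balancer fronts the two apps,
        # e.g. via the HAProxy stats socket shown earlier in the thread.
        print("old=%d new=%d" % (old_weight, new_weight))

    # Bleed traffic from the old app to the new one in 10% steps, rolling
    # everything back if the KPIs dip at any point along the way.
    for new_weight in range(10, 101, 10):
        set_split(100 - new_weight, new_weight)
        time.sleep(300)          # let the new split soak before checking KPIs
        if not kpis_healthy():
            set_split(100, 0)    # KPIs regressed: send everything back to the old app
            break
    else:
        set_split(0, 100)        # bleed finished: the old app can be retired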
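
On the deployment-hooks question: the user action hooks under
.openshift/action_hooks/ (pre_build, build, deploy, post_deploy) run
arbitrary executables at those points, so a post_deploy hook could at least
notify an external bleed controller once the new build is live - the
controller itself would still live outside OpenShift.  A hypothetical hook
(the controller URL is made up):

    #!/usr/bin/env python
    # Hypothetical .openshift/action_hooks/post_deploy - OpenShift runs this
    # after a successful deploy of the new build on the gear.
    import os
    import subprocess

    CONTROLLER = "http://bleed-controller.example.com/deployed"  # made-up URL

    # Tell the external controller which app just got a new build so it can
    # start bleeding traffic toward it.
    subprocess.call([
        "curl", "-s", "-X", "POST",
        "-d", "app=" + os.environ.get("OPENSHIFT_APP_NAME", "unknown"),
        "-d", "dns=" + os.environ.get("OPENSHIFT_APP_DNS", "unknown"),
        CONTROLLER,
    ])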

