
Re: questions on haproxy and front proxy layer



On Tue, 8 Jan 2013, meghdoot bhattacharya wrote:

> Hi,
>      I had a few follow-up questions based on my observations running OpenShift on Fedora 17 with the broker and node running on the same host [as per Krishna's blog post].
> If you folks can clarify and comment as you see fit, that would be great. I am looking at deploying it on premises.
>
> 1. Creating multiple scaled apps results in a dedicated haproxy load balancer per scaled app. However, both apps had Apache mod_proxy acting as the front-end proxy against their named virtual hosts, correct? In that
> case, isn't the benefit of a dedicated haproxy greatly reduced, given that traffic is shared at the front proxy layer? Or in a multi-node scenario, would there be only one haproxy load balancer per node?
>

Every application gets its own dedicated haproxy setup, for security and
resource isolation.
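
If you want to see that layout for yourself, a rough sketch along the
lines below will list the gear groups and cartridges per application via
the broker REST API's gear_groups resource.  The broker host, credentials
and the exact JSON field names are assumptions to adjust for your install.

    #!/usr/bin/env python
    # Rough sketch only: confirm that each scaled app carries its own
    # haproxy cartridge by asking the broker REST API for its gear groups.
    # Broker URL, credentials and JSON field names are assumptions.
    import requests  # third-party; pip install requests

    BROKER = "https://broker.example.com"   # placeholder broker host
    AUTH = ("user", "password")             # placeholder credentials

    def gear_groups(domain, app):
        url = "%s/broker/rest/domains/%s/applications/%s/gear_groups" % (
            BROKER, domain, app)
        resp = requests.get(url, auth=AUTH, verify=False,
                            headers={"Accept": "application/json"})
        resp.raise_for_status()
        return resp.json()["data"]

    if __name__ == "__main__":
        for group in gear_groups("mydomain", "myscaledapp"):
            carts = [c["name"] for c in group.get("cartridges", [])]
            print("%s: %d gear(s), cartridges %s"
                  % (group["name"], len(group.get("gears", [])), carts))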

>
> 2. The haproxy load balancer uses the port proxy to reach the apps running in the other gears directly. In that case, what is the benefit of running Apache on those nodes with a named virtual
> host for the gear, other than bypassing haproxy for debugging? I tampered with the named virtual host name, and both haproxy and the haproxy stats page kept working, proving Apache was not in play. The intent
> would be not to run that Apache in production unless the node is also hosting a haproxy load balancer itself. Hmm...
>

To ensure that changes to any one application's haproxy balancer (a
restart, hitting resource constraints, etc.) do not impact the other
applications on the node.

>
> 3. On premises, let's say there is just one domain, www.yyy.com, and we may not really need DNS support. In our case we also have multi-level hardware load balancers already in place. In that setup,
> I would ideally like to drop at least one proxy layer, maybe the Apache proxy layer. Can you suggest what that setup would look like? Instead of one haproxy load balancer in a gear, could there be multiple
> cloned haproxy gears, with the F5 load balancing across the haproxies, which then route to the nodes? These are secondary-level F5s that we can remove in the future, but how can we work while keeping them initially?
>

You can always get a list of the slave gears from the haproxy gears and
bypass two proxy layers by contacting the gears directly.
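
As a rough sketch of that, assuming the scaled app exposes the haproxy
stats page at /haproxy-status/ (the haproxy cartridge default) and using
haproxy's standard ";csv" stats output, you can enumerate the backends
like this; the app URL is a placeholder:

    #!/usr/bin/env python
    # Rough sketch only: list the backend gears behind a scaled app's
    # haproxy by scraping its stats page in CSV form.  The /haproxy-status/
    # path and app URL are assumptions to adjust for your setup.
    import csv
    import requests  # third-party; pip install requests

    APP_URL = "http://myapp-mydomain.example.com"   # placeholder app host

    def backend_gears(app_url):
        raw = requests.get(app_url + "/haproxy-status/;csv").text
        # haproxy prefixes the CSV header row with "# "
        reader = csv.DictReader(raw.lstrip("# ").splitlines())
        for row in reader:
            # skip the aggregate FRONTEND/BACKEND rows, keep real servers
            if row["svname"] not in ("FRONTEND", "BACKEND"):
                yield row["pxname"], row["svname"], row["status"]

    if __name__ == "__main__":
        for proxy, server, status in backend_gears(APP_URL):
            print("%s/%s %s" % (proxy, server, status))

The actual host:port behind each of those server names is in the
haproxy.cfg inside the haproxy gear; that is what you would point the F5
at if you want to skip both proxy layers.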

>
> 4. Following up on the last question: if we keep all three proxy layers (F5, front proxy [or middle layer in this case], and haproxy), would the F5, say, load balance between multiple Apache (or even nginx) instances, which then
> point to haproxy? In that setup the front proxy might also have to use the external IP/port of haproxy and go through the port proxy to hit the haproxy web balancer gear.
> I guess I am struggling to figure out the best setup for questions 3 and 4. We can modify OpenShift as necessary, but some comments would definitely help.
>

You're welcome to figure out what works for you in this scenario.  Don't
feel like you have to constrain yourself to just what OpenShift provides;
it's a tool, and you can use it in whatever way works best for you.

Having said that, the front-end balancer / HA haproxy layer is something
that's been requested by several users, and parts of it would be useful in
OpenShift Online.  We're trying to take the feedback we've gotten and come
up with a solution that works best for everyone (or possibly provide
different options for people with different needs).

> 5. The latest haproxy version now supports SSL termination. Does that change anything in OpenShift? For instance, would we no longer need an Apache front proxy to terminate SSL?

It doesn't today, but we're tracking what will be useful here.

	-Mike

>
>
> Thx,
> --Meghdoot
>
>
