
Re: questions on haproxy and front proxy layer

Thanks, Mike, for your insights.

I want to get a better understanding of a couple of your comments.

On question 3, I think you are suggesting that the external load balancer route to the gears directly. So, are you saying that the config information from haproxy is exported to the external load balancer dynamically? And when new gears get created as part of scaling, or get moved around in districts, I am guessing the haproxy configs change automatically. So in those scenarios, if I am bypassing apache and the haproxy load balancer, I still need to monitor and update the external balancer dynamically, correct?
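
To make sure we are on the same page, below is a rough sketch of the kind of sync job I picture us having to run ourselves if we go that route. To be clear, this is purely my own sketch: the gear hostname, the haproxy.cfg path, and update_f5_pool() are made-up placeholders, not anything OpenShift provides.

#!/usr/bin/env python
# Rough sketch of the external-LB sync we would need if we bypass
# apache and haproxy. Everything here is an assumption on my part:
# the gear host, the haproxy.cfg path, and update_f5_pool() are
# placeholders, not an interface OpenShift provides.
import re
import subprocess
import time

HAPROXY_GEAR = "app-domain.example.com"    # hypothetical LB gear host
HAPROXY_CFG = "haproxy/conf/haproxy.cfg"   # assumed path inside the gear

def current_gear_endpoints():
    """Fetch haproxy.cfg from the LB gear and pull out the backend
    'server' lines, which point at the individual gears."""
    cfg = subprocess.check_output(
        ["ssh", HAPROXY_GEAR, "cat", HAPROXY_CFG]).decode()
    # backend lines look roughly like: server gear-XXX 10.1.2.3:8080 check
    return set(re.findall(r"^\s*server\s+\S+\s+(\S+)", cfg, re.M))

def update_f5_pool(endpoints):
    """Placeholder for pushing the endpoint set to the external
    balancer (e.g. via the F5 API); not implemented here."""
    print("would set F5 pool members to: %s" % sorted(endpoints))

def main(poll_seconds=30):
    known = set()
    while True:
        endpoints = current_gear_endpoints()
        if endpoints != known:    # gears were added, moved, or removed
            update_f5_pool(endpoints)
            known = endpoints
        time.sleep(poll_seconds)

if __name__ == "__main__":
    main()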

In the above setup, I am assuming the haproxy port proxy is still needed for the external-to-internal mapping, but can you tell me whether it matters if the SSL termination [from an external F5 LB, say] happens within the app server in a gear? Does the port proxy care whether the traffic is http or https?
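
My working assumption is that the port proxy is a plain layer-4 forwarder: it copies TCP bytes and never looks inside the stream, so http vs. https should make no difference to it. Something in the spirit of this toy sketch (illustration only, not the actual openshift port proxy code; the ports and address are made up):

#!/usr/bin/env python
# Illustration only; NOT the actual openshift port proxy code.
# A layer-4 forwarder copies raw TCP bytes in both directions and
# never inspects the payload, so it cannot tell (and does not need
# to know) whether the stream is http or TLS-encrypted https.
import socket
import threading

def pipe(src, dst):
    """Copy bytes one way until the source side closes."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)
    except socket.error:
        pass

def forward(listen_port, target_host, target_port):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", listen_port))
    srv.listen(5)
    while True:
        client, _ = srv.accept()
        upstream = socket.create_connection((target_host, target_port))
        threading.Thread(target=pipe, args=(client, upstream)).start()
        threading.Thread(target=pipe, args=(upstream, client)).start()

if __name__ == "__main__":
    # made-up numbers: external port 35531 -> a gear's internal IP/port
    forward(35531, "127.1.2.3", 8080)

If that mental model is right, SSL terminated in the app server would simply pass through the port proxy as opaque bytes.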


Also, don't you think apache mod_proxy does not scale like haproxy, and that it hurts to have it sitting in front of haproxy? I believe the event MPM cannot be used with SSL termination, and the worker MPM still falls short of the single event loop of haproxy/nginx. How is apache configured for OpenShift: prefork or worker MPM?

It's great to know that you guys are thinking about this, and I would definitely love to hear the solution. Any rough timelines?

Our policies will further complicate the setup. We generally don't allow non-SSL connections between nodes, so the haproxy LB contacting a gear on a separate node directly over http is a challenge. An external load balancer routing straight to the gears is a plus in this situation, since the app server itself can do the SSL termination.


Thx,
--Meghdoot

From: Mike McGrath <mmcgrath redhat com>
To: meghdoot bhattacharya <meghdoot_b yahoo com>
Cc: "dev lists openshift redhat com" <dev lists openshift redhat com>
Sent: Wednesday, January 9, 2013 10:03 AM
Subject: Re: questions on haproxy and front proxy layer

On Tue, 8 Jan 2013, meghdoot bhattacharya wrote:

> Hi,
>      I had a few follow-up questions based on my observations running OpenShift on Fedora 17 with broker and node on the same host [as per Krishna's blog post].
> If you folks can clarify and comment as you see fit, that would be great. I am looking at deploying it on premise.
>
> 1. Creating multiple scaled apps results in a dedicated haproxy load balancer per scaled app. However, both apps had apache mod_proxy acting as the front-end proxy against their named virtual hosts. In this
> fashion, isn't the effect of a dedicated haproxy greatly reduced, given that we are sharing traffic in the front proxy layer? Or, in a multi-node scenario, would there be only one haproxy load balancer per node?
>

Every application would get its own dedicated haproxy setup for security
and resource purposes.

>
> 2. The haproxy load balancer uses the haproxy port proxy to reach the apps running in different gears. In that case, what is the benefit of running apache on those nodes with a named virtual
> host for the gear, other than to bypass haproxy for debugging? I messed with the named virtual host name, and both haproxy and the haproxy stats page worked fine, proving apache was not in play. The intent
> is not to run that apache in production, unless that node may also be hosting a haproxy load balancer itself. Hmm...
>

To ensure changes to my haproxy balancer do not impact other applications
on restart, via resource constraints, etc.

>
> 3. On premise, let's say there is just one domain, www.yyy.com, and we may not really need DNS support. And, as in our case, we have multi-level hardware load balancers already in place. In that setup,
> ideally I want to kill at least one proxy layer, maybe the apache proxy layer? Can you suggest how that setup would look? Instead of one haproxy load balancer in a gear, can there be multiple
> cloned haproxy gears, with the F5 load balancing to the haproxies, which then route to the nodes? These are secondary-level F5s that we can kill in the future, but how can we work while keeping them initially?
>

You can always get a list of the slave gears from the haproxy gears and
bypass two proxy layers by contacting the gears directly.

>
> 4. Following up on the last question: if we keep all three proxy layers (F5, front proxy [or middle, in this case], and haproxy), does the F5, say, load balance between multiple apaches (or even nginx), which then
> point to haproxy? In that setup, the front proxy might also have to use the external IP/port of haproxy and go through the haproxy port proxy to hit the haproxy web balancer gear.
> I guess I am struggling to figure out the best setup in questions 3 and 4. We can modify OpenShift as necessary, but getting some comments will definitely help.
>

You're welcome to figure out what works for you in this scenario.  Don't
feel like you have to constrain yourself to just what OpenShift provides;
it's a tool, and you can use it in the way that works best for you.

Having said that, the front end balancer / HA haproxy layer is something
that's been requested by several users and parts would be useful in
OpenShift Online.  We're trying to take feedback we've gotten and come up
with a solution that works best for everyone (or possibly provide
different options for people that have different needs).

> 5. The latest haproxy version now supports SSL termination. Does that change anything in OpenShift? Say, we would no longer need an apache front proxy to terminate SSL...

It doesn't today but we're tracking what will be useful here.

    -Mike

>
>
> Thx,
> --Meghdoot
>
>

