Hi Meghdoot,
Comments/answers inline tagged RR.
Also, on a related note, we are in the very early stages of drafting a proposal (PEP) for the routing and scaling pieces, which would
include our next potential steps, and we would welcome any comments/feedback once it's out there.
On Jan 9, 2013, at 1:00 PM, meghdoot bhattacharya wrote:
modify your external load balancer appropriately. And yes, when gears get added/removed (as part of scaling) + on gear moves, the haproxy config is
RR: The port proxy (it also runs haproxy, but I'd rather call it the port proxy to avoid confusion) is just a plain ole TCP proxy - it basically
routes to your application's internal ip/port (from the "externally" exposed port).
And the port proxy doesn't really care whether it's http[s] traffic - a "passthrough-filter"!!
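Just to make the "passthrough" point concrete, a plain TCP passthrough in haproxy looks roughly like the sketch below - the port numbers and internal address are made-up placeholders, not our actual values:

```
# Hypothetical port-proxy entry: a plain TCP passthrough.
# "mode tcp" means haproxy never inspects the payload, so
# http, https or any other protocol flows through untouched.
listen gear-port-proxy
    bind *:35531                    # externally exposed port (example value)
    mode tcp                        # passthrough - no L7 inspection
    server gear1 127.0.250.1:8080   # gear's internal ip/port (example value)
```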
Just to clarify things: in the existing infrastructure/code today we don't do the SSL termination inside the app server (on the gear).
Rather, we do our SSL termination at the front-end proxy layer (Apache w/ virtual hosts), which in turn proxies the request to the gears.
So today, that is the haproxy server running in the gear for a scaled application, and in the case of a non-scaled app it's just the app server running
in that gear.
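Roughly, the front-end layer looks like the sketch below - the hostname, cert paths and the gear's internal address are illustrative, not the actual node config:

```
# Hypothetical front-end vhost: SSL terminates here at Apache,
# and the decrypted request is proxied to the gear over plain http.
<VirtualHost *:443>
    ServerName myapp-mydomain.example.com
    SSLEngine on
    SSLCertificateFile    /etc/pki/tls/certs/node.crt
    SSLCertificateKeyFile /etc/pki/tls/private/node.key
    ProxyPass        / http://127.0.250.1:8080/
    ProxyPassReverse / http://127.0.250.1:8080/
</VirtualHost>
```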
RR: It's preforked. And yes, that is one of the issues we have today with scale, amongst others (websockets not being the least).
Now as re: websockets, we do currently have an experimental event-loop driven proxy server (Node.js based) - ports 8000/8443. And that does
definitely scale better - but it's a solution we are field-testing right now.
But vis-a-vis using haproxy in front of Apache, that's not a viable solution for a variety of reasons, as Mike mentioned - security, resource usage, and
dynamic reloading of routes/downtime - we don't want an app that's scaling to affect traffic/routing to other apps/gears on that node.
That's the rationale for "containing" the haproxy router within the scope of the gear.
And as mentioned earlier, we are at a really early stage of drafting up a proposal for the routing/scaling bits.
RR: Hmm, that might be an issue for a couple of reasons. One, as mentioned above, is that we do the SSL termination at the front-end proxy
layer (Apache) and not at the app server. That could, however, be solved by running the backend content server (app server) w/ SSL
termination and proxying https.
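If we went that route, the front-end would re-encrypt toward the gear, along the lines of the sketch below (again, the hostname, addresses and cert paths are placeholders):

```
# Hypothetical variant: the gear's app server terminates SSL itself,
# so Apache proxies https through to it. SSLProxyEngine enables
# outbound TLS from the proxy to the backend.
<VirtualHost *:443>
    ServerName myapp-mydomain.example.com
    SSLEngine on
    SSLCertificateFile    /etc/pki/tls/certs/node.crt
    SSLCertificateKeyFile /etc/pki/tls/private/node.key
    SSLProxyEngine on
    ProxyPass        / https://127.0.250.1:8443/
    ProxyPassReverse / https://127.0.250.1:8443/
</VirtualHost>
```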
The bigger issue, however, is that the inter-node communication is not just restricted to http[s] - it could well be another service
which is not really secure on-the-wire. For example, a gear running a service like mongo/mysql/postgres/memcached etc. that needs to be accessed
remotely by the application server.
rmillner mentioned IPSec as a possible solution you can use on this front - which is really neat as that should work generically. HTH.
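As a rough sketch of what that could look like with Openswan/Libreswan - the node addresses below are invented for illustration, and the pre-shared key would live in /etc/ipsec.secrets:

```
# Hypothetical /etc/ipsec.conf fragment: transport-mode IPsec
# between two nodes, so any protocol (mongo, mysql, memcached, ...)
# is encrypted on the wire without the services knowing about it.
conn node1-to-node2
    type=transport
    left=10.0.0.1          # node 1 (example address)
    right=10.0.0.2         # node 2 (example address)
    authby=secret          # pre-shared key in /etc/ipsec.secrets
    auto=start
```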