We have an installation of OSE 2.1 set up for HA in AWS, using ELB and nginx reverse proxies on separate VMs (instead of the node httpd reverse proxies). We have noticed that a small percentage of the time, public-IP traffic that crosses availability zones has a latency of over 1000 ms, whereas response times are normally under 100 ms. We found that this added latency does not occur when using the nodes' private IPs. We are also concerned about Amazon's data-transfer charges for traffic that originates inside our topology but is routed via the public IPs.
So my questions are:
(1) Is there a way to configure app scaling such that haproxy.cfg is set up to use a node’s internal IP address for remote gears, instead of the public IP?
(2) If (1) is not possible, is it feasible to set up an AWS topology using ELB, HA proxies, etc., such that OpenShift doesn't actually know its public IPs (i.e., broker.conf and node.conf would contain only private IPs)? What would break? Would I lose SSH access to the gears? What if I only needed SSH access from a jump box inside the AWS topology (which can reach the internal IPs)? Would anything else break?
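To make question (1) concrete, here is roughly what we would like the auto-generated haproxy.cfg on a scaled app's head gear to contain. The backend name, server names, IPs, and ports below are hypothetical; the point is that the remote-gear server lines would resolve to the nodes' private 10.x addresses rather than their public ones:

```
# haproxy.cfg fragment (hypothetical) -- desired config for a scaled app.
backend express
    balance leastconn
    # local gear on the same node
    server local-gear 127.0.0.1:8080 check
    # remote gears: today these lines use the nodes' public IPs;
    # we'd like them generated with VPC-private addresses instead
    server gear-2 10.0.1.15:35531 check
    server gear-3 10.0.2.27:35531 check
```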
Basically, we’d like to know the proper way to handle routing internally, without needing to go back out through public/external IP addresses.
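To illustrate what we have in mind for question (2), this is roughly what we would be putting in node.conf. All hostnames and addresses are hypothetical, and we are assuming the "public" settings can simply be pointed at VPC-private addresses:

```
# /etc/openshift/node.conf fragment (hypothetical values) --
# "public" settings deliberately pointed at VPC-private addresses.
PUBLIC_HOSTNAME="node1.internal.example.com"   # resolves to 10.0.1.15 inside the VPC
PUBLIC_IP="10.0.1.15"                          # private IP instead of the Elastic IP
BROKER_HOST="broker.internal.example.com"
CLOUD_DOMAIN="apps.example.com"
```

Presumably anything that hands PUBLIC_IP/PUBLIC_HOSTNAME out to external clients (DNS records for apps, SSH endpoints shown in the console) would then only be reachable from inside the VPC, which is part of what we are asking about.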
Common Engineering – Performance
Dell Software Group | Office of the CTO
randall_theobald at dell.com | RR1-C336
users mailing list
users@lists.openshift.redhat.com