
RE: OSE 2.1: using internal IPs for routing



This looks like a perfect solution! Thanks for the info, Erik.

 

Randall Theobald

Common Engineering – Performance

Dell Software Group  |  Office of the CTO

randall_theobald at dell.com  |  RR1-C336

 

From: Erik Gulliksson [mailto:erik gulliksson accedo tv]
Sent: Thursday, December 04, 2014 4:34 AM
To: Theobald, Randall
Cc: users lists openshift redhat com
Subject: Re: OSE 2.1: using internal IPs for routing

 

Hi Randall,

 

If you configure Elastic IPs (as close to public static addresses as EC2 gets) for all nodes, you can make use of the AWS EC2 split-horizon DNS behaviour that seems to be enabled for all Public DNS names (i.e. ec2-x-y-z-42.us-east-1.compute.amazonaws.com). When the Public DNS name is queried from within the VPC, the internal IP is returned; when it is queried from the general Internet, the public EIP is returned. This way you don't have to deal with multiple names for the same host.
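
To illustrate, here is a minimal Python sketch (stdlib only; the hostname is a made-up placeholder in that style, substitute one of your own instances) that shows which side of the split horizon answered a given Public DNS name:

import ipaddress
import socket

# Hypothetical Public DNS name -- replace with a real instance's name.
PUBLIC_DNS_NAME = "ec2-x-y-z-42.us-east-1.compute.amazonaws.com"

addr = socket.gethostbyname(PUBLIC_DNS_NAME)
if ipaddress.ip_address(addr).is_private:
    print(f"{addr}: internal IP -- resolved from inside the VPC")
else:
    print(f"{addr}: public EIP -- resolved from outside the VPC")

Running it once from inside the VPC and once from an external host should show the two different answers for the same name.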

 

To take it a step further, you can also set the EIP Public DNS name as the FQDN hostname on each of the OpenShift hosts running in EC2. I remember this being beneficial in some context involving broker-node communication via ActiveMQ (though I can't recall the exact details). I am using OpenShift Origin, but this probably applies to OSE 2.1 as well.
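
As a rough sanity check along those lines (Python; assumes the classic IMDSv1 metadata endpoint is reachable from the instance, and hostnamectl assumes a systemd host):

import socket
import urllib.request

# EC2 instance metadata service (IMDSv1); returns this instance's
# Public DNS name, e.g. ec2-x-y-z-42.us-east-1.compute.amazonaws.com
META_URL = "http://169.254.169.254/latest/meta-data/public-hostname"

public_dns = urllib.request.urlopen(META_URL, timeout=2).read().decode().strip()
fqdn = socket.getfqdn()

if fqdn == public_dns:
    print(f"OK: hostname is already {public_dns}")
else:
    print(f"Mismatch: hostname is {fqdn}, Public DNS name is {public_dns}")
    print(f"On a systemd host: hostnamectl set-hostname {public_dns}")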

 

Best regards,

Erik Gulliksson

 

 

On 3 December 2014 at 22:14, <Randall_Theobald dell com> wrote:

We have an installation of OSE 2.1 set up for HA in AWS using ELB and nginx reverse proxies on separate VMs (instead of the node httpd reverse proxies). We noticed that a small percentage of the time, public-IP traffic that crosses availability zones has a latency of over 1000 ms, when response times are normally under 100 ms. We found that this added latency does not occur when using the nodes' private IPs. We are also concerned about Amazon's public-traffic chargeback for traffic that originates from within our own topology via the public IPs.
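
A minimal sketch of how such a comparison can be made (Python; the addresses and port below are placeholders, not our actual nodes), timing a bare TCP connect against a node's public and private address:

import socket
import time

# Placeholder addresses -- substitute a node's public and private IPs.
TARGETS = {"public": "203.0.113.10", "private": "10.0.1.10"}
PORT = 22  # any listening port works for a connect-time measurement

for label, ip in TARGETS.items():
    start = time.monotonic()
    with socket.create_connection((ip, PORT), timeout=5):
        elapsed_ms = (time.monotonic() - start) * 1000
    print(f"{label} ({ip}): TCP connect took {elapsed_ms:.1f} ms")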

So my questions are:

 

(1) Is there a way to configure app scaling such that haproxy.cfg is set up to use a node's internal IP address for remote gears, instead of the public IP?

(2) If (1) is not possible, is it feasible to set up an AWS topology using ELB, HA proxies, etc., such that OpenShift doesn't actually know its public IPs (broker.conf and node.conf would only have private IPs)? What would break? Would I lose SSH access to the gears? What if I only needed SSH access from a jump box inside the AWS topology (one that can reach the internal IPs)? Would anything else break?

 

Basically, we’d like to know the proper way to handle routing internally, without needing to go back out through public/external IP addresses.

 

Thanks,

 

Randall Theobald

Common Engineering – Performance

Dell Software Group  |  Office of the CTO

randall_theobald at dell.com  |  RR1-C336

 


_______________________________________________
users mailing list
users lists openshift redhat com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users



 

--

Erik Gulliksson

 

Senior Operations Engineer

 

Heliosgatan 26, 120 30 Stockholm, Sweden

 

:wq!

