
Re: Help with OS3 and kube proxies



Hi, thanks for your reply. I am using 3 VMs on OS1. I suspect I shouldn't have any blockers at the network level, since the 3 boxes are on the same private LAN.

I have tried reading the Vagrantfile. I can see the networking configuration that applies when OPENSHIFT_DEV_CLUSTER is set, but I don't recognize any special configuration in it.

Can you elaborate on what operations are involved?

--
Paolo Antinori
Software Engineer Middleware - Red Hat
+44 7557 160 164
pantinor redhat com

On 24/10/14 17:32, Clayton Coleman wrote:
In order to do multi-minion setups you need to have a network configuration that allows an IP allocated to a docker container on one host to be reachable on another host.  Mrunal had done some of this work with the OpenShift vagrant setup to make it easier, but it's not automatic.  See the Vagrantfile and the "OPENSHIFT_DEV_CLUSTER" setup - this will define VMs and wire them together.  Are you doing this on physical machines?  If so, you should be also able to replicate that configuration.
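As a sketch, enabling that dev cluster setup looks something like the following (assuming a checkout of the OpenShift Origin repository with Vagrant installed; check the Vagrantfile itself for the exact variables it honors, since anything beyond OPENSHIFT_DEV_CLUSTER here is an assumption):

```shell
# From the root of an OpenShift Origin checkout:
export OPENSHIFT_DEV_CLUSTER=true   # switches the Vagrantfile into multi-VM mode

# Brings up the master and minion VMs and wires their networking together
vagrant up
```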

We are working upstream and with other folks at Red Hat to make the network configuration that OpenShift/Kube needs more *automatic*, but it's not there quite yet.

----- Original Message -----
Hi guys, I'd like some help understanding my problem better and what I
am possibly doing wrong:

I have deployed a setup with an OS3 master node and 2 minions.

My configuration was validated when, with a single minion registered, I
was able to deploy my 2-container app successfully.

Now I have started playing with a cluster of 2 minions at the same time
and I am facing some problems.

In particular, it happens that each of my containers gets deployed to a
different minion.
My understanding is that kube will run a proxy on each minion,
exposing my service ports, allowing me to use localhost:PORT to access
the service without caring whether the implementation is local or
running on a remote minion.

I can clearly see that this is happening, since both my minions show
both service ports.

What happens though, is that if I try to use the service behind that
port, via simple HTTP, I get a failure when I am on a node that has only
the proxy and not the corresponding docker container deployed locally.

The kind of error that I see is a connection reset:


works fine on M2
# curl -L localhost:5001
"docker-registry server (dev) (v0.8.1)"

but fails on M1
# curl -L localhost:5001
curl: (56) Recv failure: Connection reset by peer
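To show the failure more concretely, this is roughly how I probe the proxy port from each node (a minimal sketch, assuming bash and coreutils timeout; port 5001 is just the docker-registry port from my example, adjust as needed):

```shell
#!/bin/bash
# Probe a TCP port and report "open" or "closed".
# Relies on bash's built-in /dev/tcp support.
check_port() {
  local host=$1 port=$2
  if timeout 1 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}

# Hypothetical usage: run this on each minion in turn against the
# service port to see which nodes can actually reach the backend.
for host in localhost; do
  echo "${host}:5001 -> $(check_port "$host" 5001)"
done
```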

Do you have an idea what could cause this, or could you clarify whether
I have got it totally wrong and am missing something?

I can add that the master is addressing the minions via their DNS names,
and that all the nodes in the cluster have a static mapping from DNS name to IP.
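For example, something like this in /etc/hosts on every node (the addresses and hostnames here are made up, just to illustrate the mapping):

```
# /etc/hosts fragment on every node in the cluster
192.168.1.10  os3-master
192.168.1.11  minion-1
192.168.1.12  minion-2
```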

iptables seems permissive enough to allow communication between the nodes.


Any idea?

thank you


--
Paolo Antinori
Software Engineer Middleware - Red Hat
+44 7557 160 164
pantinor redhat com

_______________________________________________
users mailing list
users lists openshift redhat com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
