I agree with you, and I have tried to
contribute to the docs, but that wasn't an easy task.
Maybe I was also too naïve, so blame me that I
stopped contributing.
@1: Currently that’s not possible; you will need
to add the label for the dedicated router to every route.
‘oc create route …’
has no option to set labels, so you will need to use
oc expose service ... --labels='router=one'
or you can set the labels in the web console.
Oh, and by the way: the default router MUST also
have ROUTE_LABELS set if you don’t want to expose all routes on
the default router.
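As a rough sketch of the above, assuming OCP 3.3’s ‘oc set env’ and made-up names (service ‘myservice’, router deployment configs ‘router-one’ and ‘router’, label ‘router=one’):

```shell
# 'oc create route' has no label option, so create the route
# via the service and attach the label there:
oc expose service myservice --labels='router=one'

# Restrict the dedicated router to routes carrying that label:
oc set env dc/router-one ROUTE_LABELS='router=one'

# The default router MUST also get a ROUTE_LABELS selector,
# otherwise it will still pick up every route:
oc set env dc/router ROUTE_LABELS='router=default'
```

The exact selector values are up to you; the point is that an empty ROUTE_LABELS on the default router means "match everything".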
@2: you will need the new router template from OCP 3.3;
there are additional env variables necessary to be able to
run more than one router on the same node.
And on the router nodes you need to add the additional
ports to the iptables chain ‘OS_FIREWALL_ALLOW’.
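For example, with a second router on ports 81/444/1937 (the router name is made up; the env variable names below are the ones the 3.3 HAProxy router template uses, but double-check against your template):

```shell
# Move the second router to non-default ports so it can share a node:
oc set env dc/router-two ROUTER_SERVICE_HTTP_PORT=81 \
    ROUTER_SERVICE_HTTPS_PORT=444 STATS_PORT=1937

# On each router node, open those ports in the OS_FIREWALL_ALLOW chain:
iptables -A OS_FIREWALL_ALLOW -p tcp -m tcp --dport 81 -j ACCEPT
iptables -A OS_FIREWALL_ALLOW -p tcp -m tcp --dport 444 -j ACCEPT
iptables -A OS_FIREWALL_ALLOW -p tcp -m tcp --dport 1937 -j ACCEPT
```

Remember that iptables rules added by hand don't survive a reboot unless you persist them.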
@3: This would be a little bit tricky on the
same node, due to the fact that the ipfailover pods
only handle one config file. Maybe there is a
way with *VIPS, but I have never tried this.
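One way this is usually approached is a separate ipfailover deployment per shard, each with its own VIP set; a sketch for shard #1 (the deployment name, VIP, and selector are made up, and I have not verified this on a shared node):

```shell
# Dedicated floating VIP for shard #1, watching its router's HTTP port:
oc adm ipfailover ipf-shard-1 --create \
    --virtual-ips='192.168.1.101' --watch-port=81 \
    --replicas=2 --selector='router=one'
```

A second deployment with a different VIP and watch port would then cover shard #2.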
The current sharding documentation is very high
level and doesn’t cover actual real-world use step by step.
Anyway, I succeeded in creating 2 shards. I have a lot
of questions on this topic about how to proceed next …
How do I tell a project that all apps created in
this project should use router #1 or router #2?
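In 3.3 the router can also shard by namespace labels instead of per-route labels, which may be what you want here (the project name and label value below are examples):

```shell
# Label the project, then point a dedicated router at that label:
oc label namespace project1 router=one
oc set env dc/router-one NAMESPACE_LABELS='router=one'
```

With NAMESPACE_LABELS set, the router only serves routes from matching projects, so app owners don't have to label each route themselves.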
Now we have 3 routers (the default created as part
of the installation + the 2 additional routers). How do the
ports work? Ports 80, 443 & 1936 are assigned to the default router. I
changed the ports to 81/444/1937 and 82/445/1938 for
shard #1 and shard #2 respectively. Are these ports opened
automatically, or must they be opened explicitly?
Ipfailover (floating VIP) is bound to the default
router. Do we need to create additional IP failover pods
with different IPs matched to shard #1 and #2? Or can we
share the same IP failover pods, with a single floating VIP,
for the newly created shards as well?
There are docs here:
On Sat, 24 Sep
2016 at 06:13 Srinivas Naga Kotaru (skotaru) <skotaru cisco com>
Just saw 3.3. We are
rethinking our cluster design and want to
consolidate to 1 cluster per data center. Initially
we were planning 2 clusters per data center, to
serve internal and external traffic, each dedicated to
its own cluster.
Moving to a single cluster per DC will offer multiple
advantages to us. We are currently running the latest
Is sharding available in the 3.2.x branch, or do we need to
wait for 3.3? I was thinking this feature has
been available from 3.x onwards, as per the
available documentation. Not sure what is meant
by it being for the upcoming 3.3.
We want to take advantage of this feature and test
it ASAP. The current documentation is not clear and
explains it only at a high level.
Can you help me, or point me to the right documentation
which explains step by step how to test this feature?
Can we control routes at the project level, so that clients
won’t modify their routes to move them from prod to
non-prod, or from internal to external routers?
users mailing list
users@lists.openshift.redhat.com