
Re: Single OpenShift cluster/instance across datacenter?

Thanks Clayton. That helps, but I still have a few questions and need your expert comment.

1. What is the impact of etcd being unavailable if connectivity between DCs is broken for a few minutes, or hours in the worst case? Will it impact runtime or only provisioning time?

2. How do we handle code deployments in a multi-cluster setup? Will teams have to deploy app by app to each individual cluster?

3. Since each app exposes a few routed URLs, in a multi-cluster setup is putting a global load balancer in front and forwarding to the OpenShift routing URLs the right approach?
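
For context, the pattern described in question 3 could look something like the following HAProxy sketch: a global load balancer doing TLS passthrough to each cluster's OpenShift router, with health checks taking a DC out of rotation if its router is unreachable. The hostnames and the passthrough choice are assumptions for illustration, not from this thread:

```
# Hypothetical global LB fronting per-DC OpenShift routers (TLS passthrough).
frontend apps_in
    bind *:443
    mode tcp
    default_backend openshift_routers

backend openshift_routers
    mode tcp
    balance roundrobin
    server dc1 router.dc1.example.com:443 check
    server dc2 router.dc2.example.com:443 check
```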

Srinivas Kotaru

On 12/15/15, 10:44 AM, "Clayton Coleman" <ccoleman redhat com> wrote:

>On Tue, Dec 15, 2015 at 1:24 PM, Srinivas Naga Kotaru (skotaru)
><skotaru cisco com> wrote:
>> We are running into an issue similar to the one below:
>> http://stackoverflow.com/questions/34194602/single-kubernetes-openshift-cluster-instance-across-datacenters
>> Pondering whether to go with a single cluster spanning 3-4 data centers,
>> or with a cluster dedicated to each local DC. Each approach has its own
>> pros and cons, but the multiple-clusters approach creates more
>> management/operational overhead for both the platform and client teams.
>> Why do we dislike the multiple-clusters approach?
>> Each cluster has its own API endpoint. Clients have to use different API
>> endpoints when working with each DC's pods or doing lifecycle management.
>> They might not like that.
>> While provisioning apps, pods need to be created on each cluster and
>> tied into another, global routing layer.
>> If an application has pods running in each cluster, how do they
>> communicate unless additional service groups are created? And will other
>> apps talking to this app have to deal with multiple service and routing
>> groups? Communication within an app's pods, and inter-application
>> communication, is complex.
>> To mask the multiple clusters and API endpoints, we would have to build
>> an uber-style common orchestration, routing, and client interface, where
>> clients can ignore the backend topology and use a common interface for
>> their day-to-day work.
>> I heard the master plane also can't span data centers; in particular
>> etcd, due to its latency requirements, needs to be co-located in one
>> location. Is it still a problem if the data centers are connected with
>> decent network infrastructure?
>Etcd definitely needs low latency between instances.  If you can
>deliver <5ms ping between datacenters, that's not an issue.  If you're
>higher than that, failover and write performance will suffer (in a
>cluster, each write has to be acknowledged by a quorum of nodes).
>That doesn't mean that you can't run an HA setup inside a single data
>center and have nodes in other data centers - you just need to assess
>the chance of losing the network between datacenters and what the
>outcome is in the event of that failure.
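
The write-acknowledgment point above can be sketched: in a Raft-based store like etcd, a write commits once a quorum (majority) of the cluster acknowledges it, so commit latency tracks the fastest quorum's round-trip time, not the slowest member's. A minimal illustration, with made-up RTT values:

```python
def quorum_commit_latency(peer_rtts_ms):
    """Approximate commit latency for a Raft leader.

    The leader counts toward the quorum itself, so with N peers the
    cluster size is N + 1 and the leader needs acks from the fastest
    (N + 1) // 2 peers before a write commits.
    RTT values here are hypothetical, purely for illustration.
    """
    cluster_size = len(peer_rtts_ms) + 1          # peers + leader
    acks_needed = cluster_size // 2               # quorum minus the leader
    return sorted(peer_rtts_ms)[acks_needed - 1]  # slowest ack still needed

# 3-node cluster: one fast local peer, one slow cross-DC peer.
print(quorum_commit_latency([1.0, 40.0]))            # commits at 1.0 ms
# 5-node cluster spread over DCs: needs two peer acks.
print(quorum_commit_latency([1.0, 2.0, 40.0, 45.0]))  # commits at 2.0 ms
```

This is why a single slow cross-DC member degrades failover more than steady-state writes: the quorum can route around it until it is needed.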
>I've suggested doing both - run a cluster in each data center for
>things that absolutely must be able to survive multiple datacenter
>losses, and have those apps be deployed to each separate cluster.
>Then run a single cluster (led by one data center) for other use cases
>(general dev use, staging, preprod, etc).
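
Deploying the same app to each separate cluster, as suggested above, amounts to repeating the rollout against each cluster's API endpoint. A hedged shell sketch (the endpoint names and manifest file are assumptions; the commands are echoed rather than executed to show the shape):

```shell
# Hypothetical sketch: repeat the same rollout per cluster API endpoint.
deploy_everywhere() {
  for api in "$@"; do
    # Echoed for illustration; a real run would execute these oc commands.
    echo "oc login https://$api:8443 --token=\$TOKEN"
    echo "oc apply -f app-deployment.yaml"
  done
}

deploy_everywhere api.dc1.example.com api.dc2.example.com
```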
