
Re: Master and node configuration roadmap





On Mon, Nov 30, 2015 at 9:42 PM, Jordan Liggitt <jliggitt redhat com> wrote:




On Nov 25, 2015, at 5:00 PM, Jason DeTiberus <jdetiber redhat com> wrote:



On Wed, Nov 25, 2015 at 4:18 PM, Clayton Coleman <ccoleman redhat com> wrote:


On Nov 25, 2015, at 2:03 PM, Jordan Liggitt <jliggitt redhat com> wrote:

Quick thoughts, will add more later.

1. At a minimum, we'd need all the stuff under the etcdClientInfo and etcdStorageConfig stanzas (k8s and openshift prefixes, storage versions, etc)
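
For reference, those two stanzas in master-config.yaml look roughly like this today (the etcd URL and file names are just illustrative):

etcdClientInfo:
  ca: ca.crt
  certFile: master.etcd-client.crt
  keyFile: master.etcd-client.key
  urls:
  - https://etcd.example.com:4001
etcdStorageConfig:
  kubernetesStoragePrefix: kubernetes.io
  kubernetesStorageVersion: v1
  openShiftStoragePrefix: openshift.io
  openShiftStorageVersion: v1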

2. Shared config could ease distribution, but could make rolling upgrades harder (harder to update config for one master and cycle it without affecting other masters)

3. I still don't know if we'd want running masters/controllers to respond to config changes dynamically, or only grab config at start. Everything assumes a config snapshot at start now; changing that would require quite a bit of rework (and I'm not sure I'd want it, for rolling update reasons, etc.)


I still haven't heard a reason to switch to dynamic config, so let's continue to assume configs are static from start onwards (read once).

I have no objections to this; triggering a rolling restart on config change for masters/controllers is not something I see as a limitation.
 

4. Are all values in the config homogenous for all masters? Would we need shared config + individual config?
For the masters...  I see the following as potentially needing to be overridden on a per-host basis, though I don't see any reason why we couldn't do that overriding in the local config (as long as the local config overrides are merged with the shared config):
- controllers
- kubernetesMasterConfig (mainly for the masterIP)
- any servingInfo block
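
As a rough sketch, a per-host override for a master might only need to carry something like this (values are illustrative, and this assumes the local overrides get merged over the shared config):

controllers: "*"
kubernetesMasterConfig:
  masterIP: 10.0.1.11
servingInfo:
  bindAddress: 0.0.0.0:8443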

Merged configs are really hard to reason about… I don't think I want general support for them. 

I'm just trying to think from a user management standpoint...  Suppose I want all of my nodes to have a podLimit of 100, with the exception of a handful in a "region" reserved for lower tenancy. It would seem silly to have to set the podLimit on all of the nodes individually.

I think we would want to clearly outline which config items are cluster level (disabledFeatures, corsAllowedOrigins, masterPublicURL, publicURL, imageConfig, etc) vs items that are host level (servingInfo, etcdConfig, nodeIP, nodeName, etc) vs items that would be subject to merging (controllers, kubernetes*Config, etc).
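
To make the podLimit example concrete: the shared config could carry the cluster-wide default, and the low-tenancy "region" a small override merged on top of it. A sketch, using kubeletArguments as a stand-in for a pod limit (values illustrative):

# shared, cluster-level node config
kubeletArguments:
  max-pods:
  - "100"

# override merged in for nodes in the low-tenancy region
kubeletArguments:
  max-pods:
  - "20"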

 
Also had the thought that referenced file resolution becomes tricky from etcd. Are the files also in etcd? Resolved locally? Relative to what?

I don't think we want to force the CA/certs to be in etcd, but having that be an option would greatly simplify making sure the CA files have some redundancy instead of residing only on the first master.
 



I see more cases for overriding on a per-host basis for the nodes, and that is something we would probably want to signal a node restart for to pick up changes...  Even then, I'm not sure we need dynamic config as much as a way to signal the node to reload its config on request from the cluster.

 



On Mon, Nov 23, 2015 at 2:22 PM, Jason DeTiberus <jdetiber redhat com> wrote:
On Mon, Nov 23, 2015 at 1:59 PM, Clayton Coleman <ccoleman redhat com> wrote:
Jordan can detail more as he has time, but

1. We are looking at moving some config into etcd over the next few months (maybe not for 1.2).  That might bootstrap a default into the cluster and you could separate the unrelated bits.  I don't know whether admins want the CA in etcd though - that complicates some of this, but still probably addresses your core issue.

Agreed on the CA; that is probably best left external. My main goal is to limit the initial bootstrap config needed as much as possible. In my mind, the ideal situation is to be able to spin up an atomic host with a few settings configured through cloud-init and be part of a functional OpenShift cluster by the completion of cloud-init.

There will still be some orchestration needed for cluster configuration, but that orchestration (imho) should be limited to initial cluster bootstrapping and other tasks that aren't strictly a part of the host joining a cluster.
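
To sketch what "a few settings through cloud-init" might look like in practice (the paths, service name, and how the kubeconfig gets delivered are all placeholders here, not a worked-out design):

#cloud-config
write_files:
- path: /etc/origin/node/node-config.yaml
  content: |
    apiVersion: v1
    kind: NodeConfig
    masterKubeConfig: node.kubeconfig
- path: /etc/origin/node/node.kubeconfig
  permissions: '0600'
  content: |
    # per-host credentials delivered at provision time (contents elided)
runcmd:
- [systemctl, enable, origin-node]
- [systemctl, start, origin-node]

The idea being that everything beyond that gets pulled from the cluster once the node registers.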
 
2. We could potentially allow more "on demand defaulting" as the admin chooses.  Not sure whether that makes it easier in the long run without 1.

Not following exactly what you are suggesting here. Are these defaults that the cluster admin would pre-configure, or generic defaults the system would apply? If the former, then I believe that would help out a bit. For the latter, we already do that in some cases, but the templating for it is quite a bit uglier than just handling defaults within openshift_facts.
 

Any other suggestions?

On Nov 23, 2015, at 12:43 PM, Jason DeTiberus <jdetiber redhat com> wrote:

Currently, our master and node configurations keep growing, and as we start supporting larger environments, managing these items across the entire cluster will become more difficult and problematic.

Is there anything on our configuration roadmap to reduce the amount of configuration to bootstrap a host and join a cluster?

Ideally, for masters, I'd like to be able to just specify the following settings:
apiVersion: v1
kind: MasterConfig
etcdClientInfo:
  ca: ca.crt
  certFile: master.etcd-client.crt
  keyFile: master.etcd-client.key
  urls:
masterClients:
  externalKubernetesKubeConfig: ""
  openshiftLoopbackKubeConfig: openshift-master.kubeconfig

Everything else seems like it does not need to be present for actually starting up the master and connecting to the cluster. It would be nice to be able to set the additional items only once, on a per-cluster or per-master basis, in a way other than modifying a config file and restarting the master services.

Similarly for nodes, it would be nice to only have to specify the following items:
apiVersion: v1
kind: NodeConfig
masterKubeConfig: node.kubeconfig

and have the rest of the settings be done on a cluster/host basis through the cluster itself.
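
For comparison, the kind of per-cluster/per-host settings that would then come from the cluster rather than the local file might look something like this (field names from the current NodeConfig; values purely illustrative):

allowDisabledDocker: false
dnsDomain: cluster.local
imageConfig:
  format: openshift/origin-${component}:${version}
  latest: false
servingInfo:
  bindAddress: 0.0.0.0:10250
volumeDirectory: /var/lib/origin/openshift.local.volumes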

Thoughts?

--
Jason DeTiberus
_______________________________________________
dev mailing list
dev lists openshift redhat com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev



--
Jason DeTiberus






--
Jason DeTiberus



