1. At a minimum, we'd need everything under the etcdClientInfo and etcdStorageConfig stanzas (the k8s and openshift storage prefixes, storage versions, etc.)
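For reference, a minimal sketch of what those two stanzas look like in a typical master-config.yaml (hostnames, ports, and file paths here are placeholders, not recommendations):

```yaml
etcdClientInfo:
  ca: ca.crt
  certFile: master.etcd-client.crt
  keyFile: master.etcd-client.key
  urls:
    - https://etcd.example.com:4001
etcdStorageConfig:
  kubernetesStoragePrefix: kubernetes.io
  kubernetesStorageVersion: v1
  openShiftStoragePrefix: openshift.io
  openShiftStorageVersion: v1
```

Everything above is exactly the kind of per-cluster plumbing that every master has to carry today just to come up.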
2. Shared config could ease distribution, but could make rolling upgrades harder (harder to update config for one master and cycle it without affecting other masters)
3. I still don't know whether we'd want running masters/controllers to respond to config changes dynamically, or only to read config at start. Everything assumes a config snapshot at start today, and changing that would require quite a bit of rework (and I'm not sure I'd want it, for rolling-update reasons, etc.)
I still haven't heard a reason to switch to dynamic config, so let's continue to assume they are static from start onwards (read once).
I have no objections to this; triggering a rolling restart on config change for masters/controllers is not something I see as a limitation.
4. Are all values in the config homogeneous for all masters? Would we need shared config + individual config?
For the masters... I see the following as potentially needing to be overridden on a per-host basis, though I don't see any reason why we couldn't do that overriding in the local config (as long as the local config overrides are merged with the shared config):
- kubernetesMasterConfig (mainly for the masterIP)
- any servingInfo block
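To make the per-host case concrete, a hypothetical local override file (assuming local overrides merged over a shared base, which is not something that exists today) might contain nothing but:

```yaml
# hypothetical per-host override, merged over shared cluster config
kubernetesMasterConfig:
  masterIP: 10.0.1.12
servingInfo:
  bindAddress: 10.0.1.12:8443
```

Everything else would come from the shared config, which is also what makes the merge semantics the hard part of this proposal.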
Merged configs are really hard to reason about… I don't think I want general support for them.
It also occurred to me that resolving referenced files gets tricky once config lives in etcd. Are the files also stored in etcd? Resolved locally? Relative to what path?
I see more cases for per-host overrides on the nodes, and changes there are something we might want to signal a node restart for in order to pick up. Even then, I'm not sure we need dynamic config so much as a way for the cluster to signal the node to reload its config on request.
1. We are looking at moving some config into etcd over the next few months (maybe not for 1.2). That might bootstrap a default into the cluster, and you could separate out the unrelated bits. I don't know whether admins want the CA in etcd, though - that complicates some of this, but it still probably addresses your core issue.
Agreed on the CA, that is probably best left external. My main goal is to limit the initial bootstrap config as much as possible. In my mind the ideal situation is to be able to spin up an atomic host with a few settings configured through cloud-init and have it be part of a functional OpenShift cluster by the time cloud-init completes.
There will still be some orchestration needed for cluster configuration, but that orchestration (imho) should be limited to initial cluster bootstrapping and other tasks that aren't strictly a part of the host joining a cluster.
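As a sketch of that ideal, the cloud-init user-data for such a host might be little more than the following (the bootstrap file path, its keys, and the unit name are all hypothetical, not an existing interface):

```yaml
#cloud-config
write_files:
  # hypothetical minimal bootstrap file; everything else would be
  # pulled from the cluster after the host joins
  - path: /etc/origin/bootstrap.yaml
    content: |
      clusterURL: https://master.example.com:8443
      etcdURLs:
        - https://etcd.example.com:4001
runcmd:
  - systemctl enable --now origin-node
```

Anything beyond this (cert distribution, cluster-wide settings) would be the orchestration layer's job, per the point above.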
2. We could potentially allow more "on demand defaulting" as the admin chooses. Not sure whether that makes it easier in the long run without 1.
Not following exactly what you are suggesting here. Are these defaults that the cluster admin would pre-configure, or that the system would use as generic defaults? If the former, I believe that would help a bit. If the latter, we already do that in some cases, but the templating for it is quite a bit uglier than just handling defaults within openshift_facts.
Currently our master and node configurations keep growing, and as we start supporting larger environments, managing these items across the entire cluster will become increasingly difficult and error-prone.
Is there anything on our configuration roadmap to reduce the amount of configuration to bootstrap a host and join a cluster?
Ideally, for masters, I'd like to be able to just specify the following settings:
Everything else seems like it does not need to be present just to start the master and connect to the cluster. It would be nice to be able to set the additional items only once, on a per-cluster or per-master basis, in a way other than modifying a config file and restarting the master services.
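For illustration only, a pared-down master bootstrap along these lines might conceivably look something like this (these field names are a guess at a reduced shape, not the current master-config.yaml schema):

```yaml
# hypothetical minimal master bootstrap config
masterURL: https://master.example.com:8443
etcdClientInfo:
  ca: ca.crt
  certFile: master.etcd-client.crt
  keyFile: master.etcd-client.key
  urls:
    - https://etcd.example.com:4001
```

The rest of today's config file would be fetched from (or defaulted by) the cluster once the master is connected.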
Similarly for nodes, it would be nice to only have to specify the following items:
and have the rest of the settings managed on a cluster or per-host basis through the cluster itself.
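By analogy with the master case, a minimal node bootstrap might conceivably shrink to something like this (again, an illustrative guess, not the current node-config.yaml schema):

```yaml
# hypothetical minimal node bootstrap config
nodeName: node1.example.com
masterKubeConfig: node.kubeconfig
```

with all remaining node settings delivered from the cluster and picked up via the reload-on-signal mechanism discussed above.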