
Re: Hybrid Cloud Hostname Issues (AWS, Co-Lo)




On Thu, Aug 25, 2016 at 8:48 AM, Jason DeTiberus <jdetiber redhat com> wrote:

For etcd, the host would need to be removed from the etcd cluster, and care would need to be taken to avoid losing quorum while doing so. Hopefully someone else can provide links to docs on this, but the basic gist is that you would use etcdctl to remove the host from the running etcd config.

There's the "etcdctl member" command for this; however, I can't find documentation for etcd v2, which is what we're using in Origin: https://github.com/coreos/etcd/blob/master/etcdctl/ctlv2/command/member_commands.go

For etcd v3, it's documented here (search for "member remove"): https://github.com/coreos/etcd/tree/master/etcdctl
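For reference, the removal usually looks something like the sketch below (the member ID, endpoint and cert paths are placeholders for your environment, and older etcdctl builds may spell the endpoint flag --endpoint or -C/--peers):

    # find the ID of the member being removed
    etcdctl --endpoints=https://etcd1.example.com:2379 \
        --ca-file=/etc/etcd/ca.crt --cert-file=/etc/etcd/peer.crt --key-file=/etc/etcd/peer.key \
        member list

    # remove it by ID (not by name)
    etcdctl --endpoints=https://etcd1.example.com:2379 \
        --ca-file=/etc/etcd/ca.crt --cert-file=/etc/etcd/peer.crt --key-file=/etc/etcd/peer.key \
        member remove 6e3bd23ae5f1eae0

    # roughly the same operation against the v3 API (note the cert flags are named
    # --cacert/--cert/--key there):
    # ETCDCTL_API=3 etcdctl --endpoints=https://etcd1.example.com:2379 member remove 6e3bd23ae5f1eae0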

It would be wise to take caution before running this command: make sure you have backups of your etcd state (taking snapshots of all the VMs in the etcd cluster seems like a good way to do this).
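In addition to VM snapshots, one hedged example of a pre-removal backup on an etcd host (the paths below are the usual defaults, adjust as needed):

    # copy the v2 keyspace out of the data dir before touching the member list
    etcdctl backup --data-dir /var/lib/etcd --backup-dir /var/lib/etcd-backup-$(date +%Y%m%d)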

For masters, the remaining masters would need to have the master count updated in their config file, and there may be an additional update needed to remove the host from the kubernetes service, though iirc that last step might not be needed for 3.3.
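As a rough illustration (file locations and service names below assume a standard advanced install, so adjust to your layout), the relevant knob is kubernetesMasterConfig.masterCount in master-config.yaml, and the kubernetes service endpoints can be checked with oc:

    # on each remaining master: lower the count, then restart the master service
    grep masterCount /etc/origin/master/master-config.yaml
    #   masterCount: 2
    systemctl restart atomic-openshift-master   # or origin-master, depending on the install

    # verify the removed master is no longer listed behind the kubernetes service
    oc get endpoints kubernetes -n default -o yaml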

Andrew, could you provide more context?

Did you mean Andrew Butcher? If so I've copied him; if not, sorry Andrew for the noise :)

 

Frank 
Systems Engineer

On Tue, Aug 16, 2016 at 1:56 AM, Jason DeTiberus <jdetiber redhat com> wrote:


On Mon, Aug 15, 2016 at 12:09 PM, Frank Liauw <frank vsee com> wrote:
Thanks Jordan. 

Is there a means of setting nodeName in the Ansible playbooks when performing an Advanced Install or scaleup? If so, I can add new nodes with the appropriate nodeName set and remove the existing ones.

Setting the openshift_hostname variable should override the nodeName value when using the playbooks, both for install and scaleup.
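In case a concrete example helps, the inventory entries would look roughly like this (the hostnames are made up, and openshift_public_hostname is optional), followed by the scaleup playbook:

    # inventory: register the node under the name you want instead of the EC2 private DNS name
    [new_nodes]
    ip-10-0-1-23.ec2.internal openshift_hostname=node03.example.com openshift_public_hostname=node03.example.com

    # then run the scaleup playbook from your openshift-ansible checkout, e.g.
    ansible-playbook -i /path/to/inventory playbooks/byo/openshift-cluster/scaleup.yml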

 

Otherwise, would you care to provide some details on which node client credentials need to be updated? The documentation revolves largely around assisted setups; I didn't manage to find anything on manual node configuration and introduction into an existing cluster (I might look into how scaleup works in Ansible for some ideas).

I'm cc'ing Andrew Butcher; work that he has been doing recently should allow you to replace the certificates/kubeconfigs that are used.
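As a starting point (the flags and default CA paths below are from memory for an advanced install, so double-check them), a fresh node config/kubeconfig for the new name can be generated on a master with oadm, or certificates can be redeployed via openshift-ansible:

    # generate a node config + client kubeconfig for the new node name
    oadm create-node-config \
        --node-dir=/etc/origin/generated-configs/node03.example.com \
        --node=node03.example.com \
        --hostnames=node03.example.com,10.0.1.23 \
        --certificate-authority=/etc/origin/master/ca.crt \
        --signer-cert=/etc/origin/master/ca.crt \
        --signer-key=/etc/origin/master/ca.key \
        --signer-serial=/etc/origin/master/ca.serial.txt \
        --master=https://master.example.com:8443

    # or, via openshift-ansible, something along the lines of:
    # ansible-playbook -i /path/to/inventory playbooks/byo/openshift-cluster/redeploy-certificates.yml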
 

Frank 
Systems Engineer

On Mon, Aug 15, 2016 at 11:32 PM, Jordan Liggitt <jliggitt redhat com> wrote:
Node names are immutable in the API. Changing the node name would require updating the node client credentials; the node would register a new node object with the API when it started, and any pods scheduled to the old node name would get evicted once the old node object's status got stale enough.
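In practice the tail end of that is just cleaning up the stale object once the renamed node has registered (the node name below is a placeholder):

    oc get nodes
    oc delete node ip-10-0-1-23.ec2.internal   # the old, auto-registered name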



On Aug 15, 2016, at 11:25 AM, Frank Liauw <frank vsee com> wrote:

Thanks Jason. 

Can I update nodeName in config.yaml and restart the EC2 nodes? Will that update the metadata of my nodes automatically across the entire cluster? 

Frank 
Systems Engineer

On Mon, Aug 15, 2016 at 11:10 PM, Jason DeTiberus <jdetiber redhat com> wrote:


On Mon, Aug 15, 2016 at 4:17 AM, Frank Liauw <frank vsee com> wrote:
Hi All, 

I have a 5-node OpenShift cluster split across 2 AZs, our colocation center and AWS, with a master in each AZ and the rest being nodes.

We set up our cluster with the Ansible script, and somewhere during the setup the EC2 instances' private hostnames were picked up and registered as the node names of the nodes in AWS. That's a bit annoying, as it deviates from our hostname conventions, is rather difficult to read, and isn't something that can be changed post-setup.

It didn't help that parts of the admin operations seem to be using the EC2 instances' private hostnames, so I get errors like this:


Scheduling system-related pods on the AWS instances works (router, fluentd), though any build pod that lands on an EC2 instance never gets built and eventually times out. My suspicion is that the build monitoring depends on the node hostname, which can't be reached from our colocation center master (which we use as the primary), and hence breaks.

I'm unable to find much detail on this behaviour. 
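For what it's worth, one quick way to test that suspicion from the colo master would be something along these lines (the node name is a placeholder; the master talks to the kubelet on port 10250 for things like build logs):

    oc get nodes -o wide
    getent hosts ip-10-0-1-23.ec2.internal || echo "does not resolve from here"
    curl -k https://ip-10-0-1-23.ec2.internal:10250/healthz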

1. Can we manually change the hostname of certain nodes? 

The nodeName value overrides this; however, if you are relying on cloud provider integration there are limitations, see below.
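For reference, that's the nodeName field in the node's config (default path shown for an advanced install, adjust for yours):

    grep nodeName /etc/origin/node/node-config.yaml
    #   nodeName: node03.example.com
    systemctl restart atomic-openshift-node   # or origin-node, depending on the install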
 

2. How do we avoid registering EC2 nodes with their private hostnames? 

If you are willing to give up the native cloud provider integration (the ability to leverage EBS volumes as PVs), then you can override this using the openshift_hostname variable when installing the cluster. At least as of Kubernetes/Origin 1.2, the nodeName value in the node config needed to match the private DNS name of the host.
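If you do keep the cloud provider integration, the private DNS name that nodeName has to match can be read from the standard instance metadata endpoint on each EC2 host:

    curl -s http://169.254.169.254/latest/meta-data/local-hostname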

--
Jason DeTiberus

_______________________________________________
users mailing list
users lists openshift redhat com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users




--
Jason DeTiberus




--
Jason DeTiberus

_______________________________________________
users mailing list
users lists openshift redhat com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users




--
Jonathan Yu, P.Eng. / Software Engineer, OpenShift by Red Hat / Twitter (@jawnsy) is the quickest way to my heart

“A master in the art of living draws no sharp distinction between his work and his play; his labor and his leisure; his mind and his body; his education and his recreation. He hardly knows which is which. He simply pursues his vision of excellence through whatever he is doing, and leaves others to determine whether he is working or playing. To himself, he always appears to be doing both.” — L. P. Jacks, Education through Recreation (1932), p. 1
