
Re: Adding a node to the cluster without ansible

On Thu, Feb 4, 2016 at 1:10 PM, Clayton Coleman <ccoleman redhat com> wrote:
In general, you should be using ansible to roll out your
configuration, in which case you'd be sure to pick up the config.
Will defer to others who work more with the ansible stuff.

Maintaining the cluster config in Ansible would definitely make the process easier, but we realize that it will not be the solution for everyone.  The long-term goal is for the tooling (both Ansible-based, and the tooling wrapping Ansible) to detect changes made by users and present the user with a choice: either overwrite the config (Ansible as the source of record) or adjust to the changes and update the list of stored facts on the host (host as the source of record).
 

On Thu, Feb 4, 2016 at 1:04 PM, Srinivas Naga Kotaru (skotaru)
<skotaru cisco com> wrote:
> Will Ansible touch the existing configuration, and is there any chance it
> will overwrite custom config we have put in place?
>
> Just adding a new node, the steps required look scary to me (both ansible and
> manual). Can we do a better job here by automating this task while guaranteeing
> no disruption to existing cluster health?
>
> My worry is about real prod environments, where uptime must always be
> guaranteed under SLAs.
>
> --
> Srinivas Kotaru
>
> From: <users-bounces lists openshift redhat com> on behalf of v
> <vekt0r7 gmx net>
> Date: Thursday, February 4, 2016 at 7:51 AM
> Cc: "users lists openshift redhat com" <users lists openshift redhat com>
> Subject: Re: Adding a node to the cluster without ansible
>
> Nice one, scaleup.yml is a very good idea!
>
> origin-sdn-ovs is installed on the node, but it was a 1.0.x node. After the
> update to 1.1 the error is gone. :)
>
> Will create a PR for the instructions.
>
> On 2016-02-04 at 16:40, Jason DeTiberus wrote:
>>
>> I would like to add an additional node to the cluster without using
>> ansible.
>> (We have modified our cluster in many ways and don't dare run ansible
>> because it might break our cluster.)
>
>
> The scale-up playbooks take this into account.
>
> They will query the master, generate and distribute the new certificates
> for the new node, and then run the config playbooks on the new nodes only.
>
> To take advantage of this, you will need to add a group to your inventory
> called [new_nodes] and configure the hosts there as you would for a new
> install under the [nodes] group (see the sketch below).
> Then you would run the playbooks/byo/openshift-cluster/scaleup.yml playbook.
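>
> As a minimal sketch (the hostname, label, and inventory path here are
> placeholders, and the exact variables depend on your install; the
> [OSEv3:children] layout is an assumption based on the usual BYO inventory),
> the inventory change could look like:
>
>     [OSEv3:children]
>     masters
>     nodes
>     new_nodes
>
>     [new_nodes]
>     node3.example.com openshift_node_labels="{'region': 'primary'}"
>
> followed by:
>
>     ansible-playbook -i /path/to/inventory \
>         playbooks/byo/openshift-cluster/scaleup.yml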
>
>
> On Thu, Feb 4, 2016 at 9:55 AM, v <vekt0r7 gmx net> wrote:
>>
>> All right, it looks like it works. These are the commands to run on the
>> master with 3.1:
>>
>> oadm create-api-client-config \
>>     --certificate-authority=/etc/origin/master/ca.crt \
>>     --client-dir=/root/xyz4 \
>>     --master=https://xyz1.eu:8443 \
>>     --signer-cert=/etc/origin/master/ca.crt \
>>     --signer-key=/etc/origin/master/ca.key \
>>     --signer-serial=/etc/origin/master/ca.serial.txt \
>>     --groups=system:nodes \
>>     --user=system:node:xyz4.eu
>>
>> oadm create-node-config \
>>     --node-dir=/root/xyz4 \
>>     --node=xyz4.eu \
>>     --hostnames=xyz4.eu,123.456.0.5 \
>>     --certificate-authority=/etc/origin/master/ca.crt \
>>     --signer-cert=/etc/origin/master/ca.crt \
>>     --signer-key=/etc/origin/master/ca.key \
>>     --signer-serial=/etc/origin/master/ca.serial.txt \
>>     --master=https://xyz1.eu:8443 \
>>     --node-client-certificate-authority=/etc/origin/master/ca.crt
>>
>>
>> Then I copied all the created files to /etc/origin/node on the new node,
>> took node-config.yaml from an old, working node, edited the hostnames, and
>> used it as node-config.yaml on the new node.
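>>
>> As a rough sketch of those copy steps (the paths are the ones from the
>> commands above; the scp target and the origin-node service name are
>> assumptions based on an Origin RPM install):
>>
>>     # on the master: ship the generated credentials to the new node
>>     scp /root/xyz4/* root@xyz4.eu:/etc/origin/node/
>>
>>     # on the new node: after adjusting node-config.yaml, enable and start it
>>     systemctl enable origin-node
>>     systemctl start origin-node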
>>
>> It seems to work. The only thing that bugs me is that I'm being spammed
>> with the following errors on the new node:
>> manager.go:313] NetworkPlugin redhat/openshift-ovs-subnet failed on the
>> status hook for pod 'xy-router-2-imubn' - exit status 1
>> manager.go:313] NetworkPlugin redhat/openshift-ovs-subnet failed on the
>> status hook for pod 'ipf-default-1-dp4vc' - exit status 1
>
>
> Do you mind submitting a PR or an issue to the openshift-docs repo for these
> steps? https://github.com/openshift/openshift-docs
>
>>
>>
>> Can anyone tell me whether this is something important, or whether there
>> are additional steps that I have missed?
>
>
> It sounds like you are missing the -sdn-ovs package on the new node host. If
> you are running Origin, then it would be origin-sdn-ovs, otherwise it is
> atomic-enterprise-sdn-ovs.
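>
> For example, something like the following should work on a yum-based host
> (package names as above; restarting the node service afterwards is an
> assumption on my part):
>
>     yum install -y origin-sdn-ovs    # or atomic-enterprise-sdn-ovs
>     systemctl restart origin-node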
>
>>
>>
>> Regards,
>> v
>
>
>

_______________________________________________
users mailing list
users lists openshift redhat com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users



--
Jason DeTiberus
