
Newbie [CentOS 7.7] - OKD 3.11: Host network already using 172.16.x.x



Hello,

I'm pretty new to OKD and CentOS and set up a test Kubernetes cluster. I followed the small HOWTO from: https://github.com/arunvdsharma/openshift-centos.git
It's a small wrapper around the official OKD Ansible playbooks (openshift-ansible).
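
For reference, as far as I understand it, the wrapper basically drives the usual openshift-ansible playbooks, roughly like this (the exact script names in the repo may differ):

ansible-playbook -i inventory.ini openshift-ansible/playbooks/prerequisites.yml
ansible-playbook -i inventory.ini openshift-ansible/playbooks/deploy_cluster.yml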

My Test setup:

* 1 x Master
* 2 x Infra
* 2 x Compute


Ansible runs more or less without big issues across the nodes and brings most parts online, except the network.
I have a lot of messages like:

Feb 4 11:34:48 fra-test-node-master origin-node: W0204 11:34:48.663409 33214 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d


but I've found it mentioned on many sites, and most of them say it's pretty normal.
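
If I understood correctly, that message should go away once openshift-sdn writes its CNI config onto the node, so I keep checking with (the exact filename is just my assumption for the SDN plugin):

ls -l /etc/cni/net.d/
# expecting something like 80-openshift-network.conf once the SDN pod is running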

Also:

94a0d2e374db4a71b81896)"), skipping: failed to "StartContainer" for "controllers" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=controllers pod=master-controllers-fra-test-node-master.dev.ina_kube-system(3210f8756194a0d2e374db4a71b81896)

or:

Feb 4 11:36:16 fra-test-node-master origin-node: exec openshift start master controllers --config=/etc/origin/master/master-config.yaml --listen=https://0.0.0.0:8444 --loglevel=${DEBUG_LOGLEVEL:-2}

Feb 4 11:36:16 fra-test-node-master origin-node: ] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:master-config ReadOnly:false MountPath:/etc/origin/master/ SubPath: MountPropagation:<nil>} {Name:master-cloud-provider ReadOnly:false MountPath:/etc/origin/cloudprovider/ SubPath: MountPropagation:<nil>} {Name:signature-import ReadOnly:false MountPath:/etc/containers/registries.d/ SubPath: MountPropagation:<nil>} {Name:kubelet-plugins ReadOnly:false MountPath:/usr/libexec/kubernetes/kubelet-plugins SubPath: MountPropagation:0xc4221fb160} {Name:master-pki ReadOnly:false MountPath:/etc/pki SubPath: MountPropagation:<nil>} {Name:host-localtime ReadOnly:false MountPath:/etc/localtime SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:healthz,Port:8444,Host:,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,} Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.

Feb 4 11:36:16 fra-test-node-master origin-node: I0204 11:36:16.863315 33214 kuberuntime_manager.go:757] checking backoff for container "controllers" in pod "master-controllers-fra-test-node-master.dev.ina_kube-system(3210f8756194a0d2e374db4a71b81896)"

Feb 4 11:36:16 fra-test-node-master origin-node: I0204 11:36:16.863616 33214 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=controllers pod=master-controllers-fra-test-node-master.dev.ina_kube-system(3210f8756194a0d2e374db4a71b81896)
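
To see why the controllers container keeps crash-looping, I've been reading the static pod logs on the master; if I got the 3.11 tooling right, that should be:

/usr/local/bin/master-logs controllers controllers
/usr/local/bin/master-logs api api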


However, I think the big problem is that the host network where I installed my test cluster is in 172.16.x.x, which overlaps with the ranges OKD uses by default. I can't change the company network, and the later production network will be in the same range.
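
For reference, if I read the docs right, the 3.11 defaults are (please correct me if I got them wrong):

osm_cluster_network_cidr=10.128.0.0/14    # pod/SDN network
openshift_portal_net=172.30.0.0/16        # service network, inside 172.16.0.0/12
# plus the Docker bridge on 172.17.0.0/16, also inside 172.16.0.0/12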

So the inventory.ini looks like:

[OSEv3:children]
masters
nodes
etcd

[masters]
172.16.0.86

[etcd]
172.16.0.86
172.16.0.74
172.16.0.75
172.16.0.76
172.16.0.77

[nodes]
172.16.0.86 openshift_node_group_name='node-config-master'
172.16.0.74 openshift_node_group_name='node-config-infra'
172.16.0.75 openshift_node_group_name='node-config-infra'
172.16.0.76 openshift_node_group_name='node-config-compute'
172.16.0.77 openshift_node_group_name='node-config-compute'

...

[OSEv3:vars]
...
openshift_public_hostname=console.dev.ina
openshift_master_default_subdomain=apps.dev.ina
...

So my newbie question is: how do I get around this, or change the address ranges OKD uses, without running into routing issues?
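
From what I've read so far, these ranges can be overridden in [OSEv3:vars] before the first deploy; this is just my untested sketch with made-up example values:

[OSEv3:vars]
...
# pod/SDN network (default 10.128.0.0/14), split into per-node subnets
osm_cluster_network_cidr=10.128.0.0/14
osm_host_subnet_length=9
# service network (default 172.30.0.0/16); must not overlap the host or pod networks
openshift_portal_net=10.200.0.0/16
...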

I think I misunderstood the openshift_portal_net parameter, because after changing it and redeploying, I had much more trouble:

openshift_portal_net=192.168.99.0/24
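
Maybe a /24 is simply too small for the service network, or the value has to be set before the very first install? To see what actually landed on the master, I check the generated config (assuming that's the right place to look):

grep -A6 networkConfig /etc/origin/master/master-config.yaml
# expecting the serviceNetworkCIDR and clusterNetworks entries in there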


It would be great if someone could help me :-)

cu denny

