
Re: openshift-ansible release-3.10 - Install fails with control plane pods



Well, I found two options for the inventory:

openshift_ip

# host group for masters
[masters]
master openshift_ip=192.168.60.150
# host group for etcd
[etcd]
master openshift_ip=192.168.60.150
# host group for nodes, includes region info
[nodes]
master openshift_node_group_name='node-config-master' openshift_ip=192.168.60.150
infra openshift_node_group_name='node-config-infra' openshift_ip=192.168.60.151
app1 openshift_node_group_name='node-config-compute' openshift_ip=192.168.60.152
app2 openshift_node_group_name='node-config-compute' openshift_ip=192.168.60.153
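
As a quick sanity check for the first option, an ad-hoc call to Ansible's debug module should print the per-host value for every node (assuming the inventory file is simply called "hosts"; adjust the path to yours):

ansible -i hosts nodes -m debug -a "var=openshift_ip"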


and flannel

openshift_use_openshift_sdn=false 
openshift_use_flannel=true 
flannel_interface=eth1
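
These are cluster-wide settings, so as far as I can tell they belong in the [OSEv3:vars] section rather than on individual hosts; roughly like this:

[OSEv3:vars]
openshift_use_openshift_sdn=false
openshift_use_flannel=true
flannel_interface=eth1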


The etcd logs look good now, but the problem still seems to be that no SSL port is open.
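
A quick way to confirm which ports are actually open on the master, and on which address they are bound (just a diagnostic sketch; the hostname is the one from the logs below):

ss -tlnp | grep -E ':(8443|2379)'
curl -k https://master.vnet.de:8443/healthz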

Here are some lines I pulled from journalctl on the master:

Sep 02 19:17:38 master.vnet.de origin-node[6300]: I0902 19:17:38.720037    6300 certificate_manager.go:216] Certificate rotation is enabled.
Sep 02 19:17:38 master.vnet.de origin-node[6300]: I0902 19:17:38.720453    6300 manager.go:154] cAdvisor running in container: "/sys/fs/cgroup/cpu,cpuacct"
Sep 02 19:17:38 master.vnet.de origin-node[6300]: I0902 19:17:38.738257    6300 certificate_manager.go:287] Rotating certificates
Sep 02 19:17:38 master.vnet.de origin-node[6300]: E0902 19:17:38.752531    6300 certificate_manager.go:299] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post https://master.vnet.de:8443/apis/certificates.k8s.io/v
Sep 02 19:17:38 master.vnet.de origin-node[6300]: I0902 19:17:38.778490    6300 fs.go:142] Filesystem UUIDs: map[570897ca-e759-4c81-90cf-389da6eee4cc:/dev/vda2 b60e9498-0baa-4d9f-90aa-069048217fee:/dev/dm-0 c39c5bed-f37c-4263-bee8-aeb6a6659d7b:/dev/dm-1]
Sep 02 19:17:38 master.vnet.de origin-node[6300]: I0902 19:17:38.778506    6300 fs.go:143] Filesystem partitions: map[tmpfs:{mountpoint:/dev/shm major:0 minor:19 fsType:tmpfs blockSize:0} /dev/mapper/VolGroup00-LogVol00:{mountpoint:/var/lib/docker/overlay2 major:253 minor
Sep 02 19:17:38 master.vnet.de origin-node[6300]: I0902 19:17:38.780130    6300 manager.go:227] Machine: {NumCores:1 CpuFrequency:2808000 MemoryCapacity:3974230016 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:6c1357b9e4a54b929e1d09cacf37e
Sep 02 19:17:38 master.vnet.de origin-node[6300]: I0902 19:17:38.783655    6300 manager.go:233] Version: {KernelVersion:3.10.0-862.2.3.el7.x86_64 ContainerOsVersion:CentOS Linux 7 (Core) DockerVersion:1.13.1 DockerAPIVersion:1.26 CadvisorVersion: CadvisorRevision:}
Sep 02 19:17:38 master.vnet.de origin-node[6300]: I0902 19:17:38.784251    6300 server.go:621] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Sep 02 19:17:38 master.vnet.de origin-node[6300]: I0902 19:17:38.784524    6300 container_manager_linux.go:242] container manager verified user specified cgroup-root exists: /
Sep 02 19:17:38 master.vnet.de origin-node[6300]: I0902 19:17:38.784533    6300 container_manager_linux.go:247] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true C
Sep 02 19:17:38 master.vnet.de origin-node[6300]: I0902 19:17:38.784609    6300 container_manager_linux.go:266] Creating device plugin manager: true
Sep 02 19:17:38 master.vnet.de origin-node[6300]: I0902 19:17:38.784616    6300 manager.go:102] Creating Device Plugin manager at /var/lib/kubelet/device-plugins/kubelet.sock
Sep 02 19:17:38 master.vnet.de origin-node[6300]: I0902 19:17:38.784714    6300 state_mem.go:36] [cpumanager] initializing new in-memory state store
Sep 02 19:17:38 master.vnet.de origin-node[6300]: I0902 19:17:38.784944    6300 state_file.go:82] [cpumanager] state file: created new state file "/var/lib/origin/openshift.local.volumes/cpu_manager_state"
Sep 02 19:17:38 master.vnet.de origin-node[6300]: I0902 19:17:38.784988    6300 server.go:895] Using root directory: /var/lib/origin/openshift.local.volumes
Sep 02 19:17:38 master.vnet.de origin-node[6300]: I0902 19:17:38.785013    6300 kubelet.go:273] Adding pod path: /etc/origin/node/pods
Sep 02 19:17:38 master.vnet.de origin-node[6300]: I0902 19:17:38.785046    6300 file.go:52] Watching path "/etc/origin/node/pods"
Sep 02 19:17:38 master.vnet.de origin-node[6300]: I0902 19:17:38.785054    6300 kubelet.go:298] Watching apiserver
Sep 02 19:17:38 master.vnet.de origin-node[6300]: E0902 19:17:38.796651    6300 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://master.vnet.de:8443/api/v1/nodes?fieldSelector=metadata.
Sep 02 19:17:38 master.vnet.de origin-node[6300]: E0902 19:17:38.796695    6300 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://master.vnet.de:8443/api/v1/services?limit=500&resourc
Sep 02 19:17:38 master.vnet.de origin-node[6300]: E0902 19:17:38.796725    6300 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://master.vnet.de:8443/api/v1/pods?fieldSelector=spe
Sep 02 19:17:38 master.vnet.de origin-node[6300]: W0902 19:17:38.798648    6300 kubelet_network.go:139] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Sep 02 19:17:38 master.vnet.de origin-node[6300]: I0902 19:17:38.798664    6300 kubelet.go:562] Hairpin mode set to "hairpin-veth"
Sep 02 19:17:38 master.vnet.de origin-node[6300]: W0902 19:17:38.798724    6300 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Sep 02 19:17:38 master.vnet.de origin-node[6300]: I0902 19:17:38.798733    6300 plugins.go:190] Loaded network plugin "cni"
Sep 02 19:17:38 master.vnet.de origin-node[6300]: I0902 19:17:38.798773    6300 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Sep 02 19:17:38 master.vnet.de origin-node[6300]: I0902 19:17:38.798781    6300 client.go:104] Start docker client with request timeout=2m0s
Sep 02 19:17:38 master.vnet.de origin-node[6300]: W0902 19:17:38.802783    6300 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Sep 02 19:17:38 master.vnet.de origin-node[6300]: W0902 19:17:38.805078    6300 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Sep 02 19:17:38 master.vnet.de origin-node[6300]: I0902 19:17:38.805087    6300 plugins.go:190] Loaded network plugin "cni"
Sep 02 19:17:38 master.vnet.de origin-node[6300]: I0902 19:17:38.805124    6300 docker_service.go:244] Docker cri networking managed by cni
Sep 02 19:17:38 master.vnet.de origin-node[6300]: I0902 19:17:38.811554    6300 docker_service.go:249] Docker Info: &{ID:P7ZL:GD3E:GQPW:BYVC:LJMA:RZOR:LDJO:UYRW:ZWLI:R5R4:CEZS:FRHX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:ove
Sep 02 19:17:38 master.vnet.de origin-node[6300]: I0902 19:17:38.811617    6300 docker_service.go:262] Setting cgroupDriver to systemd
Sep 02 19:17:38 master.vnet.de origin-node[6300]: I0902 19:17:38.811653    6300 kubelet.go:640] Starting the GRPC server for the docker CRI shim.
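
The repeated "Failed to list ... https://master.vnet.de:8443" errors above suggest the kubelet cannot reach the API server at all. One thing worth checking (an assumption on my part, given the two-NIC Vagrant setup quoted below) is which address the hostname resolves to; if it resolves to the eth0 DHCP address instead of 192.168.60.150, the certificates and advertise URLs will be generated for the wrong interface:

getent hosts master.vnet.de
grep master.vnet.de /etc/hosts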

On Sunday, 2 September 2018, 19:25:10 CEST, you wrote:
> I might have found something...it could be a Vagrant issue
> 
> Vagrant uses two network interfaces: one for its own SSH access, while the other uses the IP configured in the Vagrantfile.
> Here's a log from the etcd pod:
> 
> ...
> 2018-09-02 17:15:43.896539 I | etcdserver: published {Name:master.vnet.de ClientURLs:[https://192.168.121.202:2379]} to cluster 6d42105e200fef69
> 2018-09-02 17:15:43.896651 I | embed: ready to serve client requests
> 2018-09-02 17:15:43.897149 I | embed: serving client requests on 192.168.121.202:2379
> 
> 
> The interesting part is that it is serving on 192.168.121.202, but the IP that should be used is 192.168.60.150.
> 
> [vagrant@master ~]$ ip ad
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>     inet 127.0.0.1/8 scope host lo
>        valid_lft forever preferred_lft forever
>     inet6 ::1/128 scope host 
>        valid_lft forever preferred_lft forever
> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
>     link/ether 52:54:00:87:13:01 brd ff:ff:ff:ff:ff:ff
>     inet 192.168.121.202/24 brd 192.168.121.255 scope global noprefixroute dynamic eth0
>        valid_lft 3387sec preferred_lft 3387sec
>     inet6 fe80::5054:ff:fe87:1301/64 scope link 
>        valid_lft forever preferred_lft forever
> 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
>     link/ether 5c:a1:ab:1e:00:02 brd ff:ff:ff:ff:ff:ff
>     inet 192.168.60.150/24 brd 192.168.60.255 scope global noprefixroute eth1
>        valid_lft forever preferred_lft forever
>     inet6 fe80::5ea1:abff:fe1e:2/64 scope link 
>        valid_lft forever preferred_lft forever
> 4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
>     link/ether 02:42:8b:fa:b7:b0 brd ff:ff:ff:ff:ff:ff
>     inet 172.17.0.1/16 scope global docker0
>        valid_lft forever preferred_lft forever
> 
> 
> Is there any way I can configure my inventory to use a dedicated network interface (eth1 in my Vagrant case)?
> 
> 
> 
