[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

RE: Issue by installing OKD OpenShift 3.11



Hi,

 

Thanks for your help and tips. Yes, I forgot to remove the htpasswd entries this time... ;(

 

After changing the master definition to 'node-config-master-infra' in the inventory, I restarted the deploy-cluster playbook.
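For reference, the change amounts to this line in the [nodes] section of the inventory (hostname taken from the inventory quoted further down; a sketch, not the full file):

```
[nodes]
master.lab.oshift.edu openshift_node_group_name="node-config-master-infra"
```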

As you suggested, I got the master-api and etcd container information from docker and checked the logs.

 

So, some questions arise:

 

  1. Why is the address 192.168.122.54 used? It corresponds to the master's NAT interface, which gets a DHCP address for connectivity to my PC.
  2. Apparently there is an issue with etcd access on the master: connection refused on port 2379.
  3. In the last log, the request is made on 0.0.0.0:8444; is something wrong in my config?
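Regarding question 1, the source address can be read straight off the etcd log lines below; a minimal sketch (the log line is inlined here as sample data, so it runs anywhere):

```shell
# Extract the remote IP from an etcd v2http access-log line to confirm
# which interface the health checks arrive on (sample line inlined).
line='{"log":"2019-05-24 14:19:57.592591 D | etcdserver/api/v2http: [GET] /health remote:192.168.122.54:44748\n","stream":"stderr","time":"2019-05-24T12:19:57.592680803Z"}'
addr=$(echo "$line" | grep -oE 'remote:[0-9.]+:[0-9]+' | cut -d: -f2)
echo "$addr"   # -> 192.168.122.54, i.e. the NAT interface, not 192.168.100.100
```

That the health checks arrive from 192.168.122.54 suggests they are routed over the NAT/default-route interface rather than the 192.168.100.100 network; if that is unintended, pinning the host's openshift_ip in the inventory to 192.168.100.100 may help (an assumption worth verifying against the openshift-ansible docs).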

 

Here is the interface list of the master; 192.168.100.100 is the OpenShift communication network, as defined in the hostname and DNS.

 

Interface list

 

[root@master ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:ca:44:c8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.100/24 brd 192.168.100.255 scope global noprefixroute ens10
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:feca:44c8/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:a8:8b:00 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.54/24 brd 192.168.122.255 scope global noprefixroute dynamic eth1
       valid_lft 3090sec preferred_lft 3090sec
    inet6 fe80::c138:7cb0:f8af:7cba/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:a9:c9:8d:d3 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever

 

Log from etcd-master

 

 

{"log":"2019-05-24 14:19:57.592591 D | etcdserver/api/v2http: [GET] /health remote:192.168.122.54:44748\n","stream":"stderr","time":"2019-05-24T12:19:57.592680803Z"}
{"log":"2019-05-24 14:20:07.580420 D | etcdserver/api/v2http: [GET] /v2/members remote:
192.168.122.54:45038\n","stream":"stderr","time":"2019-05-24T12:20:07.580688397Z"}
{"log":"2019-05-24 14:20:07.590218 D | etcdserver/api/v2http: [GET] /health remote:
192.168.122.54:45040\n","stream":"stderr","time":"2019-05-24T12:20:07.590356315Z"}
{"log":"2019-05-24 14:20:17.582661 D | etcdserver/api/v2http: [GET] /v2/members remote:
192.168.122.54:45336\n","stream":"stderr","time":"2019-05-24T12:20:17.582774753Z"}
{"log":"2019-05-24 14:20:17.595674 D | etcdserver/api/v2http: [GET] /health remote:
192.168.122.54:45338\n","stream":"stderr","time":"2019-05-24T12:20:17.595844742Z"}
{"log":"2019-05-24 14:20:27.581915 D | etcdserver/api/v2http: [GET] /v2/members remote:
192.168.122.54:45638\n","stream":"stderr","time":"2019-05-24T12:20:27.582036442Z"}
{"log":"2019-05-24 14:20:27.592091 D | etcdserver/api/v2http: [GET] /health remote:
192.168.122.54:45640\n","stream":"stderr","time":"2019-05-24T12:20:27.59225275Z"}
{"log":"2019-05-24 14:20:37.584090 D | etcdserver/api/v2http: [GET] /v2/members remote:
192.168.122.54:45932\n","stream":"stderr","time":"2019-05-24T12:20:37.584291782Z"}
{"log":"2019-05-24 14:20:37.593862 D | etcdserver/api/v2http: [GET] /health remote:
192.168.122.54:45934\n","stream":"stderr","time":"2019-05-24T12:20:37.593980682Z"}

 

Log from api-master

 

{"log":"I0524 14:18:50.016547       1 plugins.go:84] Registered admission plugin \"ResourceQuota\"\n","stream":"stderr","time":"2019-05-24T12:18:50.016617699Z"}
{"log":"I0524 14:18:50.016581       1 plugins.go:84] Registered admission plugin \"PodSecurityPolicy\"\n","stream":"stderr","time":"2019-05-24T12:18:50.016622959Z"}
{"log":"I0524 14:18:50.016622       1 plugins.go:84] Registered admission plugin \"Priority\"\n","stream":"stderr","time":"2019-05-24T12:18:50.016659601Z"}
{"log":"I0524 14:18:50.016662       1 plugins.go:84] Registered admission plugin \"SecurityContextDeny\"\n","stream":"stderr","time":"2019-05-24T12:18:50.01670916Z"}
{"log":"I0524 14:18:50.016713       1 plugins.go:84] Registered admission plugin \"ServiceAccount\"\n","stream":"stderr","time":"2019-05-24T12:18:50.01678609Z"}
{"log":"I0524 14:18:50.016753       1 plugins.go:84] Registered admission plugin \"DefaultStorageClass\"\n","stream":"stderr","time":"2019-05-24T12:18:50.016856209Z"}
{"log":"I0524 14:18:50.016784       1 plugins.go:84] Registered admission plugin \"PersistentVolumeClaimResize\"\n","stream":"stderr","time":"2019-05-24T12:18:50.01686304Z"}
{"log":"I0524 14:18:50.016801       1 plugins.go:84] Registered admission plugin \"StorageObjectInUseProtection\"\n","stream":"stderr","time":"2019-05-24T12:18:50.016865753Z"}
{"log":"F0524 14:19:20.021832       1 start_api.go:68] dial tcp
192.168.100.100:2379: connect: connection refused\n","stream":"stderr","time":"2019-05-24T12:19:20.02217046Z"}

 

Container log

 

[root@master controllers]# tail -f 7.log
{"log":"I0524 14:19:13.744728       1 reflector.go:133] Starting reflector *v1.PersistentVolumeClaim (0s) from
k8s.io/client-go/informers/factory.go:130\n","stream":"stderr","time":"2019-05-24T12:19:13.747727009Z"}
{"log":"I0524 14:19:13.744754       1 reflector.go:171] Listing and watching *v1.PersistentVolumeClaim from
k8s.io/client-go/informers/factory.go:130\n","stream":"stderr","time":"2019-05-24T12:19:13.74773138Z"}
{"log":"I0524 14:19:13.745323       1 reflector.go:133] Starting reflector *v1.ReplicationController (0s) from
k8s.io/client-go/informers/factory.go:130\n","stream":"stderr","time":"2019-05-24T12:19:13.747735832Z"}
{"log":"I0524 14:19:13.745340       1 reflector.go:171] Listing and watching *v1.ReplicationController from
k8s.io/client-go/informers/factory.go:130\n","stream":"stderr","time":"2019-05-24T12:19:13.747740084Z"}
{"log":"I0524 14:19:13.745907       1 reflector.go:133] Starting reflector *v1beta1.ReplicaSet (0s) from
k8s.io/client-go/informers/factory.go:130\n","stream":"stderr","time":"2019-05-24T12:19:13.747744229Z"}
{"log":"I0524 14:19:13.745925       1 reflector.go:171] Listing and watching *v1beta1.ReplicaSet from
k8s.io/client-go/informers/factory.go:130\n","stream":"stderr","time":"2019-05-24T12:19:13.747748717Z"}
{"log":"I0524 14:19:13.746647       1 controllermanager.go:128] Version: v1.11.0+d4cacc0\n","stream":"stderr","time":"2019-05-24T12:19:13.747753221Z"}
{"log":"I0524 14:19:13.746697       1 leaderelection.go:185] attempting to acquire leader lease  kube-system/kube-controller-manager...\n","stream":"stderr","time":"2019-05-24T12:19:13.747757701Z"}
{"log":"I0524 14:19:13.746889       1 standalone_apiserver.go:101] Started health checks at
0.0.0.0:8444\n","stream":"stderr","time":"2019-05-24T12:19:13.747761834Z"}
{"log":"F0524 14:19:13.747339       1 standalone_apiserver.go:117] listen tcp4
0.0.0.0:8444: bind: address already in use\n","stream":"stderr","time":"2019-05-24T12:19:13.747765655Z"}
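The fatal line above ("bind: address already in use" on 0.0.0.0:8444) means another process, often a stale controllers container from a previous run, already holds that port. A hedged bash sketch to probe a local port (run it on the master; `ss -tlnp | grep :8444` then identifies the owner):

```shell
# Probe a local TCP port using bash's /dev/tcp (bash-specific, no netcat needed).
port_in_use() {
  # Succeeds if a TCP connect to 127.0.0.1:$1 works.
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

if port_in_use 8444; then
  echo "8444 in use - find the owner with: ss -tlnp | grep :8444"
else
  echo "8444 free"
fi
```

If a previous master-controllers container is still around, `docker ps -a | grep controllers` and removing the stale one before re-running the playbook would be the usual next step (a guess based on the symptom, not a confirmed diagnosis).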

Best regards, J

 

 

From: Samuel Martín Moro <faust64 gmail com>
Sent: Thursday, 23 May 2019 23:53
To: Jérôme Meyer <Jerome Meyer lcsystems ch>
Cc: users lists openshift redhat com
Subject: Re: Issue by installing OKD OpenShift 3.11

 

Hi,

 

 

As a general rule, you may want to check the corresponding containers' health and logs.

 

You won't find any apache or nginx listening. The process serving on :8443 is openshift itself, and it should be running in a container.

Note that the master-api container, in charge of that service, relies closely on another container: etcd. That is what ansible is waiting for in your logs.

 

On the master node, use "docker ps" (worst case scenario, "docker ps -a").

Locate your etcd and master-api container IDs (first column).

Then use "docker logs [-f] <container-id>" and search for errors.

 

You may find file copies of these logs in /var/log/containers (and /var/log/pods).
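The steps above can be sketched as one small loop (the name filters "etcd" and "api" are assumptions based on typical OKD 3.11 container naming; adjust them to whatever `docker ps -a` actually shows):

```shell
# For each control-plane container, grab the newest matching ID and scan
# its recent logs for common failure keywords.
for name in etcd api; do
  cid=$(docker ps -a --filter "name=$name" --format '{{.ID}}' 2>/dev/null | head -n1 || true)
  if [ -n "$cid" ]; then
    echo "== $name ($cid) =="
    docker logs --tail 100 "$cid" 2>&1 | grep -iE 'error|refused|fail' || true
  fi
done
```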

 

Let us know how that goes.

 

And try to avoid mailing your htpasswd entries ;)

 

 

Regards.

 

 

 

On Thu, May 23, 2019 at 10:42 AM Jérôme Meyer <Jerome Meyer lcsystems ch> wrote:

Dear Team,

I've encountered an issue installing OpenShift (OKD 3.11) on 3 VMs (1 master and 2 nodes).
I followed the recommendations and procedure as described in the docs.
I launched the ansible prerequisites playbook without issue; all was fine. Unfortunately, the deploy_cluster playbook didn't finish.
Some errors appear when it starts the pods.

2019-05-17 16:58:52,157 p=6592 u=root |  FAILED - RETRYING: Wait for control plane pods to appear (2 retries left).
2019-05-17 16:58:57,607 p=6592 u=root |  FAILED - RETRYING: Wait for control plane pods to appear (1 retries left).
2019-05-17 16:59:02,998 p=6592 u=root |  failed: [master.lab.oshift.edu] (item=etcd) => {"attempts": 60, "changed": false, "item": "etcd", "msg": {"cmd": "/usr/bin/oc get pod master-etcd-master.lab.oshift.edu -o json -n kube-system", "results": [{}], "returncode": 1, "stderr": "The connection to the server master:8443 was refused - did you specify the right host or port?\n", "stdout": ""}}
2019-05-17 16:59:03,531 p=6592 u=root |  FAILED - RETRYING: Wait for control plane pods to appear (60 retries left).
2019-05-17 16:59:08,980 p=6592 u=root |  FAILED - RETRYING: Wait for control plane pods to appear (59 retries left).

Regarding this issue, I checked the master server and didn't see port 8443 open, nor any http/nginx/other service running. Strange...
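A quick way to double-check that symptom (a hedged sketch, to be run on the master) is to look at both the socket and the container that should own it:

```shell
# Is anything listening on 8443, and is an api container running at all?
ss -tln 2>/dev/null | grep -q ':8443' \
  && echo "8443: something is listening" \
  || echo "8443: nothing listening"
docker ps --format '{{.Names}}' 2>/dev/null | grep -i 'api' \
  || echo "no api container running"
```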


The DNS server was installed on a VM called 'services', and dig lookups were OK.

 

Please let me know if I missed installing something, or if the inventory config is wrong. What should I do to troubleshoot this problem?

Thanks and best regards, J.

 

 

Here's the inventory file:



# cat inventory/hosts
#####################################################################
#
# HOSTS configuration for our labs
#
# 2019-05-17
#
#####################################################################

[workstation]
workstation.lab.oshift.edu

[masters]
master.lab.oshift.edu

[etcd]
master.lab.oshift.edu

[nodes]
master.lab.oshift.edu openshift_node_group_name="node-config-master"
node1.lab.oshift.edu openshift_node_group_name="node-config-compute"
node2.lab.oshift.edu openshift_node_group_name="node-config-compute"

[nfs]
services.lab.oshift.edu

# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes
etcd
nfs

[OSEv3:vars]
###############################################################################
# Common/ Required configuration variables follow                             #
###############################################################################
# How ansible access hosts
ansible_user=root
ansible_become=true

openshift_deployment_type=origin

openshift_release="3.11"

openshift_master_default_subdomain=apps.lab.oshift.edu

###############################################################################
# Additional configuration variables follow                                   #
###############################################################################

# DEBUG
debug_level=4

# DISABLE SOME CHECKS
openshift_disable_check=disk_availability,memory_availability,docker_storage

# Enable etcd debug logging, defaults to false
etcd_debug=true
# Set etcd log levels by package
etcd_log_package_levels="etcdserver=WARNING,security=INFO"

# htpasswd auth
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
# Defining htpasswd users
openshift_master_htpasswd_users={'admin': '$apr1$Ky/ZY39n$Z8/t3xJsnxGANzypVTtmD0', 'developer': '$apr1$MdVAOTmy$8nB.ANU4OeciLjDeU68w/1'}

# Option B - External NFS Host
openshift_hosted_registry_storage_kind=nfs
openshift_hosted_registry_storage_access_modes=['ReadWriteMany']
openshift_hosted_registry_storage_nfs_directory=/openshift_storage
openshift_hosted_registry_storage_nfs_options='*(rw,root_squash)'
openshift_hosted_registry_storage_volume_name=registry
openshift_hosted_registry_storage_volume_size=10Gi

# ENABLE FIREWALLD
os_firewall_use_firewalld=true
[root@workstation openshift-ansible]#

 

_______________________________________________
users mailing list
users lists openshift redhat com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


 

--

Samuel Martín Moro
{EPITECH.} 2011

"Nobody wants to say how this works.
 
Maybe nobody knows ..."
                      Xorg.conf(5)


