
Ansible glusterfs & glusterfs_registry run gets stuck reporting "Wait for heketi pod" on CentOS Atomic



Hello Everybody


I'm trying to set up Origin 3.7 with glusterfs & glusterfs_registry on 6 CentOS Atomic nodes, but every time I've tried, the Ansible run got stuck reporting "Wait for heketi pod".

 

At the end of the Ansible run I can see these containers running:

CONTAINER ID        IMAGE                                                                                                      COMMAND                  CREATED             STATUS              PORTS               NAMES

54583d997e6b        docker.io/gluster/gluster-centos@sha256:e3e2881af497bbd76e4d3de90a4359d8167aa8410db2c66196f0b99df6067cb2   "/usr/sbin/init"         34 minutes ago      Up 34 minutes                           k8s_glusterfs_glusterfs-storage-zltlb_glusterfs_0a71a6cc-a1d1-11e7-8f00-005056a6cb52_0

425e9d1b0e0a        openshift/origin-pod:v3.7.0-alpha.1                                                                        "/usr/bin/pod"           34 minutes ago      Up 34 minutes                           k8s_POD_glusterfs-storage-zltlb_glusterfs_0a71a6cc-a1d1-11e7-8f00-005056a6cb52_0

9f41e1b56784        openshift/node:latest                                                                                      "/usr/local/bin/origi"   About an hour ago   Up About an hour                        origin-node

cdd713cfa1f6        openshift/openvswitch:latest                                                                               "/usr/local/bin/ovs-r"   About an hour ago   Up About an hour                        openvswitch

c170ceae9902        openshift/origin:latest                                                                                    "/usr/bin/openshift s"   About an hour ago   Up About an hour                        origin-master-controllers

21a7050510f4        openshift/origin:latest                                                                                    "/usr/bin/openshift s"   About an hour ago   Up About an hour                        origin-master-api

016b94eef88f        registry.access.redhat.com/rhel7/etcd                                                                      "/usr/bin/etcd"          About an hour ago   Up About an hour                        etcd_container
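
One thing that stands out, at least on this node, is that no heketi or deploy-heketi container appears in that list, while "Wait for heketi pod" presumably keeps polling for exactly such a pod to become Ready. I guess the first thing to verify from a master would be something like this (the "glusterfs" namespace comes from the container names above; the pod name is just a placeholder):

# list whatever the GlusterFS playbook has created so far
oc get pods -n glusterfs -o wide

# if a deploy-heketi / heketi pod exists but never goes Ready,
# its events and logs usually explain why
oc describe pod <deploy-heketi-pod> -n glusterfs
oc logs <deploy-heketi-pod> -n glusterfs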

 

The GlusterFS containers actually did get created, as did this Heketi file:

[root@wfpromshas05 ~]# docker exec -ti -u 0 k8s_glusterfs_glusterfs-storage-zltlb_glusterfs_0a71a6cc-a1d1-11e7-8f00-005056a6cb52_0 ls -l /var/lib/heketi/mounts/vg_266e9e3ed8ced24c7d22d17162989cfd/brick_f8ccb630bcf42e953e0d0944263891e9/brick/heketi.db

-rw-r--r--. 2 root root 45056 Sep 25 09:21 /var/lib/heketi/mounts/vg_266e9e3ed8ced24c7d22d17162989cfd/brick_f8ccb630bcf42e953e0d0944263891e9/brick/heketi.db
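
If the heketi pod ever does come up, I understand it should also answer its health-check endpoint through its service, so this would be my next check (the service name and port below are guesses, to be replaced with whatever oc actually reports):

oc get svc -n glusterfs
# a healthy heketi answers "Hello from Heketi" on /hello
curl http://<heketi-service-cluster-ip>:<port>/hello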

 

This is my Ansible inventory file; the field 'openshift_master_htpasswd_users' has been omitted on purpose:

##################################################

[OSEv3:children]

masters

nodes

etcd

glusterfs

glusterfs_registry

 

 

[OSEv3:vars]

ansible_ssh_user=root

openshift_deployment_type=origin

openshift_storage_glusterfs_wipe=True

debug_level=4

openshift_release=v3.7

openshift_image_tag=latest

openshift_install_examples=True

openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

openshift_disable_check=disk_availability,memory_availability

openshift_clock_enabled=true

os_sdn_network_plugin_name='redhat/openshift-ovs-multitenant'

osm_cluster_network_cidr=10.128.0.0/14

osm_host_subnet_length=9

openshift_portal_net=172.30.0.0/16

openshift_metrics_install_metrics=true

openshift_metrics_storage_kind=dynamic

 

 

[masters]

wfpromshas05.global.wfp.org containerized=true openshift_schedulable=True
wfpromshas06.global.wfp.org containerized=true openshift_schedulable=True
wfpromshas07.global.wfp.org containerized=true openshift_schedulable=True

 

[etcd]

wfpromshas05.global.wfp.org containerized=true openshift_schedulable=True
wfpromshas06.global.wfp.org containerized=true openshift_schedulable=True
wfpromshas07.global.wfp.org containerized=true openshift_schedulable=True

 

[nodes]

wfpromshas02.global.wfp.org openshift_node_labels="{'region': 'infra'}" containerized=true openshift_schedulable=True
wfpromshas03.global.wfp.org openshift_node_labels="{'region': 'infra'}" containerized=true openshift_schedulable=True
wfpromshas04.global.wfp.org openshift_node_labels="{'region': 'infra'}" containerized=true openshift_schedulable=True

wfpromshas05.global.wfp.org storage=True containerized=true openshift_schedulable=True

wfpromshas06.global.wfp.org storage=True containerized=true openshift_schedulable=True

wfpromshas07.global.wfp.org storage=True containerized=true openshift_schedulable=True

 

[glusterfs_registry]

wfpromshas02.global.wfp.org glusterfs_devices='[ "/dev/sdb", "/dev/sdc", "/dev/sdd" ]' containerized=True
wfpromshas03.global.wfp.org glusterfs_devices='[ "/dev/sdb", "/dev/sdc", "/dev/sdd" ]' containerized=True
wfpromshas04.global.wfp.org glusterfs_devices='[ "/dev/sdb", "/dev/sdc", "/dev/sdd" ]' containerized=True

 

[glusterfs]

wfpromshas05.global.wfp.org glusterfs_devices='[ "/dev/sdb" ]' containerized=True
wfpromshas06.global.wfp.org glusterfs_devices='[ "/dev/sdb" ]' containerized=True
wfpromshas07.global.wfp.org glusterfs_devices='[ "/dev/sdb" ]' containerized=True

##############################################################
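
Since openshift_storage_glusterfs_wipe=True is already set, I assume that after fixing anything in the inventory I could re-run just the GlusterFS playbook rather than the whole install, along these lines (the path assumes the usual openshift-ansible checkout layout for 3.7, so adjust it to your checkout):

ansible-playbook -i <inventory-file> playbooks/byo/openshift-glusterfs/config.yml -vvv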

 

Could you kindly suggest what else I should check?

 

Thank you very much,

Fabio Martinelli

