
The cluster-logging pods (Elasticsearch, Kibana, Fluentd) don't start - OpenShift 4.1



Hi all,

I'm trying to deploy cluster logging on an OpenShift 4.1.21 cluster, following the procedure described at https://docs.openshift.com/container-platform/4.1/logging/efk-logging.html.
The deployment itself goes through, but the logging pods never start and stay in Pending state. All five logging pods (2 x Elasticsearch, 2 x Kibana, 1 x Curator) report the same error: "0/7 nodes are available: 7 node(s) didn't match node selector".

The logging pods fail to schedule with or without a nodeSelector in the ClusterLogging instance.
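
In case it helps, here is a sketch of the checks I can run against one of the pending pods (the pod name is taken from the "oc get pods" output further down; the same applies to the other pods):
-------
# Show the node selector that actually ends up on the pending pod
oc get pod kibana-99dc6bb95-5848h -n openshift-logging \
  -o jsonpath='{.spec.nodeSelector}{"\n"}'

# Show the full pod description; the scheduler events are listed at the bottom
oc describe pod kibana-99dc6bb95-5848h -n openshift-logging
-------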

-----------------------------------------------------------
The ClusterLogging instance YAML:
-------
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  creationTimestamp: '2019-11-04T21:20:57Z'
  generation: 37
  name: instance
  namespace: openshift-logging
  resourceVersion: '569806'
  selfLink: >-
    /apis/logging.openshift.io/v1/namespaces/openshift-logging/clusterloggings/instance
  uid: fdc0e971-ff48-11e9-a3f8-0af5a0903ee4
spec:
  collection:
    logs:
      fluentd:
        nodeSelector:
          kubernetes.io/os: linux
          node-role.kubernetes.io/infra: ''
        resources: null
      rsyslog:
        resources: null
      type: fluentd
  curation:
    curator:
      nodeSelector:
        kubernetes.io/os: linux
        node-role.kubernetes.io/infra: ''
      resources: null
      schedule: 30 3 * * *
    type: curator
  logStore:
    elasticsearch:
      nodeCount: 2
      nodeSelector:
        node-role.kubernetes.io/infra: ''
      redundancyPolicy: SingleRedundancy
      resources:
        requests:
          cpu: 500m
          memory: 4Gi
      storage:
        size: 20G
        storageClassName: gp2
    type: elasticsearch
  managementState: Managed
  visualization:
    kibana:
      nodeSelector:
        kubernetes.io/os: linux
        node-role.kubernetes.io/infra: ''
      proxy:
        resources: null
      replicas: 1
      resources: null
    type: kibana
status:
  collection:
    logs:
      fluentdStatus:
        daemonSet: fluentd
        nodes: {}
        pods:
          failed: []
          notReady: []
          ready: []
      rsyslogStatus:
        Nodes: null
        daemonSet: ''
        pods: null
  curation:
    curatorStatus:
      - clusterCondition:
          curator-1572924600-pwbf8:
            - lastTransitionTime: '2019-11-05T03:30:01Z'
              message: '0/7 nodes are available: 7 node(s) didn''t match node selector.'
              reason: Unschedulable
              status: 'True'
              type: Unschedulable
        cronJobs: curator
        schedules: 30 3 * * *
        suspended: false
  logStore:
    elasticsearchStatus:
      - ShardAllocationEnabled: shard allocation unknown
        cluster:
          numDataNodes: 0
          initializingShards: 0
          numNodes: 0
          activePrimaryShards: 0
          status: cluster health unknown
          pendingTasks: 0
          relocatingShards: 0
          activeShards: 0
          unassignedShards: 0
        clusterName: elasticsearch
        nodeConditions:
          elasticsearch-cdm-wgsf9ygw-1:
            - lastTransitionTime: '2019-11-04T22:33:32Z'
              message: '0/7 nodes are available: 7 node(s) didn''t match node selector.'
              reason: Unschedulable
              status: 'True'
              type: Unschedulable
          elasticsearch-cdm-wgsf9ygw-2:
            - lastTransitionTime: '2019-11-04T22:33:33Z'
              message: '0/7 nodes are available: 7 node(s) didn''t match node selector.'
              reason: Unschedulable
              status: 'True'
              type: Unschedulable
        nodeCount: 2
        pods:
          client:
            failed: []
            notReady:
              - elasticsearch-cdm-wgsf9ygw-1-6f49f466dc-57dbk
              - elasticsearch-cdm-wgsf9ygw-2-5777666679-2z4ph
            ready: []
          data:
            failed: []
            notReady:
              - elasticsearch-cdm-wgsf9ygw-1-6f49f466dc-57dbk
              - elasticsearch-cdm-wgsf9ygw-2-5777666679-2z4ph
            ready: []
          master:
            failed: []
            notReady:
              - elasticsearch-cdm-wgsf9ygw-1-6f49f466dc-57dbk
              - elasticsearch-cdm-wgsf9ygw-2-5777666679-2z4ph
            ready: []
  visualization:
    kibanaStatus:
      - clusterCondition:
          kibana-99dc6bb95-5848h:
            - lastTransitionTime: '2019-11-04T22:00:49Z'
              message: '0/7 nodes are available: 7 node(s) didn''t match node selector.'
              reason: Unschedulable
              status: 'True'
              type: Unschedulable
          kibana-fb96dc875-wk4w5:
            - lastTransitionTime: '2019-11-04T22:33:26Z'
              message: '0/7 nodes are available: 7 node(s) didn''t match node selector.'
              reason: Unschedulable
              status: 'True'
              type: Unschedulable
        deployment: kibana
        pods:
          failed: []
          notReady:
            - kibana-99dc6bb95-5848h
            - kibana-fb96dc875-wk4w5
          ready: []
        replicaSets:
          - kibana-5d77fb4b85
          - kibana-99dc6bb95
          - kibana-fb96dc875
        replicas: 1
-------
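
For completeness, the node selector that the operator actually writes into the generated workloads can be dumped as well (a sketch; the kibana deployment and fluentd daemonset names come from the status above):
-------
# Node selector on the generated Kibana deployment and Fluentd daemonset
oc get deployment kibana -n openshift-logging \
  -o jsonpath='{.spec.template.spec.nodeSelector}{"\n"}'
oc get daemonset fluentd -n openshift-logging \
  -o jsonpath='{.spec.template.spec.nodeSelector}{"\n"}'
-------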

The two infra nodes are labeled correctly with node-role.kubernetes.io/infra: ''.
-------------
[mohamed hamouch-capgemini com clientvm 0 ~]$ oc get nodes --show-labels
NAME                                            STATUS   ROLES          AGE   VERSION             LABELS
ip-10-0-130-209.eu-central-1.compute.internal   Ready    master         33h   v1.13.4+a80aad556   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m5.xlarge,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=eu-central-1,failure-domain.beta.kubernetes.io/zone=eu-central-1a,kubernetes.io/hostname=ip-10-0-130-209,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos,node.openshift.io/os_version=4.1
ip-10-0-134-187.eu-central-1.compute.internal   Ready    worker         33h   v1.13.4+a80aad556   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m5.large,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=eu-central-1,failure-domain.beta.kubernetes.io/zone=eu-central-1a,kubernetes.io/hostname=ip-10-0-134-187,node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhcos,node.openshift.io/os_version=4.1
ip-10-0-141-221.eu-central-1.compute.internal   Ready    infra,worker   31h   v1.13.4+a80aad556   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m4.xlarge,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=eu-central-1,failure-domain.beta.kubernetes.io/zone=eu-central-1a,infra=infra,kubernetes.io/hostname=ip-10-0-141-221,node-role.kubernetes.io/infra=,node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhcos,node.openshift.io/os_version=4.1
ip-10-0-150-157.eu-central-1.compute.internal   Ready    worker         33h   v1.13.4+a80aad556   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m5.large,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=eu-central-1,failure-domain.beta.kubernetes.io/zone=eu-central-1b,kubernetes.io/hostname=ip-10-0-150-157,node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhcos,node.openshift.io/os_version=4.1
ip-10-0-152-34.eu-central-1.compute.internal    Ready    master         33h   v1.13.4+a80aad556   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m5.xlarge,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=eu-central-1,failure-domain.beta.kubernetes.io/zone=eu-central-1b,kubernetes.io/hostname=ip-10-0-152-34,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos,node.openshift.io/os_version=4.1
ip-10-0-159-5.eu-central-1.compute.internal     Ready    infra,worker   31h   v1.13.4+a80aad556   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m4.xlarge,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=eu-central-1,failure-domain.beta.kubernetes.io/zone=eu-central-1b,infra=infra,kubernetes.io/hostname=ip-10-0-159-5,node-role.kubernetes.io/infra=,node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhcos,node.openshift.io/os_version=4.1
ip-10-0-165-162.eu-central-1.compute.internal   Ready    master         33h   v1.13.4+a80aad556   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m5.xlarge,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=eu-central-1,failure-domain.beta.kubernetes.io/zone=eu-central-1c,kubernetes.io/hostname=ip-10-0-165-162,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos,node.openshift.io/os_version=4.1
-------------
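
To cross-check the selectors against these labels, each key can be queried on its own. Note that the ClusterLogging instance uses kubernetes.io/os: linux while the node labels above only show beta.kubernetes.io/os=linux, so I check both forms (a sketch):
-------------
# Nodes carrying the infra role label used in the nodeSelector
oc get nodes -l node-role.kubernetes.io/infra=

# Nodes carrying the OS label in the form used in the ClusterLogging instance
oc get nodes -l kubernetes.io/os=linux

# Nodes carrying the OS label in the form shown in the output above
oc get nodes -l beta.kubernetes.io/os=linux
-------------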

The logging pods (Elasticsearch, Kibana, and Curator) are all stuck in Pending:
[mohamed hamouch-capgemini com clientvm 0 ~]$ oc get pods --show-labels -n openshift-logging
NAME                                            READY   STATUS    RESTARTS   AGE   LABELS
cluster-logging-operator-bd64d698d-8xzxw        1/1     Running   0          25h   name=cluster-logging-operator,pod-template-hash=bd64d698d
curator-1572924600-pwbf8                        0/1     Pending   0          18h   component=curator,controller-uid=8cc4c661-ff7c-11e9-b9e8-0226c8b0ff44,job-name=curator-1572924600,logging-infra=curator,provider=openshift
elasticsearch-cdm-wgsf9ygw-1-6f49f466dc-57dbk   0/2     Pending   0          23h   cluster-name=elasticsearch,component=elasticsearch,es-node-client=true,es-node-data=true,es-node-master=true,node-name=elasticsearch-cdm-wgsf9ygw-1,pod-template-hash=6f49f466dc,tuned.openshift.io/elasticsearch=true
elasticsearch-cdm-wgsf9ygw-2-5777666679-2z4ph   0/2     Pending   0          23h   cluster-name=elasticsearch,component=elasticsearch,es-node-client=true,es-node-data=true,es-node-master=true,node-name=elasticsearch-cdm-wgsf9ygw-2,pod-template-hash=5777666679,tuned.openshift.io/elasticsearch=true
kibana-99dc6bb95-5848h                          0/2     Pending   0          24h   component=kibana,logging-infra=kibana,pod-template-hash=99dc6bb95,provider=openshift
kibana-fb96dc875-wk4w5                          0/2     Pending   0          23h   component=kibana,logging-infra=kibana,pod-template-hash=fb96dc875,provider=openshift
---------
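
Since the error shows up even without a nodeSelector in the instance, I can also check whether a project-wide default node selector is set on the openshift-logging namespace, and list all scheduling failures in one place (a sketch):
---------
# Look for an openshift.io/node-selector annotation on the namespace
oc get namespace openshift-logging -o yaml | grep -i node-selector

# All FailedScheduling events in the logging namespace
oc get events -n openshift-logging --field-selector reason=FailedScheduling
---------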

Where should I look to fix this issue?

Thank you very much for your help.

haed98.

