
Re: Logging (aggregated, FluentD)

On 04/19/2017 08:39 AM, Shepp wrote:
Hi Rich,

Thanks. So first should I not be pointing Kibana at the same host as my OSE Web Interface?

You should be pointing Kibana at the same host, but not necessarily the same hostname. For example, I usually have something like this in my /etc/hosts for testing:

10.x.y.z ocp.origin-14.rmeggins.test kibana.origin-14.rmeggins.test

So everything is on the same physical host/IP, but I use https://ocp.origin-14.rmeggins.test:8443 to access the OpenShift console, and https://kibana.origin-14.rmeggins.test to access Kibana.
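To find out which hostname your Route actually expects (and therefore which name needs to resolve, via DNS or /etc/hosts, to the router's IP), something like this should work (the `-o jsonpath` option is standard oc/kubectl; run it from a host with cluster access):

```
# print the hostname the logging-kibana route is serving
oc get route logging-kibana -n logging -o jsonpath='{.spec.host}'
```

Whatever that prints is the name your browser has to use; hitting the same IP under a different hostname won't match the route.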

If not, how would you suggest I install/reconfigure? I'm in AWS, and I don't really follow what you mean by openshift.deployment.subdomain. Would that be another instance in AWS?

No, not necessarily.
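By "openshift deployment subdomain" I just mean the wildcard subdomain the router assigns to routes, which is set in the master's config. As a sketch, in master-config.yaml it looks like this (the subdomain value here is only an example; yours will differ):

```
# master-config.yaml (OSE 3.4) -- routes get hostnames under this subdomain
routingConfig:
  subdomain: "apps.example.com"
```

With that setting, Kibana typically ends up at kibana.apps.example.com, which needs to resolve to the node running the router.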

Re: OSE version - yes I'm running OSE 3.4/Kube 1.4. Here's the output of oc version:

[root@ip-172-31-45-158 ~]# oc version
oc v3.4.1.12
kubernetes v1.4.0+776c994
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://ip-172-31-45-158.us-east-2.compute.internal:8443
openshift v3.4.1.12
kubernetes v1.4.0+776c994


Here's the FluentD conf:

[root@ip-172-31-45-158 ~]# oc project logging
Now using project "logging" on server "https://ip-172-31-45-158.us-east-2.compute.internal:8443".

[root@ip-172-31-45-158 ~]# oc get configmap logging-fluentd -o yaml

apiVersion: v1
data:
  fluent.conf: |
    # This file is the fluentd configuration entrypoint. Edit with care.

    @include configs.d/openshift/system.conf

    # In each section below, pre- and post- includes don't include anything initially;
    # they exist to enable future additions to openshift conf as needed.

    ## sources
    ## ordered so that syslog always runs last...
    @include configs.d/openshift/input-pre-*.conf
    @include configs.d/dynamic/input-docker-*.conf
    @include configs.d/dynamic/input-syslog-*.conf
    @include configs.d/openshift/input-post-*.conf

    <label @INGRESS>
    ## filters
      @include configs.d/openshift/filter-pre-*.conf
      @include configs.d/openshift/filter-retag-journal.conf
      @include configs.d/openshift/filter-k8s-meta.conf
      @include configs.d/openshift/filter-kibana-transform.conf
      @include configs.d/openshift/filter-k8s-record-transform.conf
      @include configs.d/openshift/filter-syslog-record-transform.conf
      @include configs.d/openshift/filter-common-data-model.conf

You do have the common data model filter, so I don't think that is the problem.

      @include configs.d/openshift/filter-post-*.conf

    ## matches
      @include configs.d/openshift/output-pre-*.conf
      @include configs.d/openshift/output-operations.conf
      @include configs.d/openshift/output-applications.conf
      # no post - applications.conf matches everything left


  secure-forward.conf: |
    # @type secure_forward
    # self_hostname ${HOSTNAME}
    # shared_key <SECRET_STRING>
    # secure yes
    # enable_strict_verification yes
    # ca_cert_path /etc/fluent/keys/your_ca_cert
    # ca_private_key_path /etc/fluent/keys/your_private_key
    # for private CA secret key
    # ca_private_key_passphrase passphrase
    # <server>
    # or IP
    #   host server.fqdn.example.com
    #   port 24284
    # </server>
    # <server>
    # ip address to connect
    #   host
    # specify hostlabel for FQDN verification if ipaddress is used for host
    #   hostlabel server.fqdn.example.com
    # </server>

  throttle-config.yaml: |
    # Logging example fluentd throttling config file

    # read_lines_limit: 10

    # read_lines_limit: 100
kind: ConfigMap
metadata:
  creationTimestamp: 2017-04-12T17:20:48Z
  labels:
    logging-infra: support
  name: logging-fluentd
  namespace: logging
  resourceVersion: "188321"
  selfLink: /api/v1/namespaces/logging/configmaps/logging-fluentd
  uid: 5ee29731-1fa4-11e7-b524-0a7a32c48dc3


I'm happy to give you access to my environment if that would help.

Sure.  At this point I have no idea what's wrong.

On Tue, Apr 18, 2017 at 4:47 PM, Rich Megginson <rmeggins redhat com> wrote:

    On 04/18/2017 01:51 PM, Shepp wrote:


        I've posted over on the FluentD Google Groups but was
        directed here.
        I've got an OpenShift test lab in AWS, all the Aggregated
        Logging PODs are deployed and running, and I believe I've also
        configured the OpenShift Master Node master-config.yaml to
        display the FluentD aggregated logs in the OpenShift Web
        Interface; however, I don't see the aggregated logs for any of
        our running images.  As far as I can tell, the logs aren't
        throwing any kind of severe error that would cause this to not
        work.  What I think might be the problem is the Kibana ROUTES
        in OpenShift.  I've attached a .png of the routes; the
        logging-kibana route looks close to right, but when I try to
        open that URL there's nothing there.  In this setup port 8443
        is open, not 443 (if that is what Kibana is trying to use),
        and the OpenShift console is at URL/console, yet both that
        path and the port seem to have been ignored?

    The way Routes work in OpenShift is that the hostname is used as
    the discriminator for the route.  I'm not sure if you can use the
    same hostname for both the OpenShift console and Kibana even
    though they use different ports.  Usually, Kibana is deployed at
    kibana.{{ openshift deployment subdomain }} and you will have to
    set up either DNS to have the correct IP address for kibana.{{
    openshift deployment subdomain }} or just hack your /etc/hosts
    file.  The Route listens at host on port 443 (https).  It will
    redirect requests to the logging-kibana Service, which may be on a
    different port.
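    Putting that together, the logging-kibana Route has roughly the
    following shape (this is a sketch; the host value is an example,
    and the logging deployer fills in the real fields):

```
# Rough shape of the logging-kibana Route.  The host is the route
# discriminator; TLS terminates at the router on 443, and traffic is
# then forwarded to the Service, which may use a different port.
apiVersion: v1
kind: Route
metadata:
  name: logging-kibana
  namespace: logging
spec:
  host: kibana.apps.example.com   # must resolve (DNS or /etc/hosts) to the router
  to:
    kind: Service
    name: logging-kibana
  tls:
    termination: reencrypt
```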

        Also, the second ROUTE, kibna-ops.example.com - that is
        obviously some sort of default URL; what would this need to
        be set to?

    You can ignore it if you deployed without the separate ops cluster.

        I've copied out and zipped each POD's log file, and attached
        an image of the ROUTES for Kibana. I don't know what else to
        check.

    2017-04-13 06:30:18 -0400 [warn]: temporarily failed to flush the
    buffer. next_retry=2017-04-13 06:30:19 -0400
    error_class="NoMethodError" error="undefined method `status' for
    nil:NilClass" plugin_id="object:1fe9bb4"
      2017-04-13 06:30:18 -0400 [warn]: `rescue in perform_request'
      2017-04-13 06:30:18 -0400 [warn]:
      2017-04-13 06:30:18 -0400 [warn]:
      2017-04-13 06:30:18 -0400 [warn]:

    Are you using 1.4/3.4?  And did you recently upgrade from 1.3/3.3?

    If so, it looks like your fluent.conf is incorrect.  Can you
    provide the output of:

    oc get configmap logging-fluentd -o yaml

        Any help would be greatly appreciated.

        Thanks and best wishes,



        users mailing list
        users lists openshift redhat com

