
Re: aggregating container logs



Glad to hear it's working now.

The 'hostname' column that you're seeing is a little misleading; that value is actually being provided by the logs of your Kibana instance.  As the guide currently stands, it only collects your container logs.
With some further changes to the fluentd configuration file (which will probably require you to build a new Docker image), you can have it also pull in your node messages.
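As a rough sketch (untested; the paths, tag, and pos_file location are assumptions, not from the guide), pulling in node syslog messages would mean adding a source block like this to the fluentd configuration:

```
# Hypothetical addition to fluent.conf: tail the node's syslog.
# Syntax as in fluentd v0.12; adjust path for your distribution.
<source>
  type tail
  path /var/log/messages
  pos_file /var/log/fluentd-messages.pos
  tag system.messages
  format syslog
</source>
```

You would then rebuild the fluentd image with this config baked in and restart the pod so the new source takes effect.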

Alternatively, for 3.1 we will be providing images, a means to deploy them, and some additional features such as reading in node system logs, multi-tenancy, and security.



On Wed, Oct 28, 2015 at 3:04 AM, Lorenz Vanthillo <lorenz vanthillo outlook com> wrote:
One issue: I can only see logs with hostname: kibana-cty0h. I can't find anything when I'm searching for 'root' or for a hostname of another node.

From: lorenz vanthillo outlook com
To: ewolinet redhat com
Subject: RE: aggregating container logs
Date: Wed, 28 Oct 2015 08:49:42 +0100


These are the logs:
2015-10-28 07:23:07 +0000 [warn]: temporarily failed to flush the buffer. next_retry=2015-10-28 07:28:06 +0000 error_class="Fluent::ElasticsearchOutput::ConnectionFailure" error="Can not reach Elasticsearch cluster ({:host=>\"es-logging\", :port=>9200, :scheme=>\"http\"})! getaddrinfo: Name does not resolve (SocketError)" plugin_id="object:3fa1e01b24cc"
  2015-10-28 07:23:07 +0000 [warn]: suppressed same stacktrace
2015-10-28 07:28:07 +0000 [warn]: temporarily failed to flush the buffer. next_retry=2015-10-28 07:33:06 +0000 error_class="Fluent::ElasticsearchOutput::ConnectionFailure" error="Can not reach Elasticsearch cluster ({:host=>\"es-logging\", :port=>9200, :scheme=>\"http\"})! getaddrinfo: Name does not resolve (SocketError)" plugin_id="object:3fa1e01b24cc"
  2015-10-28 07:28:07 +0000 [warn]: suppressed same stacktrace

So that matches the issue described in the guide. It's fixed now and working. Thanks




From: ewolinet redhat com
Date: Tue, 27 Oct 2015 11:16:07 -0500
Subject: Re: aggregating container logs
To: lorenz vanthillo outlook com

Are you seeing anything in your Fluentd and ElasticSearch logs?

On Tue, Oct 27, 2015 at 9:57 AM, Lorenz Vanthillo <lorenz vanthillo outlook com> wrote:
I've now also created it in the default project, so everything is there.
But I still get: health status index pri rep docs.count docs.deleted store.size pri.store.size
= Empty
So placing elasticsearch in the other project is probably not the issue.


From: lorenz vanthillo outlook com
To: ewolinet redhat com
Subject: RE: aggregating container logs
Date: Tue, 27 Oct 2015 15:41:14 +0100


I tried what you said. My es-logging-service and es-logging-cluster are in another project.
So I've edited the /var/lib/origin/openshift.local.manifests/pod.yaml on each node.
  - name: "ES_HOST"
    value: "es-logging.logging.svc.cluster.local"
# the name of my project is logging

I restarted everything, but:
After getting the IP of the service and doing curl -s 172.30.212.172:9200/_cat/indices?v
I get: health status index pri rep docs.count docs.deleted store.size pri.store.size
Empty


From: ewolinet redhat com
Date: Tue, 27 Oct 2015 09:28:22 -0500
Subject: Re: aggregating container logs
To: lorenz vanthillo outlook com

When you specify a podManifestConfig within your node-config.yaml, we are creating static pods.  Static pods are created without going through the API, which means there is no way to choose a project for the pod; it will be created in default.  Likewise, it will not make use of a ServiceAccount.
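For context, the static-pod setup in node-config.yaml looks something like this (a sketch using the path from this thread; fileCheckIntervalSeconds is the interval at which the Kubelet re-reads the directory):

```yaml
# node-config.yaml (excerpt): tell the Kubelet where to find static pod manifests
podManifestConfig:
  path: /var/lib/origin/openshift.local.manifests
  fileCheckIntervalSeconds: 30
```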

Do you have the manifests for your Kibana and ElasticSearch pods within the podManifestConfig.path directory as well?  Those are not required to be created as static pods; you should be able to create those in any project you choose, and fluentd should still be able to connect to your ElasticSearch instance.
You may need to update the value of ES_HOST within your fluentd manifest to specify the project that your ElasticSearch instance is running in.  Something along the lines of es-logging.<your project>.svc.cluster.local
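The cluster DNS name follows the `<service>.<project>.svc.cluster.local` pattern; for example, with the service and project names used in this thread:

```shell
# Compose the in-cluster DNS name for a service.
# The project name "logging" is the one from this thread; substitute your own.
service=es-logging
project=logging
echo "${service}.${project}.svc.cluster.local"
```

This is the value to put in ES_HOST so fluentd can resolve ElasticSearch across projects.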

On Mon, Oct 26, 2015 at 4:22 PM, Lorenz Vanthillo <lorenz vanthillo outlook com> wrote:
Thank you, you were right. It's now being created in my default project. I don't want that, so I changed the service account: I deleted it from the default project and created it for my project. But I can't delete the pods/containers because they are always rebuilt.
There is no replication controller. When I stop the node and delete everything, it is not rebuilt. But when I restart the node, everything is back in my default project.

So how do I have to delete these pods? I've searched through the configs, and the service account's 'default' is the only place where the project name appears.


From: ewolinet redhat com
Date: Mon, 26 Oct 2015 11:28:46 -0500

Subject: Re: aggregating container logs
To: lorenz vanthillo outlook com
CC: users lists openshift redhat com

I'm guessing that elasticpod.yaml is the manifest for your fluentd pod that is in /var/lib/origin/openshift.local.manifests/?
You actually would not need to `oc create -f elasticpod.yaml` if the path is set properly.  Restart your openshift-node service and then check that your pod has started up.

On Mon, Oct 26, 2015 at 10:08 AM, Lorenz Vanthillo <lorenz vanthillo outlook com> wrote:
Thanks, So I have manually created /var/lib/origin/openshift.local.manifests.
Do I have to place my elasticpod.yaml in the openshift.local.manifests/ or in the origin/?
I placed it in openshift.local.manifests/. After that I tried to create the pod from the file:
# oc create -f elasticpod.yaml

But I get an error. I thought it was maybe because the service account wasn't configured properly, but it looks fine.
[root ip-10-0-0-129 openshift.local.manifests]# oc get sa
NAME        SECRETS   AGE
builder     2         1h
default     2         1h
deployer    2         1h
es-deploy   3         47m


Error from server: error when creating "pod.yaml": Pod "fluentd-elasticsearch" is forbidden: unable to validate against any security context constraint: [provider restricted: .spec.containers[0].securityContext.privileged: invalid value 'true', Details: Privileged containers are not allowed provider restricted: .spec.containers[0].securityContext.VolumeMounts: invalid value 'varlog', Details: Host Volumes are not allowed to be used provider restricted: .spec.containers[0].securityContext.VolumeMounts: invalid value 'varlibdockercontainers', Details: Host Volumes are not allowed to be used]
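For reference, the pieces of the manifest that this error is complaining about look roughly like this (a sketch reconstructed from the error text, not the actual file): the restricted SCC rejects both the privileged flag and the hostPath volumes.

```yaml
# Sketch of the fluentd-elasticsearch pod spec parts rejected by the restricted SCC
spec:
  containers:
  - name: fluentd-elasticsearch
    securityContext:
      privileged: true          # restricted SCC forbids privileged containers
    volumeMounts:
    - name: varlog
      mountPath: /var/log
    - name: varlibdockercontainers
      mountPath: /var/lib/docker/containers
  volumes:
  - name: varlog
    hostPath:
      path: /var/log            # hostPath volumes are also disallowed by restricted
  - name: varlibdockercontainers
    hostPath:
      path: /var/lib/docker/containers
```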



From: ewolinet redhat com
Date: Mon, 26 Oct 2015 09:43:33 -0500
Subject: Re: aggregating container logs
To: lorenz vanthillo outlook com
CC: users lists openshift redhat com


This should just be a path on your system.
We're specifying this path to tell the Kubelet where to look for our fluentd.yaml file; a little further down, the guide instructs you to create the yaml file within that directory.
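The directory typically does not exist by default on an Origin install, so (assuming the default layout used in this thread) you can simply create it yourself:

```shell
# Create the static-pod manifest directory referenced by node-config.yaml
mkdir -p /var/lib/origin/openshift.local.manifests
```

fluentd.yaml then goes inside this directory, and the node service picks it up on restart.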


On Mon, Oct 26, 2015 at 9:18 AM, Lorenz Vanthillo <lorenz vanthillo outlook com> wrote:
I'm trying to aggregate container logs using ElasticSearch.
I have to specify 'openshift.local.manifests' in my node-config.yaml.
The problem is I can't find it on OpenShift Origin. What's the path to this directory?




_______________________________________________
users mailing list
users lists openshift redhat com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users







--
Eric Wolinetz

Red Hat Engineering | OpenShift
ewolinet redhat com
IRC: ewolinetz
C: (847) 641-0143
