Thanks Luke, extremely enlightening. Now, can you help list the logs that are actually forwarded by the fluentd pods on worker nodes? For example, you mentioned all of the node's logs, container logs, and service logs. Can you please clarify the differences?

Many thanks,
/Mo

On 12 March 2018 at 23:18, Luke Meyer <lmeyer redhat com> wrote:

Although you can set up the fluentd instances to send logs either to the integrated storage or an external ES, it will be tricky to do both with the same deployment. They are deployed with a daemonset. What you can do is copy the daemonset and configure both as you like (with different secrets/configmaps), using node selectors and node labels to have the right ones land on the right nodes. However, that will direct *all* of the node's logs; I don't think there's an easy way to have the container logs go to one destination and the service logs to another without more in-depth configuration of fluentd. You do have complete control over its config, though, if you really want, by modifying the configmap.

On Thu, Mar 8, 2018 at 4:15 AM, Mohamed A. Shahat <mols eg gmail com> wrote:

Thanks Aleks for the feedback. This looks promising.

We're using Enterprise OCP. Does that make a difference at that level of discussion?

For the external Elasticsearch instance configs you referred to, is it possible for both to co-exist? Some worker nodes sending logs to the internal ES, and other worker nodes sending logs to the external one?

Many thanks,
/Mo

On 7 March 2018 at 23:27, Aleksandar Lazic <openshift-dev me2digital com> wrote:

Hi.
On 07.03.2018 at 23:47, Mohamed A. Shahat wrote:
> Hi All,
> My first question here, so I am hoping at least for some
> acknowledgement!
> * OCP v3.7
Do you use the enterprise version or the open-source one?
> * Several Worker Nodes
> * Few Workload types
> * One Workload, let's call it WorkloadA is planned to have dedicated
> Worker Nodes.
> * For WorkloadA, I'd like to send/route the container logs to an
>   external EFK/ELK stack other than the one that gets set up
>   with OCP.
> * For WorkloadA, an ES cluster already exists; we would like to
>   reuse it.
> * There is an impression that the ES cluster that comes with OCP
> might not necessarily scale if the team operating OCP does not
> size it well
> 1. Has this been done before? Yes / No? Any comments?
As you may know, handling logs properly is not an easy task.
There are some serious questions to answer, such as the following:
* How long should the logs be preserved?
* How much log data is written?
* How fast are the logs written?
* What is the limit of the network?
* What is the limit of the remote ES?
* ...and many, many more.
> 1. Is there any way, with the fluentd pods or otherwise, to route the
>    container logs of specific workloads/pods to an external ES cluster?
> 2. If not, I'm willing to deploy my own fluentd pods. What do I lose by
>    excluding the WorkloadA worker nodes from the OCP fluentd pods? For
>    example, I don't want to lose any Operations / OCP-related / worker
>    node-related logs going to the embedded ES cluster; all I need is for
>    the container logs of WorkloadA to go to another ES cluster.
Have you looked at the following doc part?
est/install_config/aggregate_logging.html#sending-logs-to-an-external-elasticsearch-instance
tainer-platform/3.7/install_config/aggregate_logging.html#sending-logs-to-an-external-elasticsearch-instance
As described in the doc, you can send the collected fluentd logs to an
external ES cluster.
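For reference, the doc section linked above works by pointing the fluentd collectors' environment variables at the external instance. A minimal sketch follows; the hostname is a placeholder, and the exact variable names (ES_HOST/ES_PORT for application logs, OPS_HOST/OPS_PORT for operations logs) should be verified against the 3.7 aggregate logging docs for your deployment:

```shell
# Point the fluentd daemonset at an external Elasticsearch.
# es.example.com is a placeholder; 9200 is the usual ES HTTP port.
oc set env daemonset/logging-fluentd \
    ES_HOST=es.example.com \
    ES_PORT=9200 \
    OPS_HOST=es.example.com \
    OPS_PORT=9200

# The daemonset re-rolls its pods, which pick up the new destination.
```

Note that the external instance must accept the same TLS client certificates fluentd is configured with, or the secrets have to be adjusted as well.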
You can find the source of the openshift logging solution in this repo.
> Looking forward to hearing from you,
dev mailing list
dev lists openshift redhat com
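Luke's daemonset-copy approach above can be sketched roughly as follows. All node and label names are hypothetical, and the copied daemonset's configmap/secret still need to be edited to point at the external ES; this is an outline under those assumptions, not a tested recipe:

```shell
# Label the dedicated WorkloadA nodes so a second fluentd daemonset
# can be scheduled onto them (names are placeholders).
oc label node worker-a1 worker-a2 logging-destination=external

# Copy the existing fluentd daemonset and edit the copy: give it a new
# metadata.name, its own configmap/secret volumes, and a
# spec.template.spec.nodeSelector matching the label above.
oc get ds/logging-fluentd -o yaml > fluentd-external.yaml
# (edit fluentd-external.yaml by hand, then create it)
oc create -f fluentd-external.yaml

# Give the original daemonset a nodeSelector that excludes the labeled
# nodes, so each node runs exactly one collector.
```

As Luke notes, this routes *all* of a node's logs per destination; splitting container logs from service logs on the same node would require deeper changes to the fluentd configmap itself.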