
Re: split long log records



Understood. Thanks, Peter


On 2017-06-19 17:48, Peter Portante wrote:
Hi Andre,

That 16 KB chunk size is hard-coded in Docker. For background, see:

  * https://bugzilla.redhat.com/show_bug.cgi?id=1422008, "[RFE] Fluentd
    handling of long log lines (> 16KB) split by Docker and indexed into
    several ES documents"
    * And the reason for the original 16 KB limit:
      https://bugzilla.redhat.com/show_bug.cgi?id=1335951, "heavy logging
      leads to Docker daemon OOM-ing"

The processor that reads the json-file records and forwards them to
Graylog will most likely need to handle reconstruction of those log
lines itself, with some upper bound of its own (since a container is
not required to emit newlines on stdout or stderr).
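
To make that concrete, here is a minimal sketch (Python, purely
illustrative) of what such a processor might do. It assumes the
json-file behaviour shown in the quoted message below, where only the
final chunk of a long line ends with a newline, so partial chunks are
buffered until a newline arrives or an upper bound is hit. The 1 MiB
cap and the function name are choices made for the sketch, not
anything Docker or Graylog defines:

import json

# Upper bound on a reassembled line; needed because a container is not
# required to ever emit a newline on stdout/stderr.
MAX_RECORD = 1024 * 1024  # 1 MiB, arbitrary for this sketch

def reassemble(json_file_lines):
    """Yield complete log records from a Docker json-file stream.

    Each input line is one JSON record holding at most 16 KB of "log"
    payload; only the last chunk of a long line ends with a newline.
    """
    buffers = {}  # partial text per stream ("stdout" / "stderr")
    for raw in json_file_lines:
        record = json.loads(raw)
        stream = record.get("stream", "stdout")
        chunk = record["log"]
        text = buffers.get(stream, "") + chunk
        if chunk.endswith("\n") or len(text) >= MAX_RECORD:
            # Complete line (or forced flush): emit and reset the buffer.
            buffers[stream] = ""
            yield {"log": text, "stream": stream, "time": record["time"]}
        else:
            buffers[stream] = text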

Regards,

-peter

On Mon, Jun 19, 2017 at 11:43 AM, Andre Esser
<andre esser voidbridge com> wrote:
We use Graylog for log visualisation. However, it turns out that's not the
culprit: log entries in the pod's log file are already split into 16 KB
chunks, like this:

{"log":"The quick brown[...]jumps ov","stream":"stdout",\
"time":"2017-06-19T15:27:33.130524954Z"}

{"log":"er the lazy dog.\n","stream":"stdout",\
"time":"2017-06-19T15:27:33.130636562Z"}

So, to cut a long story short, is there any way to increase the size limit
before a log record gets split into two JSON records?
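
FWIW, the only signal distinguishing the two chunks above seems to be the
trailing newline: the first chunk's "log" value stops mid-word, and only
the final one ends in "\n". So presumably a consumer could detect partial
chunks with something like:

import json

def is_final_chunk(raw_record):
    # Heuristic based on the records above: only the last chunk of a
    # split line carries the terminating newline.
    return json.loads(raw_record)["log"].endswith("\n")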


