Logging with Docker and Kubernetes: logs larger than 16K are split up


Question


I am using Docker version 17.12.1-ce and Kubernetes version v1.10.11.

My application prints its logs to the console in JSON format. One of the fields is stackTrace, which can contain a very long stack trace.

The problem is that a long log message gets split into two messages: if I look at /var/lib/docker/containers/ ... .log I see two entries. I read that this is done for security reasons, but I don't really understand what I can do about it.

Should I truncate my stackTrace? Or customize the size limit? Is that permitted, and is it the correct way to deal with this issue?

P.S. I am using the json-file logging driver.


Answer 1:


This is expected behavior. Docker chunks log messages at 16K because it uses a 16K buffer for log messages. If a message exceeds 16K, the json-file logger splits it, and it has to be merged again at the endpoint.

Docker does mark each chunk as a partial message, but it is up to the logging driver/service to re-assemble them.
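To make the "merged at the endpoint" part concrete, here is a minimal Python sketch that stitches the chunks of one container's json-file log back together. It assumes the usual json-file record shape ({"log": ..., "stream": ..., "time": ...}) and that a partial chunk can be recognised by the missing trailing newline in its log field; both details are assumptions to verify against your Docker version, not a documented contract.

```python
import json

def reassemble(path):
    """Yield complete log messages from a Docker json-file container log.

    Assumption: when Docker splits a long line into 16K chunks, each chunk
    becomes its own JSON record and only the final chunk's "log" value ends
    with a newline.
    """
    buffer = ""
    with open(path, encoding="utf-8") as f:
        for raw in f:
            record = json.loads(raw)
            buffer += record["log"]
            if buffer.endswith("\n"):   # final (or only) chunk of a message
                yield buffer.rstrip("\n")
                buffer = ""
    if buffer:                          # trailing partial chunk, if any
        yield buffer

# Hypothetical usage against one container's log file:
# for line in reassemble("/var/lib/docker/containers/<id>/<id>-json.log"):
#     event = json.loads(line)          # the application's own JSON payload
#     print(event.get("stackTrace", ""))
```

In practice this is exactly the job a logging agent (fluentd, Fluent Bit, filebeat) does for you, as discussed below.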

The Docker docs list the different supported logging drivers.

With your architecture (large stack traces), the json-file driver might not be the best option.

I've also found a thread on GitHub that adds more information on the topic (along with a lot of off-topic discussion).

Edit.

The Kubernetes Logging Architecture documentation says that everything a containerized application writes to stdout and stderr is handled and redirected somewhere by the container engine.

The Docker container engine redirects those two streams to a logging driver, which in Kubernetes is configured to write to a file in JSON format.

Note: the Docker json-file logging driver treats each line as a separate message. Another peculiarity is that the Docker logging driver has no direct support for multi-line messages, so you need to handle multi-line messages at the logging agent level or higher.
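Since the driver treats each line as one message, the application side can at least make sure a stack trace never spans multiple lines by folding it into a single JSON field, which is what the question already describes. A minimal Python sketch (the stackTrace field name and the rest of the record layout are just illustrative):

```python
import json
import sys
import traceback

def log_exception(message):
    """Emit one JSON log line with the whole traceback folded into one field."""
    record = {
        "level": "ERROR",
        "message": message,
        # traceback.format_exc() contains newlines, but json.dumps() escapes
        # them, so the event still lands on a single stdout line.
        "stackTrace": traceback.format_exc(),
    }
    print(json.dumps(record), file=sys.stdout, flush=True)

try:
    1 / 0
except ZeroDivisionError:
    log_exception("division failed")
```

This keeps the event on one line, but a very large stackTrace will still be split by the 16K buffer, so the re-assembly still has to happen in the logging agent.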

I don't really understand what I can do about it?

It's a hard-coded limit on the Docker side. There is another good discussion that ends up with the idea of using filebeat/fluentd.

It looks like the Docker_Mode option in Fluent Bit might help, but I'm not sure exactly how you are parsing the container logs.
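For reference, Docker_Mode is an option of Fluent Bit's tail input. A rough sketch of what enabling it might look like when tailing the container log symlinks on a node (the Path, Tag and docker parser are assumptions for a typical Kubernetes deployment; check them against your Fluent Bit version and setup):

```
[INPUT]
    Name          tail
    Tag           kube.*
    Path          /var/log/containers/*.log
    Parser        docker
    Docker_Mode   On
```

With Docker_Mode on, Fluent Bit is supposed to re-join the lines that Docker split before the records are forwarded.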

Should I truncate my stackTrace?

That depends on whether you need the traces in your logs or not.

Or customize the size?

I have searched for some kind of "knob" to adjust on the Docker side and can't find one as of now.

It looks like the only solution is to use a log-processing tool that can combine the split lines.



Source: https://stackoverflow.com/questions/61360759/logging-with-docker-and-kubernetes-logs-more-than-16k-split-up
