I wrote a Go application that runs in each of my Docker containers. The containers communicate with each other using protobufs over TCP and UDP, and I use HashiCorp's memberlist library.
docker stats reports the memory usage numbers from cgroups. (Refer: https://docs.docker.com/engine/admin/runmetrics/)
If you read the "outdated but useful" documentation (https://www.kernel.org/doc/Documentation/cgroup-v1/memory.txt), it says:
5.5 usage_in_bytes
For efficiency, as other kernel components, memory cgroup uses some optimization to avoid unnecessary cacheline false sharing. usage_in_bytes is affected by the method and doesn't show 'exact' value of memory (and swap) usage, it's a fuzz value for efficient access. (Of course, when necessary, it's synchronized.) If you want to know more exact memory usage, you should use RSS+CACHE(+SWAP) value in memory.stat(see 5.2).
Both the page cache and RSS are included in the usage_in_bytes number, so if the container does file I/O, the memory usage stat will increase. However, when usage hits the configured limit, the kernel reclaims memory that is not actually needed, such as unused cache. Accordingly, once I added a memory limit to my container, I could observe that memory was reclaimed and reused whenever the limit was hit. Container processes are only killed when there is no memory left to reclaim and an OOM error occurs.

For anyone concerned about the numbers shown by docker stats, the easy way is to check the detailed stats that cgroups expose at /sys/fs/cgroup/memory/docker/<container_id>/. The memory.stat and other memory.* files there show all the memory metrics in detail.
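If you want the "more exact" figure from a program instead of reading the files by hand, here is a minimal Go sketch along those lines. It assumes the cgroup v1 layout and uses a placeholder container ID in the path; it simply sums rss + cache (+ swap) from memory.stat, as the kernel documentation suggests:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

func main() {
	// Placeholder path: substitute your full container ID (cgroup v1 layout assumed).
	path := "/sys/fs/cgroup/memory/docker/<container_id>/memory.stat"

	f, err := os.Open(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	// memory.stat is a list of "name value" pairs, one per line.
	stats := map[string]uint64{}
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) != 2 {
			continue
		}
		if v, err := strconv.ParseUint(fields[1], 10, 64); err == nil {
			stats[fields[0]] = v
		}
	}
	if err := scanner.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// "More exact" usage per the kernel docs: RSS + CACHE (+ SWAP).
	exact := stats["rss"] + stats["cache"] + stats["swap"]
	fmt.Printf("rss=%d cache=%d swap=%d total=%d bytes\n",
		stats["rss"], stats["cache"], stats["swap"], exact)
}
```

Run it on the Docker host with your full container ID substituted into the path.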
If you want to limit the resources used by a container from the "docker run" command, you can do so by following this reference: https://docs.docker.com/engine/admin/resource_constraints/
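For example, a minimal invocation (the image and container names here are placeholders) that caps the container at 32 MB via the -m/--memory flag:

```
docker run -m 32m --name my-service my-image
```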
Since I am using docker-compose, I did it by adding a line in my docker-compose.yml file under the service I wanted to limit:
mem_limit: 32m
where m stands for megabytes.
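For reference, a minimal docker-compose.yml sketch (the service and image names are placeholders) showing where that line sits in the version 2 compose format; in the version 3 format, limits moved under deploy.resources:

```yaml
version: "2"
services:
  my-service:           # placeholder service name
    image: my-image     # placeholder image
    mem_limit: 32m      # hard memory limit for this service (32 megabytes)
```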