This question is part of my continuing exploration of Docker and in some ways follows up on one of my earlier questions. I have now understood how one can get a full application…
@Bryan's answer is solid, particularly regarding the low overhead of a container that runs just one process.
That said, you should at least read the arguments at https://phusion.github.io/baseimage-docker/, which makes a case for running multiple processes in a container; without them, Docker provides little out of the box for things like process supervision (an init that reaps zombies), a syslog daemon, or cron jobs.
baseimage-docker runs an init process which fires up a few processes besides the main one in the container.
For some purposes this is a good idea, but be aware that running, for instance, a cron daemon and a syslog daemon in every container adds a bit more overhead. I expect that as the Docker ecosystem matures we'll see better solutions that don't require this.
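For reference, the general shape of such an image looks roughly like the Dockerfile below. This is only a sketch based on the baseimage-docker README; the image tag and the myapp service name are placeholders, not anything from the question.

```
# Sketch of a multi-process image built on baseimage-docker.
# <tag> and "myapp" are placeholders.
FROM phusion/baseimage:<tag>

# my_init is baseimage-docker's init: it reaps zombie processes and
# starts the runit-managed services (syslog and cron are built in).
CMD ["/sbin/my_init"]

# Add your own long-running process as a runit service: an executable
# /etc/service/<name>/run script that runs the process in the foreground.
RUN mkdir -p /etc/service/myapp
COPY myapp.sh /etc/service/myapp/run
RUN chmod +x /etc/service/myapp/run
```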
A container is basically a process. There is no technical issue with running 500 processes on a decent-sized Linux system, although they will have to share the CPU(s) and memory.
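You can see this for yourself on a Linux host: the process running inside a container shows up in the host's ordinary process table. A quick check (assuming the alpine image; any small image will do):

```
# Start a container whose only process is a sleep
docker run -d --name sleeper alpine sleep 300

# On the host, that same process is visible like any other
ps -ef | grep '[s]leep 300'

# Clean up
docker rm -f sleeper
```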
The cost of a container over a plain process is some extra kernel resources to manage namespaces, file systems and control groups, plus some management structures inside the Docker daemon, particularly to handle stdout and stderr.
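That stdout/stderr handling is what lets you retrieve each container's output separately later. A quick illustration (the container name is arbitrary):

```
# The daemon captures the container's stdout and stderr...
docker run -d --name worker alpine sh -c 'echo starting; echo oops >&2; sleep 60'

# ...and you can read or follow them per container
docker logs worker
docker logs -f worker   # stream ongoing output
```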
The namespaces are introduced to provide isolation, so that one container does not interfere with any others. If your groups of 5 containers form a unit that does not need this isolation, then you can share the network namespace using --net=container:<name>. There is no feature at present to share cgroups, AFAIK.
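For example (a sketch; the image and container names are arbitrary), start one container normally and attach the others to its network namespace, so they all share the same interfaces and can talk over localhost:

```
# The first container owns the network namespace
docker run -d --name app1 alpine sleep 3600

# Later containers join app1's network namespace instead of getting their own;
# they share its interfaces, IP address and localhost
docker run -d --name app2 --net=container:app1 alpine sleep 3600
docker run -d --name app3 --net=container:app1 alpine sleep 3600
```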
What is wrong with what you suggest: stdout and stderr will be intermingled for the five processes, so you can no longer tell (e.g. with docker logs) which process produced which output.
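A quick way to see the problem (a sketch; the loops and names are just stand-ins for real processes): run two chatty processes in one container and their output interleaves in a single stream, whereas one process per container keeps the streams separate:

```
# Two processes in one container: one combined, interleaved log stream
docker run -d --name combined alpine sh -c \
  'while true; do echo proc-a; sleep 1; done & while true; do echo proc-b; sleep 1; done'
docker logs combined   # proc-a and proc-b lines are mixed together

# One process per container: each keeps its own stream
docker run -d --name proc-a alpine sh -c 'while true; do echo tick; sleep 1; done'
docker run -d --name proc-b alpine sh -c 'while true; do echo tock; sleep 1; done'
docker logs proc-a     # only proc-a's output
docker logs proc-b     # only proc-b's output
```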