My Tomcat container needs data that has to be well protected, e.g. passwords for database access and certificates and keys for Single Sign-On to other systems.
I can use `-e` or `--env-file` to pass secret data to a container, but this can be discovered with `docker inspect` (`--env-file` also exposes every entry of the file in `docker inspect`).
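For example, anything passed with `-e` is readable by anyone who can call `docker inspect` (image and variable names below are placeholders):

```bash
# Start a container with a secret passed as an environment variable
docker run -d --name tomcat -e DB_PASSWORD=supersecret tomcat

# Anyone with access to the Docker daemon can read it back
docker inspect --format '{{.Config.Env}}' tomcat
# => [DB_PASSWORD=supersecret PATH=... ...]
```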
Another approach is to link a data container that mounts the secret data into my containers. This would work nicely with test and production servers having different secrets, but it creates a dependency of the containers on my specific servers.
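For instance, a classic data-container setup could look roughly like this (container names and file names are placeholders):

```bash
# Create a data container that only holds the secrets volume
docker create -v /secrets --name tomcat-secrets busybox

# Copy the secret material into it from the host
docker cp ./db-password.txt tomcat-secrets:/secrets/
docker cp ./sso-keystore.jks tomcat-secrets:/secrets/

# Mount the volume into the application container
docker run -d --name tomcat --volumes-from tomcat-secrets tomcat
```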
Update January 2017
Docker 1.13 now has the `docker secret` command, used with docker swarm.
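A minimal usage sketch (service and secret names are examples; the secret is stored encrypted in the swarm's Raft log and mounted under /run/secrets/ inside the service's containers):

```bash
# docker secret requires swarm mode
docker swarm init

# Store the secret in the swarm
printf 'supersecret' | docker secret create db_password -

# Grant a service access to it; it appears as /run/secrets/db_password
docker service create --name tomcat --secret db_password tomcat
```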
See also "Why is ARG
in a DOCKERFILE
not recommended for passing secrets?".
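As a rough illustration of the problem with ARG (the Dockerfile and token below are made up): a build argument used by a RUN step is recorded in the image metadata and can be recovered with `docker history`.

```bash
cat > Dockerfile.argdemo <<'EOF'
FROM alpine:3.18
ARG API_TOKEN
RUN echo "using token $API_TOKEN" > /dev/null
EOF

docker build -f Dockerfile.argdemo --build-arg API_TOKEN=supersecret -t argdemo .

# The plain-text value is preserved in the layer history
docker history --no-trunc argdemo | grep supersecret
```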
Original answer (Sept 2015)
The notion of a docker vault, alluded to by Adrian Mouat in his previous answer, was actively discussed in issue 1030 (the discussion continues in issue 13490).
For now it was rejected as being out of scope for Docker, but the discussion also included the following approach:
We've come up with a simple solution to this problem: a bash script that, once executed through a single RUN command, downloads private keys from a local HTTP server, executes a given command, and deletes the keys afterwards.
Since we do all of this in a single RUN, nothing gets cached in the image. Here is how it looks in the Dockerfile:
```dockerfile
RUN ONVAULT npm install --unsafe-perm
```
Our first implementation around this concept is available at dockito/vault.
To develop images locally we use a custom development box that runs the Dockito Vault as a service.
The only drawback is that it requires the HTTP server to be running, so there are no Docker Hub automated builds.
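To illustrate the idea, a minimal sketch of such a wrapper script might look like the following; the script name, key path, and vault URL are assumptions, not the actual dockito/vault implementation:

```bash
#!/usr/bin/env bash
# Fetch a private key from a local HTTP vault, run the wrapped command,
# then remove the key again, all within one RUN so no layer keeps it.
set -e

VAULT_URL="${VAULT_URL:-http://172.17.0.1:14242/ONVAULT}"  # assumed host-side vault address

mkdir -p /root/.ssh
curl -sf "$VAULT_URL/id_rsa" -o /root/.ssh/id_rsa
chmod 600 /root/.ssh/id_rsa

"$@"                      # e.g. npm install --unsafe-perm

rm -f /root/.ssh/id_rsa   # delete the key before the layer is committed
```

Because the download, the wrapped command, and the cleanup all happen inside one RUN instruction, the key never survives into a committed layer or the build cache.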