I have a compose file locally. How can I run the bundle of containers on a remote host, like docker-compose up -d with DOCKER_HOST=?
Yet another possibility I discovered recently is controlling a remote Docker Unix socket via an SSH tunnel (credits to https://medium.com/@dperny/forwarding-the-docker-socket-over-ssh-e6567cfab160 where I learned about this approach).
You need to be able to SSH into the target machine. Passwordless, key-based access is preferred for security and convenience; you can learn how to set this up e.g. here: https://askubuntu.com/questions/46930/how-can-i-set-up-password-less-ssh-login
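For reference, a minimal setup for key-based access could look like this (user@someremote is a placeholder for your own remote):
ssh-keygen -t ed25519                 # generate a key pair, if you do not have one already
ssh-copy-id user@someremote           # install the public key on the remote machine
ssh user@someremote 'echo connected'  # should now succeed without a password prompt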
Besides, some sources mention that forwarding Unix sockets via SSH tunnels is only available starting from OpenSSH v6.7 (run ssh -V to check); I did not try this on older versions, though.
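To check which version you are on (the output below is only illustrative):
ssh -V
# e.g. "OpenSSH_8.9p1 Ubuntu-3ubuntu0.10, OpenSSL 3.0.2 15 Mar 2022" - anything >= 6.7 should be fine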
Now, create a new SSH tunnel between a local socket file and the Docker Unix socket on the remote machine:
ssh -nNT -L $(pwd)/docker.sock:/var/run/docker.sock user@someremote
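This command keeps running in the foreground. Optionally, you can check from a second terminal that the forwarded socket responds before pointing Docker at it (this assumes curl >= 7.40, which added --unix-socket):
ls -l "$(pwd)"/docker.sock                                      # the local socket file should exist now
curl --unix-socket "$(pwd)"/docker.sock http://localhost/_ping  # the remote Docker API should answer "OK"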
Alternatively, you can bind to a local port instead of a file location. Make sure the port is open for connections and not already in use.
ssh -nNT -L localhost:2377:/var/run/docker.sock user@someremote
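To check beforehand whether the example port is already taken (ss ships with most Linux distributions; lsof -i :2377 works similarly):
ss -ltn | grep 2377 || echo "port 2377 is free"   # a listener shows up here if the port is taken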
Leave that terminal open and open a second one. In there, make your Docker client talk to the newly created tunnel socket instead of your local Docker Unix socket.
If you bound to a file location:
export DOCKER_HOST=unix://$(pwd)/docker.sock
If you bound to a local port (example port as used above):
export DOCKER_HOST=localhost:2377
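You can verify that the client now talks to the remote engine (the Server section and the reported hostname should be those of the remote machine):
docker version                     # the "Server" part is answered by the remote engine
docker info --format '{{.Name}}'   # prints the hostname of the remote Docker host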
Now run some Docker commands, e.g. docker ps, start a container, pull an image, etc. Everything will happen on the remote machine as long as the SSH tunnel is active.
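With DOCKER_HOST set like this, docker-compose talks to the remote engine as well, so the compose file can stay on your local machine; a minimal sketch (the project path is a placeholder):
cd /path/to/project    # directory containing your local docker-compose.yml
docker-compose up -d   # containers are created and started on the remote host
docker-compose ps      # status is reported by the remote engine
docker-compose down    # removes the stack on the remote host again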
To run local Docker commands again, press Ctrl+C in the first terminal to close the tunnel, then clean up:
rm -f "$(pwd)"/docker.sock
unset DOCKER_HOST
The great thing about this is that you save the hassle of copying docker-compose.yml files and other resources around or setting environment variables on a remote machine (which is difficult).
If you want to use this in a scripting context where an interactive terminal is not possible, there is a way to open and close the SSH tunnel in the background using the SSH ControlMaster and ControlPath options:
# constants
TEMP_DIR="$(mktemp -d -t someprefix_XXXXXX)"
REMOTE_USER=some_user
REMOTE_HOST=some.host
control_socket="${TEMP_DIR}"/control.sock
local_temp_docker_socket="${TEMP_DIR}"/docker.sock
remote_docker_socket="/var/run/docker.sock"
# open the SSH tunnel in the background - thanks to ExitOnForwardFailure, ssh only
# forks into the background once the tunnel is established, and fails otherwise
ssh -f -n -M -N -T \
-o ExitOnForwardFailure=yes \
-S "${control_socket}" \
-L "${local_temp_docker_socket}":"${remote_docker_socket}" \
"${REMOTE_USER}"@"${REMOTE_HOST}"
# point the local Docker client at the forwarded remote socket
export DOCKER_HOST="unix://${local_temp_docker_socket}"
# do some business on remote host
docker ps -a
# close the tunnel and clean up
ssh -S "${control_socket}" -O exit "${REMOTE_HOST}"
rm -f "${local_temp_docker_socket}" "${control_socket}"
unset DOCKER_HOST
# do business on localhost again
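If the script may fail halfway through, it can help to register the teardown as an EXIT trap right after the tunnel is opened, so it always runs; a possible sketch (the cleanup function is my own addition, variable names as above):
cleanup() {
    ssh -S "${control_socket}" -O exit "${REMOTE_HOST}" 2>/dev/null || true
    rm -f "${local_temp_docker_socket}" "${control_socket}"
    rmdir "${TEMP_DIR}" 2>/dev/null || true
    unset DOCKER_HOST
}
trap cleanup EXIT    # runs whenever the script exits, successfully or not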