Without a Docker container, it is straightforward to run an X11 program on a remote server using SSH X11 forwarding (ssh -X). I have tried to get the same thing to work when the application runs inside a Docker container on the remote server.
In my case, I sit at "remote" and connect to a "docker_container" on "docker_host":
remote --> docker_host --> docker_container
To make debugging scripts easier with VS Code, I installed SSHD in the "docker_container", listening on port 22, mapped to another port (say 1234) on the "docker_host".
So I can connect directly with the running container via ssh (from "remote"):
ssh -Y -p 1234 appuser@docker_host.local
(where appuser is the username within the "docker_container". I am working on my local subnet, so I can reference my server via the .local mapping. For external IPs, just make sure your router forwards this port to this machine.)
This creates a connection directly from my "remote" to "docker_container" via ssh.
remote --> (ssh) --> docker_container
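For reference, here is a minimal sketch of how that port mapping can be set up when starting the container (my_image is a placeholder for your image; adjust names to your setup):
# Publish the container's sshd (port 22) on port 1234 of the docker_host:
docker run -d --name docker_container -p 1234:22 my_image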
Inside the "docker_container", I installed sshd
with
sudo apt-get install openssh-server
(you can add this to your Dockerfile to install at build time).
To allow X11 forwarding to work, edit the /etc/ssh/sshd_config file as follows:
X11Forwarding yes
X11UseLocalhost no
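If you prefer to script these edits, a sketch using sed (assuming a stock Debian/Ubuntu sshd_config where the options are present but commented out; drop sudo if running as root, e.g. in a Dockerfile RUN step):
sudo sed -i 's/^#\?X11Forwarding.*/X11Forwarding yes/' /etc/ssh/sshd_config
sudo sed -i 's/^#\?X11UseLocalhost.*/X11UseLocalhost no/' /etc/ssh/sshd_config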
Then restart sshd within the container. Do this from a shell executed into the container from the "docker_host" (docker exec -ti docker_container bash), rather than while connected to the "docker_container" via ssh.
Restart sshd:
sudo service ssh restart
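This can also be done in one step from the "docker_host", for example:
# Restart sshd as root inside the running container, no interactive shell needed:
docker exec -u root docker_container service ssh restart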
When you connect via ssh to the "docker_container", check the $DISPLAY environment variable. It should say something like:
appuser@3f75a98d67e6:~/data$ echo $DISPLAY
3f75a98d67e6:10.0
Test by running your favorite X11 graphics program from within the "docker_container" ssh session (for example, something that calls cv2.imshow()).
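A quick smoke test that needs none of your own code is xeyes or xclock from the x11-apps package (assuming it is not already in the image):
# Inside the ssh session to the container:
sudo apt-get install -y x11-apps
xeyes   # a window should appear on your "remote" display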
I figured it out. When you are connecting to a computer with SSH and using X11 forwarding, /tmp/.X11-unix is not used for the X communication and the part related to $XSOCK is unnecessary.
Instead, any X application uses the hostname in $DISPLAY, typically "localhost", and connects using TCP. This is then tunneled back to the SSH client. When using "--net host" for the Docker container, "localhost" is the same for the container as for the Docker host, and therefore it works fine.
When "--net host" is not specified, Docker uses the default bridge network mode. This means that "localhost" refers to something different inside the container than on the host, and X applications inside the container will not be able to reach the X server by referring to "localhost". To solve this, replace "localhost" with the actual IP address of the host. This is usually "172.17.0.1" or similar; check "ip addr" for the "docker0" interface.
This can be done with a sed replacement:
DISPLAY=`echo $DISPLAY | sed 's/^[^:]*\(.*\)/172.17.0.1\1/'`
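For example, a forwarded display of localhost:10.0 becomes 172.17.0.1:10.0. The bridge address itself can be confirmed on the Docker host:
# Look for a line such as "inet 172.17.0.1/16" (the address may differ on your system):
ip addr show docker0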
Additionally, the SSH server is commonly not configured to accept remote connections to this X11 tunnel. This must then be changed by editing /etc/ssh/sshd_config (at least in Debian) and setting:
X11UseLocalhost no
then restart the SSH server and log in to the server again with "ssh -X".
This is almost it, but there is one complication left. If any firewall is running on the Docker host, the TCP port associated with the X11-tunnel must be opened. The port number is the number between the : and the . in $DISPLAY added to 6000.
To get the TCP port number, you can run:
X11PORT=`echo $DISPLAY | sed 's/^[^:]*:\([^\.]\+\).*/\1/'`
TCPPORT=`expr 6000 + $X11PORT`
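For example, with a forwarded display of localhost:10.0:
echo $DISPLAY   # localhost:10.0
echo $X11PORT   # 10
echo $TCPPORT   # 6010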
Then (if using ufw as firewall), open up this port for the Docker containers in the 172.17.0.0 subnet:
ufw allow from 172.17.0.0/16 to any port $TCPPORT proto tcp
All the commands together can be put into a script:
XSOCK=/tmp/.X11-unix
XAUTH=/tmp/.docker.xauth
# Copy the X11 cookie into a file the container can read, rewriting the address
# family to the "ffff" wildcard so it matches any hostname.
xauth nlist $DISPLAY | sed -e 's/^..../ffff/' | sudo xauth -f $XAUTH nmerge -
sudo chmod 777 $XAUTH
# Derive the TCP port of the X11 tunnel (6000 + display number) and open it.
X11PORT=`echo $DISPLAY | sed 's/^[^:]*:\([^\.]\+\).*/\1/'`
TCPPORT=`expr 6000 + $X11PORT`
sudo ufw allow from 172.17.0.0/16 to any port $TCPPORT proto tcp
# Point DISPLAY at the docker0 bridge address instead of localhost.
DISPLAY=`echo $DISPLAY | sed 's/^[^:]*\(.*\)/172.17.0.1\1/'`
sudo docker run -ti --rm -e DISPLAY=$DISPLAY -v $XAUTH:$XAUTH \
    -e XAUTHORITY=$XAUTH name_of_docker_image
(This assumes you are not root and therefore need to use sudo.)
Instead of sudo chmod 777 $XAUTH, you could run:
sudo chown my_docker_container_user $XAUTH
sudo chmod 600 $XAUTH
to prevent other users on the server from also being able to access the X server if they realize what the /tmp/.docker.xauth file is for.
I hope this makes it work properly for most scenarios.
If you set X11UseLocalhost no, you're allowing even external traffic to reach the X11 socket. That is, traffic directed to an external IP of the machine can reach the SSHD X11 forwarding. There are still two security mechanisms which might apply (firewall, X11 auth). Still, I'd prefer to leave a system-global setting alone when fiddling with a user- or even application-specific issue like this one.
Here's an alternative to changing X11UseLocalhost in the sshd config:
                                    +- docker container net ns -+
                                    |                           |
       172.17.0.1                   |   172.17.0.2              |
    +- docker0 ------ veth123@if5 --|-- eth0@if6                |
    |  (bridge)       (veth pair)   |   (veth pair)             |
    |                               |                           |
    |  127.0.0.1                    +---------------------------+
routing +- lo
    |      (loopback)
    |
    |  192.168.1.2
    +- ens33
       (physical host interface)
With the default X11UseLocalhost yes, sshd listens on 127.0.0.1 in the root network namespace. We need to get the X11 traffic from inside the docker network namespace to the loopback interface in the root net ns. The veth pair is connected to the docker0 bridge, and both ends can therefore talk to 172.17.0.1 without any routing. The three interfaces in the root net ns (docker0, lo and ens33) can communicate via routing.
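The addresses in the diagram can be checked on the Docker host (the physical interface will usually have a different name than ens33, and the addresses may differ):
ip addr show docker0   # bridge, typically 172.17.0.1/16
ip addr show lo        # loopback, 127.0.0.1/8
ip route               # shows the 172.17.0.0/16 route on docker0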
We want to achieve the following:
                                    +- docker container net ns -+
                                    |                           |
       172.17.0.1                   |   172.17.0.2              |
    +- docker0 -----< veth123@if5 --|-< eth0@if6 -----< xeyes   |
    |  (bridge)       (veth pair)   |   (veth pair)             |
    v                               |                           |
    |  127.0.0.1                    +---------------------------+
routing +- lo >------- sshd -+
           (loopback)        |
                             v
       192.168.1.2           |
       ens33 ------<---------+
       (physical host interface)
We can let the X11 application talk directly to 172.17.0.1 to "escape" the docker net ns. This is achieved by setting DISPLAY appropriately: export DISPLAY=172.17.0.1:10
                                    +- docker container net ns -+
                                    |                           |
       172.17.0.1                   |   172.17.0.2              |
       docker0 ------ veth123@if5 --|-- eth0@if6 -----< xeyes   |
       (bridge)       (veth pair)   |   (veth pair)             |
                                    |                           |
       127.0.0.1                    +---------------------------+
       lo
       (loopback)
       192.168.1.2
       ens33
       (physical host interface)
Now, we add an iptables rule to route from 172.17.0.1 to 127.0.0.1 in the root net ns:
iptables \
  --table nat \
  --insert PREROUTING \
  --proto tcp \
  --destination 172.17.0.1 \
  --dport 6010 \
  --jump DNAT \
  --to-destination 127.0.0.1:6010
sysctl net.ipv4.conf.docker0.route_localnet=1
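To verify the rule and the sysctl, and to remove the rule again later, something along these lines should work:
# Inspect the NAT PREROUTING chain and the route_localnet setting:
iptables --table nat --list PREROUTING --numeric --line-numbers
sysctl net.ipv4.conf.docker0.route_localnet
# Delete the rule again by its line number (e.g. rule 1):
# iptables --table nat --delete PREROUTING 1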
Maybe you can improve on this by only routing traffic from this container (veth end). Also, I'm not quite sure why route_localnet is needed, to be honest. It appears that 127/8 is considered a strange source/destination for packets and routing it is therefore disabled by default. You can probably also reroute traffic from the loopback interface inside the docker net ns to the veth pair, and from there to the loopback interface in the root net ns.
With the commands given above, we end up with:
                                    +- docker container net ns -+
                                    |                           |
       172.17.0.1                   |   172.17.0.2              |
    +- docker0 -----< veth123@if5 --|-< eth0@if6 -----< xeyes   |
    |  (bridge)       (veth pair)   |   (veth pair)             |
    v                               |                           |
    |  127.0.0.1                    +---------------------------+
routing +- lo
           (loopback)
       192.168.1.2
       ens33
       (physical host interface)
However, now we're trying to access the X11 server as 172.17.0.1:10. This won't find the entry in the X authority file (~/.Xauthority), which is usually something like <hostname>:10. Use Ruben's suggestion to add a new entry visible inside the docker container:
xauth add 172.17.0.1:10 . <cookie>
where <cookie> is the cookie set up by the SSH X11 forwarding, e.g. obtained via xauth list.
You might also have to allow ingress traffic to 172.17.0.1:6010 in your firewall.
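A rough sketch of those two steps (the display name 172.17.0.1:10 is taken from above):
# On the Docker host: list the authority entries and note the cookie
# (third column) of the entry for the SSH-forwarded display:
xauth list
# Inside the container: register that cookie under the new display name:
xauth add 172.17.0.1:10 . <cookie-from-above>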
You can also start an application from the host inside the docker container network namespace:
sudo nsenter --target=<pid of process in container> --net su - $USER <app>
Without the su, you'll be running as root. Of course, you can also use another container and share the network namespace:
sudo docker run --network=container:<other container name/id> ...
The X11 forwarding mechanism shown above applies to the entire network namespace (actually, to everything connected to the docker0 bridge). Therefore, it will work for any application inside the container network namespace.