NGINX and multiple docker-compose

Submitted by ≡放荡痞女 on 2020-04-08 09:36:07

Question


If I want to set up NGINX with my Docker containers, one option is to define the NGINX instance in my docker-compose.yml and link the NGINX container to all application containers.

The drawback of this approach, however, is that the docker-compose.yml becomes server-level, since only one NGINX container can expose port 80/443 to the internet.

What I am interested in, is to be able to define several docker-compose.yml on the same server, but still easily expose the public-facing containers in each compose file via a single server-specific NGINX container.

I feel this should be pretty easy, but I haven't been able to find a good resource or example for this.


Answer 1:


First, you need to create a network for Nginx and the proxied containers:

docker network create nginx_network

Next, run the Nginx container with a compose file like this:

services:
  nginx:
    image: your_nginx_image
    ports:
      - "80:80"
      - "443:443"
    networks:
      - nginx_network
networks:
  nginx_network:
    external: true
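With the shared network in place, the proxy stack can be brought up on its own. A minimal sketch, assuming the compose file above is saved as nginx/docker-compose.yml (that path is an assumption):

```shell
# Create the shared network once per host (the command fails harmlessly if it already exists)
docker network create nginx_network || true

# Start the proxy stack; "nginx/docker-compose.yml" is a hypothetical path
docker compose -f nginx/docker-compose.yml up -d
```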

After that, you can run the proxied containers:

services:
  webapp1:
    image: ...
    container_name: mywebapp1
    networks:
      - nginx_network      #the proxy and the app must share this network
      - webapp1_db_network #additional networks can be used for app internals
  database:
    image: ...
    networks:
      - webapp1_db_network
networks:
  nginx_network:
    external: true
  webapp1_db_network: ~ #this network stays internal and won't be accessible from outside

Also, to make this work you need to configure Nginx properly:

server {
    listen 80;
    server_name your_app.example.com;

    #Docker DNS
    resolver 127.0.0.11;

    location / {
        #trick to keep Nginx from resolving the container's hostname at startup
        set $docker_host "mywebapp1";
        proxy_pass http://$docker_host:8080;
    }
}

You need to tell Nginx to use Docker's embedded DNS (127.0.0.11), so that it can resolve containers by name.

Note that if you start the Nginx container before the others, Nginx will try to resolve the other containers' hostnames and fail, because those containers are not running yet. The workaround is to put the hostname in a variable: with it, Nginx does not try to resolve the host until it receives a request.

With this combination you can have nginx always up, while starting and stopping proxied applications independently.
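For example, each application can live in its own directory with its own compose file and be cycled without touching the proxy (the paths below are illustrative, not from the original answer):

```shell
# Start two independent app stacks behind the always-running proxy
docker compose -f webapp1/docker-compose.yml up -d
docker compose -f webapp2/docker-compose.yml up -d

# Take one app down; Nginx stays up and keeps serving the others
docker compose -f webapp1/docker-compose.yml down
```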

UPD: If you want a more dynamic solution, you can modify the Nginx config as follows:

server {
    listen 80;
    resolver 127.0.0.11;

    #a regex server_name that captures the subdomain into a variable
    server_name ~^(?<webapp>.+)\.example\.com;

    location / {
        #use the captured variable to route the request to the matching container
        proxy_pass http://$webapp:8080;
    }
}

With this configuration, a request to webapp1.example.com is passed to the container "webapp1", webapp2.example.com to "webapp2", and so on. All you need to do is add the DNS records and run the app containers under the right names.
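Before the DNS records exist, the wildcard routing can be smoke-tested from the host by setting the Host header explicitly (the hostnames here are assumptions matching the config above):

```shell
# The Host header selects the backend container via the server_name regex
curl -H "Host: webapp1.example.com" http://localhost/
curl -H "Host: webapp2.example.com" http://localhost/
```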



Source: https://stackoverflow.com/questions/48076605/nginx-and-multiple-docker-compose
