Docker best practices: single process for a container

Submitted by a 夏天 on 2021-01-27 04:37:53

Question


The Docker best practices guide states that:

"...you should only run a single process in a single container..."

Should Nginx and PHP-FPM run in separate containers? Or does that mean that micro service architectures only run one service or "app" in a container?

Having these services in a single container seems easier to deploy and maintain.


Answer 1:


Depending on the use case, you can run multiple processes inside a single container, although I wouldn't recommend it.

In some sense it is even simpler to run them in different containers. Keeping containers small, stateless, and focused on a single job makes them all easier to maintain. Let me describe my container workflow in a similar situation.

So:

  1. I have one container with nginx that is exposed to the outside world (:443, :80). At this level it is straightforward to manage the configuration, TLS certificates, load-balancer options, etc.
  2. One (or more) container(s) with the application; in this case, a php-fpm container with the app. The Docker image is stateless; the containers mount and share volumes for static files and so on. At this point you can destroy and re-create the application container at any time while keeping the load balancer up and running. You can also run multiple applications behind the same proxy (nginx), and managing one of them does not affect the others.
  3. One or more containers for the database... The same benefits apply.
  4. Redis, Memcache etc.
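
The layout above can be sketched as a docker-compose file. This is only an illustrative assumption of how the pieces fit together; the service names, images, and volume names are not from the original answer:

```yaml
# Hypothetical compose file mirroring the four-part structure above.
services:
  nginx:                            # 1. the only container exposed outside
    image: nginx:stable
    ports: ["80:80", "443:443"]
    volumes:
      - static:/var/www/static:ro   # shared static files
  app:                              # 2. php-fpm app, reachable only internally
    image: php:8-fpm
    volumes:
      - static:/var/www/static
  db:                               # 3. database with its own persistent volume
    image: mysql:8
    volumes:
      - dbdata:/var/lib/mysql
  cache:                            # 4. redis/memcache layer
    image: redis:7
volumes:
  static:
  dbdata:
```

Because only nginx publishes ports, the app, database, and cache stay reachable solely over the internal network, which is the isolation the answer describes.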

With this structure the deployment is modular: each "service" is separate and logically independent from the rest of the system.

As a side effect, in this particular case you get zero-downtime deployments (updates) of the application. The idea is simple: when you need to update, build a Docker image with the updated application, run a container from it, run all the tests and maintenance scripts, and if everything goes well, add the newly created container to the chain (load balancer) and gracefully kill the old one. That's it: the application is updated and users never notice.
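
The update flow just described can be sketched as a few shell commands. The image tag, container names, and test-script path here are hypothetical placeholders, not part of the original answer:

```shell
#!/bin/sh
set -e
# 1. Build an image with the updated application (tag is an example).
docker build -t myapp:v2 .
# 2. Run the new container alongside the old one.
docker run -d --name app_v2 myapp:v2
# 3. Run tests and maintenance scripts inside it (script name assumed).
docker exec app_v2 /app/run_tests.sh
# 4. Add app_v2 to the nginx upstream and reload the proxy config;
#    only then gracefully retire the old container.
docker stop app_v1 && docker rm app_v1
```

The key ordering is that the old container is stopped only after the load balancer already routes to the new one, which is what makes the swap invisible to users.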




Answer 2:


This means "process" in the Linux/Unix sense of the word. That said, nothing stops you from running multiple processes in a container; it's just not the recommended paradigm.
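
The one-process model is visible in how a container is started: with the exec-form CMD, that single process runs as PID 1 and receives signals (e.g. SIGTERM from `docker stop`) directly. A minimal sketch, using nginx purely as an example image:

```dockerfile
FROM nginx:stable
# Exec form: nginx itself becomes PID 1 inside the container and is
# stopped cleanly when Docker sends SIGTERM.
CMD ["nginx", "-g", "daemon off;"]
```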




Answer 3:


We have found that we can run multiple services using Supervisord. It keeps the architecture simple, requiring only an additional supervisord.conf file. For instance:

supervisord.conf

[supervisord]
nodaemon=true

[program:apache2]
command=/bin/bash -c "source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND"

[program:udpparser]
command=/bin/bash -c "exec /usr/bin/php -f /home/www-server/services/udp_parser.php"

From Dockerfile:

FROM ubuntu:14.04

RUN apt-get update && apt-get install -y apache2 supervisor php5 php5-mysql php5-cli

RUN mkdir -p /var/lock/apache2 /var/run/apache2 /var/log/supervisor

RUN a2enmod rewrite
RUN a2enmod ssl

COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf

ADD 000-default.conf /etc/apache2/sites-enabled/
ADD default-ssl.conf /etc/apache2/sites-enabled/
ADD apache2.conf /etc/apache2/
ADD www-server/ /home/www-server/

EXPOSE 80 443 30089

CMD ["/usr/bin/supervisord"]

As a best practice, we only do this when the services benefit from running together; all our other containers are stand-alone micro-services.
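
For comparison, the same pair could be split along the single-process guideline. A hypothetical compose sketch (the command, script path, and ports are taken from the Dockerfile above; the service names and build setup are assumed):

```yaml
# Hypothetical split of the supervisord example into two services
# built from the same image, each running one process.
services:
  web:
    build: .
    command: ["/bin/bash", "-c",
              "source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND"]
    ports: ["80:80", "443:443"]
  udpparser:
    build: .
    command: ["php", "-f", "/home/www-server/services/udp_parser.php"]
    ports: ["30089:30089/udp"]
```

With this split, supervisord is no longer needed; Docker itself restarts and monitors each process.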



Source: https://stackoverflow.com/questions/33999865/docker-best-practices-single-process-for-a-container
