Docker best practices: single process for a container


Depending on the use case, you can run multiple processes inside a single container, although I wouldn't recommend it.

In some ways it is even simpler to run them in separate containers. Keeping containers small, stateless, and focused on a single job makes them easier to maintain. Let me describe how my container workflow looks in a similar situation.

So:

  1. I have one container with nginx that is exposed to the outside world (:443, :80). At this level it is straightforward to manage the configuration, TLS certificates, load-balancing options, etc.
  2. One (or more) container(s) with the application, in this case a php-fpm container running the app. The Docker image is stateless; the containers mount and share volumes for static files and the like. This means you can destroy and re-create the application container at any time while keeping the load balancer up and running. You can also run multiple applications behind the same proxy (nginx), and working on one of them does not affect the others.
  3. One or more containers for the database... Same benefits apply.
  4. Redis, Memcached, etc.

With this structure the deployment is modular: each "service" is separated and logically independent from the rest of the system.
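To make that concrete, here is a minimal sketch of the same layout using plain docker commands. The image names, volume paths, and network name are only placeholders (a docker-compose file would express the same thing more compactly):

    # user-defined bridge network so containers can reach each other by name
    docker network create web

    # nginx proxy: the only container exposed to the outside world
    docker run -d --name proxy --network web \
      -p 80:80 -p 443:443 \
      -v /srv/nginx/conf.d:/etc/nginx/conf.d:ro \
      -v /srv/certs:/etc/nginx/certs:ro \
      nginx

    # php-fpm application container, reachable only on the internal network
    docker run -d --name app --network web \
      -v /srv/app/static:/var/www/static \
      my-php-app

    # database and cache, also internal only
    docker run -d --name db --network web -v /srv/mysql/data:/var/lib/mysql mysql
    docker run -d --name cache --network web redis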

As a side effect, in this particular case, you can do zero-downtime deployments (updates) of the application. The idea is simple: when you have to do an update, you build a Docker image with the updated application, run a container from it, run all the tests and maintenance scripts, and if everything goes well, you add the newly created container to the chain (load balancer) and gracefully stop the old one. That's it: the application is updated and users never notice.
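A rough sketch of that update flow with the docker CLI, under the same assumptions as the sketch above (container names, the image tag, and the test script are hypothetical):

    # build the new image and start it alongside the running version
    docker build -t my-php-app:v2 .
    docker run -d --name app-v2 --network web my-php-app:v2

    # run tests / maintenance scripts against the new container
    docker exec app-v2 /usr/local/bin/run-tests.sh

    # point the nginx upstream at app-v2 in the proxy config, then reload
    docker exec proxy nginx -s reload

    # gracefully stop and remove the old container
    docker stop app && docker rm app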


"Process" here means a process in the Linux/Unix sense of the word. That said, there's nothing stopping you from running multiple processes in a container; it's just not the recommended paradigm.


We have found that we can run multiple services using Supervisord. It keeps the architecture pretty simple, requiring only an additional supervisord.conf file. For instance:

supervisord.conf

    [supervisord]
    nodaemon=true

    [program:apache2]
    command=/bin/bash -c "source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND"

    [program:udpparser]
    command=/bin/bash -c "exec /usr/bin/php -f /home/www-server/services/udp_parser.php"

From Dockerfile:

    FROM ubuntu:14.04
    RUN apt-get update
    RUN apt-get install -y apache2 supervisor php5 php5-mysql php5-cli
    RUN mkdir -p /var/lock/apache2 /var/run/apache2 /var/log/supervisor
    RUN a2enmod rewrite
    RUN a2enmod ssl
    COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
    ADD 000-default.conf /etc/apache2/sites-enabled/
    ADD default-ssl.conf /etc/apache2/sites-enabled/
    ADD apache2.conf /etc/apache2/
    ADD www-server/ /home/www-server/
    EXPOSE 80 443 30089
    CMD ["/usr/bin/supervisord"]
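For reference, building and running this image might look something like the following; the image tag and host-port mappings are just examples:

    docker build -t supervised-app .
    docker run -d -p 80:80 -p 443:443 -p 30089:30089 supervised-app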

As a best practice, we only do this in cases where the services benefit from running together, while all other containers remain stand-alone micro-services.