Dockerizing nginx and Flask


Short answer:

I would deploy the nginx and uwsgi/Flask application as separate containers. This gives you a more flexible architecture that allows you to link more microservice containers to the nginx instance as your requirements for more services grow.

Explanation:

With Docker the usual strategy is to split the nginx service and the uwsgi/Flask service into two separate containers, and then connect them using container links. This is a common architecture philosophy in the Docker world. Tools like docker-compose simplify managing multiple containers and forming the links between them. The following docker-compose configuration file shows an example of this:

    version: '2'
    services:
      app:
        image: flask_app:latest
        volumes:
          - /etc/app.cfg:/etc/app.cfg:ro
        expose:
          - "8090"
      http_proxy:
        image: "nginx:stable"
        expose:
          - "80"
        ports:
          - "80:80"
        volumes:
          - /etc/app_nginx/conf.d/:/etc/nginx/conf.d/:ro

This means that if you want to add more application containers, you can attach them to the same nginx proxy easily by linking them. Furthermore, if you want to upgrade one part of your infrastructure, say upgrade nginx, or shift from Apache to nginx, you only rebuild the relevant container and leave all the rest in place.
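For completeness, the configuration mounted into /etc/app_nginx/conf.d/ could look roughly like the sketch below. The service name app and port 8090 come from the compose file above; the header directives are illustrative:

    # app.conf -- hypothetical nginx proxy config for the Flask container.
    # "app" resolves to the linked application container.
    server {
        listen 80;

        location / {
            proxy_pass http://app:8090;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }

Any additional microservice container would get its own location block (or server block) pointing at its own upstream.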

If you were to add both services to a single container (e.g. by launching a supervisord process from the Dockerfile ENTRYPOINT), it would be easier to have nginx and the uwsgi process communicate over a Unix socket file rather than by IP, but I don't think this in itself is a strong enough reason to put both in the same container.
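For illustration, in such a single-container setup the two processes could talk over a Unix socket roughly like this (the socket path and module name are placeholders):

    # uwsgi.ini -- uwsgi listens on a Unix socket instead of a TCP port
    [uwsgi]
    socket = /tmp/uwsgi.sock
    module = app:app

    # nginx site config -- forward requests to that same socket
    server {
        listen 80;
        location / {
            include uwsgi_params;
            uwsgi_pass unix:/tmp/uwsgi.sock;
        }
    }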

Also, consider that if you eventually end up running twenty microservices and each runs its own nginx instance, you now have twenty sets of nginx logs (access.log/error.log) to track across twenty containers.

If you are employing a "microservices" architecture, that implies that with time you will be adding more and more containers. In such an ecosystem, running nginx as a separate docker process and linking the microservices to it makes it easier to grow in line with your expanding requirements.

Furthermore, certain tasks only need to be done once. For example, if the nginx service runs in its own container, SSL termination can be done at the nginx container, configured with the appropriate SSL certificates once, irrespective of how many microservices are attached to it.
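As a rough sketch, the nginx container's config could terminate TLS once and fan out to plain-HTTP upstreams. The domain, certificate paths, and service names below are placeholders:

    server {
        listen 443 ssl;
        server_name example.com;

        # certificates mounted into the nginx container once
        ssl_certificate     /etc/nginx/certs/fullchain.pem;
        ssl_certificate_key /etc/nginx/certs/privkey.pem;

        # each microservice is reached over plain HTTP internally
        location /service-a/ { proxy_pass http://service_a:8090/; }
        location /service-b/ { proxy_pass http://service_b:8091/; }
    }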

A note about service discovery

If the containers are running on the same host, linking them is easy. If they are spread over multiple hosts, using Kubernetes or Docker Swarm, things become a bit more complicated: you (or your cluster framework) need to map your DNS address to your nginx instance, and the containers need to be able to 'find' each other, which adds some conceptual overhead. Kubernetes helps you achieve this by grouping containers into pods, defining services, and so on.
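For example, a Kubernetes Service gives the application pods a stable DNS name that the nginx pod can use in its proxy configuration, regardless of which node the pods land on. The names and labels here are hypothetical:

    apiVersion: v1
    kind: Service
    metadata:
      name: app
    spec:
      selector:
        app: flask-app      # must match the labels on the Flask pods
      ports:
        - port: 8090
          targetPort: 8090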


Docker's philosophy is using microservices in containers. The term "Microservice Architecture" has sprung up over the last few years to describe a particular way of designing software applications as suites of independently deployable services.

That being said, you can deploy uwsgi in a separate container and benefit from a microservices architecture.

Some advantages of microservices architecture are:

  • Improved fault isolation
  • Eliminates long-term commitment to a single technology stack
  • Makes it easier for a new developer to understand the functionality of a service
  • Easier upgrade management


If you're using Nginx in front of your Flask/uwsgi server, you're using Nginx as a proxy: it takes care of forwarding traffic to the server, optionally handling TLS encryption, authentication, and so on.

The point of using a proxy like Nginx is to be able to load-balance the traffic to the server(s): the Nginx proxy receives the requests, and distributes the load among multiple servers.

This means you need one instance of Nginx and one or multiple instances of the Flask/uwsgi server as 'upstream' servers.

The only way to achieve this is to use separate containers.
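A minimal sketch of such a setup, assuming the Flask/uwsgi containers are reachable as app1, app2, and app3:

    # nginx round-robins incoming requests across the upstream servers
    upstream flask_backend {
        server app1:8090;
        server app2:8090;
        server app3:8090;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://flask_backend;
        }
    }

Scaling out is then just a matter of starting another app container and adding one more server line.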

Note that if you're on a cloud provider like AWS or GKE, which provides the load balancer that brings external traffic to your Kubernetes cluster, and you are merely using Nginx to forward traffic (i.e. not using it for TLS or auth), then you probably don't even need the Nginx proxy: a Kubernetes Service can do the proxying for you. Adding Nginx just gives you more control.
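In that case a plain Service of type LoadBalancer can route external traffic straight to the Flask pods; names, labels, and ports below are assumptions:

    apiVersion: v1
    kind: Service
    metadata:
      name: flask-app
    spec:
      type: LoadBalancer    # the cloud provider provisions the external load balancer
      selector:
        app: flask-app      # assumed pod label
      ports:
        - port: 80
          targetPort: 8090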