Django Channels Nginx production


This question is actually addressed in the latest Django Channels docs:

It is good practice to use a common path prefix like /ws/ to distinguish WebSocket connections from ordinary HTTP connections because it will make deploying Channels to a production environment in certain configurations easier.

In particular for large sites it will be possible to configure a production-grade HTTP server like nginx to route requests based on path to either (1) a production-grade WSGI server like Gunicorn+Django for ordinary HTTP requests or (2) a production-grade ASGI server like Daphne+Channels for WebSocket requests.

Note that for smaller sites you can use a simpler deployment strategy where Daphne serves all requests - HTTP and WebSocket - rather than having a separate WSGI server. In this deployment configuration no common path prefix like /ws/ is necessary.
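On the application side, the /ws/ prefix is simply part of the URL patterns you hand to Channels' URL router. A minimal sketch of my_project/asgi.py, assuming Channels 3 and Django 3.x (ChatConsumer is a hypothetical consumer):

import os

from django.core.asgi import get_asgi_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "my_project.settings")
# Initialise Django before importing anything that touches the ORM.
django_asgi_app = get_asgi_application()

from channels.auth import AuthMiddlewareStack
from channels.routing import ProtocolTypeRouter, URLRouter
from django.urls import re_path

from chat.consumers import ChatConsumer  # hypothetical consumer

application = ProtocolTypeRouter({
    # Ordinary HTTP requests (only used if Daphne serves everything).
    "http": django_asgi_app,
    # Everything under the /ws/ prefix is handled by Channels.
    "websocket": AuthMiddlewareStack(
        URLRouter([
            re_path(r"^ws/chat/$", ChatConsumer.as_asgi()),
        ])
    ),
})

Daphne then serves this application object, while Gunicorn keeps serving the ordinary WSGI application.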

In practice, your NGINX configuration would then look something like this (shortened to include only the relevant bits):

upstream daphne_server {
    server unix:/var/www/html/env/run/daphne.sock fail_timeout=0;
}

upstream gunicorn_server {
    server unix:/var/www/html/env/run/gunicorn.sock fail_timeout=0;
}

server {
    listen 80;
    server_name _;

    location /ws/ {
        # The Upgrade/Connection headers are required for the
        # WebSocket handshake to survive the proxy.
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass http://daphne_server;
    }

    location / {
        proxy_pass http://gunicorn_server;
    }
}

(Above it is assumed that you are binding the Gunicorn and Daphne servers to Unix socket files.)
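For that to work, both servers need to be started bound to those socket paths. Hedged examples (the project module paths are assumptions):

$ gunicorn my_project.wsgi:application --bind unix:/var/www/html/env/run/gunicorn.sock
$ daphne -u /var/www/html/env/run/daphne.sock my_project.asgi:application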


I have created an example of how to mix Django Channels and Django Rest Framework. I set up nginx routing so that:

  • WebSocket connections go to the Daphne server
  • HTTP connections (REST API) go to the Gunicorn server

Here is my nginx configuration file:

upstream app {
    server wsgiserver:8000;
}

upstream ws_server {
    server asgiserver:9000;
}

server {
    listen 8000 default_server;
    listen [::]:8000;

    client_max_body_size 20M;

    location / {
        try_files $uri @proxy_to_app;
    }

    location /tasks {
        try_files $uri @proxy_to_ws;
    }

    location @proxy_to_ws {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_pass http://ws_server;
    }

    location @proxy_to_app {
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Url-Scheme $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app;
    }
}
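For completeness, the Django side that matches the /tasks location might look like the following sketch, assuming the Channels 1.x routing API (the tasks app and its ws_* consumer functions are hypothetical):

# routing.py
from channels.routing import route

from tasks import consumers  # hypothetical app with ws_* consumer functions

channel_routing = [
    route("websocket.connect", consumers.ws_connect, path=r"^/tasks"),
    route("websocket.receive", consumers.ws_receive, path=r"^/tasks"),
    route("websocket.disconnect", consumers.ws_disconnect, path=r"^/tasks"),
]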


I recently answered a similar question; have a look there for an explanation of how Django Channels works.

Basically, you don't need Gunicorn anymore. You have Daphne, which is the interface server that accepts HTTP/WebSocket connections, and you have your workers that run Django views. Then, obviously, you have your channel backend that glues everything together.

To make it work you have to configure CHANNEL_LAYERS in settings.py and then run both the interface server and a worker.
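With the Redis backend, the settings might look like this minimal sketch (the host, port, and routing module path are assumptions):

# settings.py (Channels 1.x with the asgi_redis backend)
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgi_redis.RedisChannelLayer",
        "CONFIG": {
            "hosts": [("localhost", 6379)],
        },
        "ROUTING": "my_project.routing.channel_routing",
    },
}

With that in place, start the interface server: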

$ daphne my_project.asgi:channel_layer

and your worker:

$ python manage.py runworker

NB! If you choose Redis as the channel backend, pay attention to the size of the files you're serving. If you have large static files, make sure NGINX serves them; otherwise clients will experience cryptic errors that may occur due to Redis running out of memory.
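For example, an nginx location block along these lines keeps static files out of the application servers entirely (the filesystem path is an assumption):

location /static/ {
    alias /var/www/html/static/;
}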