
Docker Networking - nginx: [emerg] host not found in upstream


This can now be solved with the depends_on directive mentioned below, since it has been implemented (2016):

version: '2'
services:
  nginx:
    image: nginx
    ports:
      - "42080:80"
    volumes:
      - ./config/docker/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      - php
  php:
    build: config/docker/php
    ports:
      - "42022:22"
    volumes:
      - .:/var/www/html
    env_file: config/docker/php/.env.development
    depends_on:
      - mongo
  mongo:
    image: mongo
    ports:
      - "42017:27017"
    volumes:
      - /var/mongodata/wa-api:/data/db
    command: --smallfiles

Successfully tested with:

$ docker-compose version
docker-compose version 1.8.0, build f3628c7

Find more details in the documentation.

There is also a very interesting article dedicated to this topic: Controlling startup order in Compose


There is a possibility to use "volumes_from" as a workaround until the depends_on feature (discussed below) is introduced. All you have to do is change your docker-compose file as below:

nginx:
  image: nginx
  ports:
    - "42080:80"
  volumes:
    - ./config/docker/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
  volumes_from:
    - php
php:
  build: config/docker/php
  ports:
    - "42022:22"
  volumes:
    - .:/var/www/html
  env_file: config/docker/php/.env.development
mongo:
  image: mongo
  ports:
    - "42017:27017"
  volumes:
    - /var/mongodata/wa-api:/data/db
  command: --smallfiles

One big caveat of the above approach is that the volumes of php are exposed to nginx, which is not desired. But at the moment this is one Docker-specific workaround that could be used.

depends_on feature

This would probably be a futuristic answer, because the functionality is not yet implemented in Docker (as of 1.9).

There is a proposal to introduce "depends_on" in the new networking feature introduced by Docker, but it is the subject of a long-running debate at https://github.com/docker/compose/issues/374. Once it is implemented, depends_on could be used to order container start-up, but at the moment you would have to resort to one of the following:

  1. Make nginx retry until the php server is up - I would prefer this one.
  2. Use the volumes_from workaround as described above - I would avoid this, because of the volume leakage into containers that do not need it.
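
Option 1 can also be handled at the container level with a small entrypoint helper. The following is a minimal sketch, not something from the answers above: the wait_for function name is my own, and the php:9000 target merely matches this setup. It retries a TCP connection (via nc) a bounded number of times before giving up:

```shell
#!/bin/sh
# Hypothetical helper: block until host:port accepts TCP connections,
# retrying up to a maximum number of attempts (default 25, 1s apart).
wait_for() {
  host="$1"
  port="$2"
  tries="${3:-25}"
  i=0
  until nc -z "$host" "$port" 2>/dev/null; do
    i=$((i + 1))
    if [ "$i" -ge "$tries" ]; then
      # Dependency never came up within the allotted attempts.
      return 1
    fi
    sleep 1
  done
  return 0
}

# Hypothetical usage as the nginx container's entrypoint:
#   wait_for php 9000 25 && exec nginx -g 'daemon off;'
```

The "Controlling startup order in Compose" article linked above discusses ready-made scripts of this kind if you would rather not maintain your own.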


You can set the max_fails and fail_timeout directives of nginx to indicate that nginx should retry a given number of connection requests to the container before failing on upstream server unavailability.

You can tune these two numbers to match your infrastructure and the speed at which the whole setup comes up. You can read more details in the health checks section of http://nginx.org/en/docs/http/load_balancing.html

Following is the excerpt from http://nginx.org/en/docs/http/ngx_http_upstream_module.html#server:

max_fails=number

sets the number of unsuccessful attempts to communicate with the server that should happen in the duration set by the fail_timeout parameter to consider the server unavailable for a duration also set by the fail_timeout parameter. By default, the number of unsuccessful attempts is set to 1. The zero value disables the accounting of attempts. What is considered an unsuccessful attempt is defined by the proxy_next_upstream, fastcgi_next_upstream, uwsgi_next_upstream, scgi_next_upstream, and memcached_next_upstream directives.

fail_timeout=time

sets the time during which the specified number of unsuccessful attempts to communicate with the server should happen to consider the server unavailable; and the period of time the server will be considered unavailable. By default, the parameter is set to 10 seconds.

To be precise, your modified nginx config file should be as follows (this config assumes that all the containers are up within 25 seconds at most; if not, change fail_timeout or max_fails in the upstream section below). Note: I didn't test the config myself, so you could give it a try!

upstream phpupstream {
    server waapi_php_1:9000 fail_timeout=5s max_fails=5;
}

server {
    listen 80;

    root /var/www/test;

    error_log /dev/stdout debug;
    access_log /dev/stdout;

    location / {
        # try to serve file directly, fallback to app.php
        try_files $uri /index.php$is_args$args;
    }

    location ~ ^/.+\.php(/|$) {
        # Referencing the php service host (Docker)
        fastcgi_pass phpupstream;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        # We must reference the document_root of the external server ourselves here.
        fastcgi_param SCRIPT_FILENAME /var/www/html/public$fastcgi_script_name;
        fastcgi_param HTTPS off;
    }
}

Also, as per the following note from Docker (https://github.com/docker/docker.github.io/blob/master/compose/networking.md#update-containers), it is evident that the retry logic for checking the health of other containers is not Docker's responsibility; rather, the containers should do the health check themselves.

Updating containers

If you make a configuration change to a service and run docker-compose up to update it, the old container will be removed and the new one will join the network under a different IP address but the same name. Running containers will be able to look up that name and connect to the new address, but the old address will stop working.

If any containers have connections open to the old container, they will be closed. It is a container's responsibility to detect this condition, look up the name again and reconnect.
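
As a later addition to this answer: since Compose file format 2.1, depends_on can also be combined with a healthcheck, so a service is only started once its dependency reports healthy. The sketch below is an assumption-laden illustration, not config from this question: it supposes the php container runs php-fpm on port 9000 and has nc available for the check command.

```yaml
version: '2.1'
services:
  nginx:
    image: nginx
    depends_on:
      php:
        # Wait until the php healthcheck below passes before starting nginx.
        condition: service_healthy
  php:
    build: config/docker/php
    healthcheck:
      # Hypothetical check: succeeds once php-fpm accepts connections on 9000.
      test: ["CMD-SHELL", "nc -z localhost 9000"]
      interval: 5s
      timeout: 3s
      retries: 5
```

Note that this only orders start-up; per the quoted documentation, reconnecting after a dependency is replaced is still the container's own responsibility.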