
Docker named volumes between multiple containers


Probably you could have, on the nginx container, the shared volume pointing to /etc/nginx/conf.d, and then use a different name for each project's conf file.

Below is a proof of concept: three servers, each with a config file attached, and a proxy (your Nginx) with the shared volume mounted at /config:

version: '3'

services:
  server1:
    image: busybox:1.31.1
    volumes:
    - deleteme_after_demo:/config
    - ./server1.conf:/app/server1.conf
    command: sh -c "cp /app/server1.conf /config; tail -f /dev/null"
  server2:
    image: busybox:1.31.1
    volumes:
    - deleteme_after_demo:/config
    - ./server2.conf:/app/server2.conf
    command: sh -c "cp /app/server2.conf /config; tail -f /dev/null"
  server3:
    image: busybox:1.31.1
    volumes:
    - deleteme_after_demo:/config
    - ./server3.conf:/app/server3.conf
    command: sh -c "cp /app/server3.conf /config; tail -f /dev/null"
  proxy1:
    image: busybox:1.31.1
    volumes:
    - deleteme_after_demo:/config:ro
    command: tail -f /dev/null

volumes:
  deleteme_after_demo:

Let's create 3 config files to be included:

➜ echo "server 1" > server1.conf➜ echo "server 2" > server2.conf➜ echo "server 3" > server3.conf

then:

➜ docker-compose up -d
Creating network "deleteme_default" with the default driver
Creating deleteme_server2_1 ... done
Creating deleteme_server3_1 ... done
Creating deleteme_server1_1 ... done
Creating deleteme_proxy1_1  ... done

And finally, let's verify the config files are accessible from the proxy container:

➜ docker-compose exec proxy1 sh -c "cat /config/server1.conf"
server 1
➜ docker-compose exec proxy1 sh -c "cat /config/server2.conf"
server 2
➜ docker-compose exec proxy1 sh -c "cat /config/server3.conf"
server 3

I hope it helps. Cheers!

Note: you should think of mounting a volume exactly like using the Unix mount command. If the mount point already has content, after the mount you are not going to see it; you will see the content of the mounted device instead (unless the volume was empty and newly created, in which case it is seeded from the mount point). Whatever you want to see there needs to already be on the device, or you need to move it there afterward.

So here I bind-mounted the files, because the containers I used had no data of their own, and then copied them into the shared volume with the startup command. You could address it a different way, e.g. by copying the config file into the mounted volume from an entrypoint script in your image, as in the sketch below.
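
For instance, a minimal entrypoint sketch, assuming the image keeps its config at /app/myproject.conf and the shared volume is mounted at /config (both paths are made up for illustration):

#!/bin/sh
# entrypoint.sh: publish this project's config into the shared volume, then start the service.
set -e

# Assumed locations; adjust to wherever your image keeps the file and the volume is mounted.
CONF_SRC=/app/myproject.conf
CONF_DST=/config/myproject.conf

cp "$CONF_SRC" "$CONF_DST"

# Hand off to whatever command the container was asked to run.
exec "$@"

The Dockerfile would then point ENTRYPOINT at this script and keep the service command as CMD, so the copy happens on every start.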


A named volume is initialized when it's empty/new and a container is started using that volume. The initialization is from the image filesystem, and after that, the named volume is persistent and will retain the state from the previous use.
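
You can see this behavior in isolation with a quick experiment along these lines (the volume name and image tags are arbitrary examples):

# Create an empty named volume.
docker volume create demo_vol

# First use: the volume is empty, so Docker seeds it from the image's
# content at the mount path (/etc/nginx from the nginx image here).
docker run --rm -v demo_vol:/etc/nginx nginx:1.25 ls /etc/nginx

# Second use with a different image: the volume is no longer empty, so the
# container sees the nginx files from the volume, not its own filesystem.
docker run --rm -v demo_vol:/etc/nginx busybox:1.31.1 ls /etc/nginx

# Clean up.
docker volume rm demo_vol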

In this case, what you have is a race condition. The volume is sharing the files, but which image gets used to initialize the volume depends on which container compose happens to start first. The named volume is shared between multiple images; it's just the content that you want to be different.

For your use case, you may be better off putting some logic in the image build and entrypoint: save the files you want to mirror into the volume to a different location in the image at build time, and then update the volume on container startup. By moving this out of the named-volume initialization step, you avoid the race condition and allow the volume to be updated with future changes from the image. An example of this is in my base image, with the save-volume script you'd run in the Dockerfile and the load-volume script you'd run in your entrypoint (a rough sketch of the pattern follows).
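
Those scripts aren't shown here, but the pattern can be approximated with plain shell; this sketch assumes a stash directory of /opt/volume-cache and the named volume mounted at /etc/nginx/conf.d (both are illustrative choices, not the actual save-volume/load-volume scripts):

#!/bin/sh
# Build time, from the Dockerfile: stash the config outside the volume path, e.g.
#   RUN mkdir -p /opt/volume-cache && cp /etc/nginx/conf.d/myproject.conf /opt/volume-cache/
#
# Run time (entrypoint): refresh the shared volume from the stashed copy, so the
# volume content no longer depends on which container happened to initialize it first.
set -e

SRC_DIR=/opt/volume-cache
VOL_DIR=/etc/nginx/conf.d   # the named volume is mounted here

cp -f "$SRC_DIR"/*.conf "$VOL_DIR"/

exec "$@"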

As a side note, it's also a good practice to mount that named volume as read-only in the containers that have no need to write to the config files.