
use nvidia-docker from docker-compose


UPDATE: please check nvidia-docker 2 and its docker-compose support first: https://github.com/NVIDIA/nvidia-docker/wiki/Frequently-Asked-Questions#do-you-support-docker-compose
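As a quick sanity check that nvidia-docker 2 itself works before involving docker-compose, something like the following should print your GPUs (the CUDA image tag here is only an example, pick one matching your driver/CUDA setup):

docker run --rm --runtime=nvidia nvidia/cuda:10.1-base nvidia-smi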

(I'd first suggest adding the nvidia-docker tag).

If you look at the nvidia-docker-compose code here, you will see that it only generates a specific docker-compose file after querying the NVIDIA configuration on localhost:3476.
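You can inspect that configuration yourself: on a machine where nvidia-docker-plugin (nvidia-docker 1.0) is running, querying it should return the device and volume arguments it wants injected into the container. The exact endpoint and output below are from memory and may differ with your plugin version:

curl http://localhost:3476/docker/cli
# expected to print something like:
# --volume-driver=nvidia-docker --volume=nvidia_driver_375.66:/usr/local/nvidia:ro
# --device=/dev/nvidiactl --device=/dev/nvidia-uvm --device=/dev/nvidia0 ...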

You can also write this docker-compose file by hand, as it turns out to be quite simple. Follow the example below: replace 375.66 with your NVIDIA driver version and add as many /dev/nvidia[n] lines as you have graphics cards (I did not try to put services on separate GPUs, but go for it!):

services:
  exampleservice0:
    devices:
    - /dev/nvidia0
    - /dev/nvidia1
    - /dev/nvidiactl
    - /dev/nvidia-uvm
    - /dev/nvidia-uvm-tools
    environment:
    - EXAMPLE_ENV_VARIABLE=example
    image: company/image
    volumes:
    - ./disk:/disk
    - nvidia_driver_375.66:/usr/local/nvidia:ro
version: '2'
volumes:
  media: null
  nvidia_driver_375.66:
    external: true
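Note that nvidia_driver_375.66 is declared external: true, so docker-compose expects the volume to already exist. It is normally created by nvidia-docker-plugin the first time you run a container through nvidia-docker; if it is missing you should be able to create it by hand (the volume driver name here is from memory, check the volume and plugin listings on your machine):

docker volume create -d nvidia-docker nvidia_driver_375.66
docker volume ls    # should now show nvidia_driver_375.66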

Then just run this hand-made docker-compose file with a classic docker-compose command.
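For example, from the directory containing the file:

docker-compose up -d exampleservice0
docker-compose logs -f exampleservice0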

You can probably then also compose with non-NVIDIA containers by simply skipping the NVIDIA-specific entries in the other services.
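For example, a second, hypothetical CPU-only service (names made up) could live under services: in the same file without any of the device or driver-volume entries:

services:
  exampleservice0:
    # ... GPU service as above ...
  plainservice0:
    image: company/other-image
    volumes:
    - ./disk:/disk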


In addition to the accepted answer, here is my approach, which is a bit shorter. I needed to use the old compose file format version (2.3) because of the required runtime: nvidia (it won't necessarily work with version: 3 - see this). Setting NVIDIA_VISIBLE_DEVICES=all will make all the GPUs visible.

version: '2.3'
services:
    your-service-name:
      runtime: nvidia
      environment:
        - NVIDIA_VISIBLE_DEVICES=all
      # ...your stuff
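To check that the GPUs are actually visible inside the container you can, for example, start the service and run nvidia-smi in it (this assumes the image ships nvidia-smi, e.g. one of the nvidia/cuda images):

docker-compose up -d
docker-compose exec your-service-name nvidia-smi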

My example is available here.

Tested with NVIDIA Docker 2.5.0, Docker CE 19.03.13, NVIDIA-SMI 418.152.00, and CUDA 10.1 on Debian 10.