Using GPU inside docker container - CUDA Version: N/A and torch.cuda.is_available() returns False
docker run --rm --gpus all nvidia/cuda nvidia-smi

should NOT return CUDA Version: N/A if everything (i.e. the NVIDIA driver, the CUDA toolkit, and nvidia-container-toolkit) is installed correctly on the host machine.
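Note that, at the time of writing, the nvidia/cuda image on Docker Hub no longer publishes a latest tag, so the bare nvidia/cuda reference above may fail to pull. Pinning an explicit tag avoids that; the tag below is only an example, pick one that matches your host driver:

    docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi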
Given that docker run --rm --gpus all nvidia/cuda nvidia-smi returns correctly, I also had the CUDA Version: N/A problem inside the container, which I had luck in solving: please see my answer https://stackoverflow.com/a/64422438/2202107 (obviously you need to adjust and install the matching/correct versions of everything).
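To check from inside the container whether PyTorch actually sees the GPU, here is a minimal diagnostic sketch (standard torch calls only):

    import torch

    # True only when the NVIDIA driver is visible to the container
    # AND the installed torch build was compiled with CUDA support
    print("cuda available:", torch.cuda.is_available())
    # CUDA version torch was built against (None for CPU-only builds)
    print("torch built with CUDA:", torch.version.cuda)
    if torch.cuda.is_available():
        print("device:", torch.cuda.get_device_name(0))

If torch.version.cuda prints None, the culprit is a CPU-only torch build rather than the container setup.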
When I use --gpus=all,capabilities=utility it returns False; when I use --gpus=all it returns True...
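That is expected: the utility capability only exposes nvidia-smi/NVML, while the compute capability is what mounts the CUDA driver libraries, so an explicit capabilities list needs to include compute for torch.cuda.is_available() to return True. A sketch of the flag with both capabilities (the inner quotes are needed because of the comma; the image tag is just an example):

    docker run --rm --gpus 'all,"capabilities=compute,utility"' nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi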
For anybody arriving here looking for how to do this with Docker Compose, add to your service:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              capabilities:
                - gpu
                - utility  # nvidia-smi
                - compute  # CUDA. Required to avoid "CUDA version: N/A"
                - video    # NVENC, e.g. for hardware-accelerated ffmpeg. Skip it if you don't need it
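For context, a minimal complete docker-compose.yml built around that snippet; the service name, image tag, and command are placeholders for illustration:

    services:
      app:
        image: nvidia/cuda:12.2.0-base-ubuntu22.04  # placeholder image
        command: nvidia-smi                         # replace with your workload
        deploy:
          resources:
            reservations:
              devices:
                - driver: nvidia
                  count: all  # or device_ids: ['0'] to pin a specific GPU
                  capabilities: [gpu, compute, utility]

Verify with docker compose run --rm app; the nvidia-smi output should show a real CUDA Version instead of N/A.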