Is GPU pass-through possible with docker for Windows?


Update (December 2020) You can now do GPU pass-through on Windows, if you use WSL 2 as the backend for Docker: WSL 2 GPU Support is Here - that is a slightly neater method than running Docker inside WSL.

Original answer:

GPU access from within a Docker container currently isn't supported on Windows.

You need nvidia-docker, but it is currently supported only on Linux platforms. GPU passthrough with Hyper-V would require Discrete Device Assignment (DDA), which is currently only available in Windows Server, and (at least as of 2015) there was no plan to change that state of affairs. Hence, NVIDIA is not porting nvidia-docker to Windows at the moment.

A bit more info here: https://devblogs.nvidia.com/nvidia-docker-gpu-server-application-deployment-made-easy/

Update (October 2019): nvidia-docker is deprecated, since Docker 19.03 has native support for NVIDIA GPUs. Instead, install nvidia-container-runtime and use the docker run --gpus all flag. With Docker 19.03 you can also run Windows Containers with GPU acceleration on a Windows host, but not Linux containers.
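As a quick sketch of the native flag on a Linux host (assuming Docker 19.03+ and nvidia-container-runtime are installed; the CUDA image tag is just an example):

```shell
# Verify the container can see the GPU by running nvidia-smi inside it:
docker run --rm --gpus all nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi

# Expose only a specific GPU instead of all of them:
docker run --rm --gpus '"device=0"' nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi
```

If the driver and runtime are set up correctly, nvidia-smi inside the container prints the same GPU table you would see on the host.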

Update (August 2020): It looks like you can now do GPU pass-through when running Docker inside the Windows Subsystem for Linux (WSL 2).

This link goes through installation, setup and running a TensorFlow Jupyter notebook inside Docker in Ubuntu in WSL 2, with GPU support: https://ubuntu.com/blog/getting-started-with-cuda-on-ubuntu-on-wsl-2

Note - I haven't done this myself yet.


Now that Docker on Windows 10 can use WSL 2 as its backend (as of Windows 10 version 2004), the way is clear for GPU support in Linux Docker containers on Windows 10.

According to this official blog, MS "will start previewing GPU compute support for WSL in Windows 10 Insider builds within the next few months": https://devblogs.microsoft.com/commandline/the-windows-subsystem-for-linux-build-2020-summary/#wsl-gpu

I expect Docker GPU support to follow shortly after.

Update:

GPU pass-through in Windows is now possible under very specific circumstances, including:

  • the container must be a Windows container as well
  • process-level isolation only, no Hyper-V isolation
  • only DirectX-based applications are accelerated
  • for machine learning, this means only Microsoft ML will work.

Refer to: https://docs.microsoft.com/en-us/virtualization/windowscontainers/deploy-containers/gpu-acceleration
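For reference, the Microsoft documentation above enables this by binding the DirectX device interface class into a process-isolated Windows container; a minimal sketch (the class GUID is the one from Microsoft's docs, and the image tag is just an example):

```shell
# Run a Windows container with DirectX GPU acceleration.
# Requires a Windows host, process isolation, and a compatible WDDM driver.
docker run --isolation process ^
    --device class/5B45201D-F2F2-4F3B-85BB-30FF1F953599 ^
    mcr.microsoft.com/windows:1809
```

Note that this exposes the GPU only through DirectX, so CUDA-based workloads will not work this way.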

Update 2:

GPU pass-through from a Linux Docker container on a Windows host is now possible on the latest Windows Insider build; refer to:

https://ubuntu.com/blog/getting-started-with-cuda-on-ubuntu-on-wsl-2

This will probably flow through to mainstream Windows in the next major update.

Update 3:

It was confirmed at Microsoft Build 2021 that GPU pass-through for WSL will be included in Windows. See details of the announcement here: https://blogs.windows.com/windowsdeveloper/2021/05/25/the-windows-developers-guide-to-microsoft-build-2021/


2021 updated answer

If you need to access NVIDIA CUDA from a Linux container on Windows 10, there is an easy way to do so, if you are fine with the (current) requirement of being on an Insider build. I was successful with training models on GPU in TensorFlow 2 using this method.

  1. Update Windows 10 to build 20149 or higher. At the time of writing, only the Insider Dev channel carries such builds -- you can check the build numbers on the Windows Insider webpage*.
  2. Install the NVIDIA CUDA WSL driver (free registration is required)
  3. Install Docker Desktop
    • It will guide you through enabling WSL 2 if you haven't already.
    • If you already have it installed, update it to the latest version and enable Settings - General - Use the WSL 2 based engine.
    • To be able to use the docker CLI from inside WSL 2 (not just from PowerShell/cmd), enable the integration in Settings - Resources - WSL Integration.
  4. Test using the command docker run --rm -it --gpus=all nvcr.io/nvidia/k8s/cuda-sample:nbody

You need to pass --gpus=all to docker run so the container can access the GPU. (If you use VS Code Remote Containers, add "runArgs": ["--gpus=all"] to devcontainer.json.)

You may come across mentions of --runtime=nvidia in descriptions of images meant for nvidia-docker (such as the official TensorFlow images). Simply replace --runtime=nvidia with --gpus=all in the provided commands.
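For example, the substitution looks like this (the TensorFlow GPU image tag is just an illustration):

```shell
# Old nvidia-docker style -- will NOT work with Docker Desktop's WSL 2 backend:
#   docker run -it --runtime=nvidia tensorflow/tensorflow:latest-gpu python

# Equivalent command using the native Docker 19.03+ flag:
docker run -it --gpus=all tensorflow/tensorflow:latest-gpu python
```

Everything else in the image's documented command line can stay the same; only the GPU flag changes.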

* Update: The Insider Dev channel has now moved to Windows 11. It is unclear whether this feature will hit stable Windows 10, or remain exclusive to Windows 11.