
How to mount host volumes into docker containers in Dockerfile during build


It is not possible to use the VOLUME instruction to tell docker what to mount. That would seriously break portability. This instruction tells docker that content in those directories does not go in images and can be accessed from other containers using the --volumes-from command line parameter. You have to run the container using -v /path/on/host:/path/in/container to access directories from the host.
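For example, a run-time mount looks like this (a minimal sketch; the paths and image name are placeholders):

    # mount a host directory into the container at run time, not build time
    docker run -v /path/on/host:/path/in/container my-image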

Mounting host volumes during build is not possible. There is no privileged build and mounting the host would also seriously degrade portability. You might want to try using wget or curl to download whatever you need for the build and put it in place.
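For example, a minimal sketch of that approach (the URL, archive name, and destination path are placeholders, not anything from the question):

    FROM debian:stable
    # fetch the build input over the network instead of mounting it from the host
    RUN apt-get update && apt-get install -y --no-install-recommends curl ca-certificates \
     && curl -fsSL https://example.com/build-input.tar.gz -o /tmp/build-input.tar.gz \
     && tar -xzf /tmp/build-input.tar.gz -C /opt \
     && rm /tmp/build-input.tar.gz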


UPDATE: Somebody just won't take no for an answer, and I like that very much, especially for this particular question.

GOOD NEWS: there is a way now --

The solution is Rocker: https://github.com/grammarly/rocker

John Yani said, "IMO, it solves all the weak points of Dockerfile, making it suitable for development."

Rocker

https://github.com/grammarly/rocker

By introducing new commands, Rocker aims to solve the following use cases, which are painful with plain Docker:

  1. Mount reusable volumes on build stage, so dependency management tools may use cache between builds.
  2. Share ssh keys with build (for pulling private repos, etc.), while not leaving them in the resulting image.
  3. Build and run application in different images, be able to easily pass an artifact from one image to another, ideally have this logic in a single Dockerfile.
  4. Tag/Push images right from Dockerfiles.
  5. Pass variables from shell build command so they can be substituted to a Dockerfile.

And more. These are the most critical issues that were blocking our adoption of Docker at Grammarly.
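For the first use case, a Rockerfile sketch could look like the following. This is an illustration only (the base image and cache path are assumptions); check the MOUNT syntax against the project README before relying on it:

    FROM node:8
    # MOUNT keeps this directory as a reusable build-time volume,
    # so the npm cache survives between rocker build runs
    MOUNT /root/.npm
    COPY . /src
    WORKDIR /src
    RUN npm install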

Update: Rocker has been discontinued, per the official project repo on GitHub.

As of early 2018, the container ecosystem is much more mature than it was three years ago when this project was initiated. Now, some of the critical and outstanding features of rocker can be easily covered by docker build or other well-supported tools, though some features do remain unique to rocker. See https://github.com/grammarly/rocker/issues/199 for more details.


First, to answer "why doesn't VOLUME work?" When you define a VOLUME in the Dockerfile, you can only define the target, not the source of the volume. During the build, you will only get an anonymous volume from this. That anonymous volume will be mounted at every RUN command, prepopulated with the contents of the image, and then discarded at the end of the RUN command. Only changes to the container are saved, not changes to the volume.
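A small illustration of that behavior (a sketch; the paths and file names are arbitrary):

    FROM debian:stable
    RUN mkdir /data && echo "baked into the image" > /data/from-image
    VOLUME /data
    # during this RUN, /data is an anonymous volume prepopulated from the image;
    # the file written here is discarded when the RUN command finishes,
    # so it will not appear in the final image
    RUN echo "lost when the RUN ends" > /data/from-run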


Since this question was asked, a few features have been released that may help. The first is multi-stage builds, which let you build a disk-space-inefficient first stage and copy just the needed output into the final stage that you ship. The second is Buildkit, which dramatically changes how images are built and adds new capabilities to the build.

For a multi-stage build, you would have multiple FROM lines, each one starting the creation of a separate image. Only the last image is tagged by default, but you can copy files from previous stages. The standard use is to have a compiler environment to build a binary or other application artifact, and a runtime environment as the second stage that copies over that artifact. You could have:

    FROM debian:sid as builder
    COPY export /export
    RUN compile command here >/result.bin

    FROM debian:sid
    COPY --from=builder /result.bin /result.bin
    CMD ["/result.bin"]

That would result in a build that only contains the resulting binary, and not the full /export directory.


Buildkit is coming out of experimental in 18.09. It's a complete redesign of the build process, including the ability to change the frontend parser. One of those parser changes has implemented the RUN --mount option, which lets you mount a cache directory for your run commands. E.g. here's one that mounts some of the debian directories (with a reconfigure of the debian image, this could speed up reinstalls of packages):

    # syntax = docker/dockerfile:experimental
    FROM debian:latest
    RUN --mount=target=/var/lib/apt/lists,type=cache \
        --mount=target=/var/cache/apt,type=cache \
        apt-get update \
     && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
          git

You would adjust the cache directory for whatever application cache you have, e.g. $HOME/.m2 for maven, or /root/.cache for golang.
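For example, a sketch of the same pattern for a Go build cache (the base image is an assumption; the target path follows the /root/.cache location mentioned above):

    # syntax = docker/dockerfile:experimental
    FROM golang:1.12
    WORKDIR /src
    COPY . .
    # the cache mount persists between builds, so repeated builds reuse compiled packages
    RUN --mount=type=cache,target=/root/.cache/go-build \
        go build -o /app .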


TL;DR: the answer is here: with that same RUN --mount syntax, you can also bind mount read-only directories from the build context. The folder must exist in the build context, and it is not mapped back to the host or the build client:

    # syntax = docker/dockerfile:experimental
    FROM debian:latest
    RUN --mount=target=/export,type=bind,source=export \
        process export directory here...

Note that because the directory is mounted from the context, it's also mounted read-only, and you cannot push changes back to the host or client. When you build, you'll want an 18.09 or newer install and enable buildkit with export DOCKER_BUILDKIT=1.
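For example:

    # enable buildkit for this shell session, then build as usual
    export DOCKER_BUILDKIT=1
    docker build -t my-image .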

If you get an error that the mount flag isn't supported, that indicates that you either didn't enable buildkit with the above variable, or that you didn't enable the experimental syntax with the syntax line at the top of the Dockerfile before any other lines, including comments. Note that the variable to toggle buildkit will only work if your docker install has buildkit support built in, which requires version 18.09 or newer from Docker, both on the client and server.