
Run Grunt / Gulp inside Docker container or outside?


I'd like to suggest a third approach that I have used for a statically generated site: the separate build image.

In this approach, your main Dockerfile (the one in the project root) becomes a build and development image, basically doing everything in the second approach. However, at run time you override the CMD so that it tars up the built dist folder into a dist.tar or similar.

Then, you have another folder (something like image) that has its own Dockerfile. The role of this image is only to serve up the dist.tar contents. So we do a docker cp <container_id_from_tar_run>:/dist.tar ./image/ to pull the artifact out, and that Dockerfile just installs our web server and has an ADD dist.tar /var/www.
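
For illustration, that serving Dockerfile can be very small. A minimal sketch, assuming an nginx base image (the original answer mentions /var/www as the web root; the stock nginx image serves /usr/share/nginx/html, so either ADD there or point the server config at your own directory):

    FROM nginx:alpine
    # ADD auto-extracts a local tar archive into the target directory;
    # the paths stored inside dist.tar determine the final layout there
    ADD dist.tar /usr/share/nginx/html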

The abstract is something like:

  • Build the builder Docker image (which gets you a working environment without a webserver). At this point, the application is built. We could run the container in development with grunt serve or whatever the command is to start our built-in development server.
  • Instead of running the server, we override the default command to tar up our dist folder. Something like tar -cf /dist.tar /myapp/dist.
  • We now have a temporary container with a /dist.tar artifact. Copy it into the actual deployment Docker folder we called image, using docker cp <container_id_from_tar_run>:/dist.tar ./image/.
  • Now, we can build the small Docker image without all our development dependencies with docker build ./image.

I like this approach because it is still all Docker. All the commands in this approach are Docker commands and you can really slim down the actual image you end up deploying.
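
Put together, the flow might look something like this (the image and container names are placeholders, not from the original answer):

    # 1. Build the builder image from the project root
    docker build -t myapp-builder .

    # 2. Run it, overriding the default command so it tars up the built dist folder
    docker run --name myapp-build myapp-builder tar -cf /dist.tar /myapp/dist

    # 3. Copy the artifact out of the stopped container into the deployment folder
    docker cp myapp-build:/dist.tar ./image/

    # 4. Build the small serving image without any development dependencies
    docker build -t myapp ./image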

If you want to see this approach in action, check out https://github.com/gliderlabs/docker-alpine, which uses a builder image (in the builder folder) to build tar.gz files that then get copied into their respective Dockerfile folders.


The only difference I see is that you can reproduce a full grunt installation in the second approach.

With the first one, you depend on a local action which might be done differently in different environments.

A container should be based on an image that can be reproduced easily, instead of depending on a host folder which contains "what is needed" (without knowing how that part was produced).


If the build-environment overhead that comes with the installation is too much for a grunt image, you can:

  • create an image "app.tar" dedicated for the installation (I did that for Apache, that I had to recompile, creating a deb package in a shared volume).
    In your case, you can create an archive ('tar') of the app installed.
  • create a container from a base image, using the volumes from that first container, then commit the result:

    docker run -it --name=app.inst --volumes-from=app.tar ubuntu tar -xf /shared/path/app.tar
    docker commit app.inst app

The end result is an image with the app present on its filesystem.
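
For completeness, a hypothetical Dockerfile for that "app.tar" builder image; the node base image, the paths, and the grunt build command are assumptions, not from the original answer:

    FROM node
    WORKDIR /myapp
    COPY . .
    # install build-time dependencies and run the build (adjust to your own build command)
    RUN npm install && npx grunt build
    # archive the installed app into a shared location other containers can mount
    RUN mkdir -p /shared/path && tar -cf /shared/path/app.tar -C / myapp
    # expose the archive via a volume, so --volumes-from can pick it up
    VOLUME /shared/path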

This is a mix between your approach 1 and 2.


A variation of solution 1 is to have a "parent -> child" image pair that makes the build of the project really fast. I would have a Dockerfile like:

    FROM node
    RUN mkdir app
    COPY dist/package.json app/package.json
    WORKDIR app
    RUN npm install

This handles the installation of the node dependencies. Then have another Dockerfile that handles the application "installation", like:

    FROM image-with-dependencies:v1
    ENV NODE_ENV=prod
    EXPOSE 9001
    COPY dist .
    ENTRYPOINT ["npm", "start"]

With this you can continue your development, and the "build" of the Docker image is going to be faster than it would be if you had to re-install the node dependencies every time. If you add new node dependencies, just re-build the dependencies image.
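
A minimal sketch of how the two builds fit together (the image tags and Dockerfile file names are assumptions):

    # rebuild this only when package.json changes
    docker build -t image-with-dependencies:v1 -f Dockerfile.deps .

    # rebuild this on every code change; it only copies dist and sets the entrypoint
    docker build -t myapp -f Dockerfile .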

I hope this helps someone.

Regards