How best to use Docker in continuous delivery?


Well, there are of course multiple best practices and many approaches to this. One approach that I have found successful is the following:

  • Separate the deployable code (jars/wars etc.) from the Docker containers by keeping them in separate VCS repos (we used two different Git repos in my latest project). This means that the Docker images you deploy your code on are built in a separate step, a docker-build so to say. Here you can e.g. build the Docker images for your database, application server, Redis cache or similar. When a `Dockerfile` or similar has changed in your VCS, Jenkins or whatever you use can trigger a build of the Docker images. These images should be tagged and pushed to some registry (be it Docker Hub or a local registry); a build/tag/push sketch follows this list.
  • The deployable code (jars/wars etc.) should be built as usual using Jenkins and commit hooks. In one of my projects we actually ran Jenkins in a Docker container as described here.
  • All Docker containers that use dynamic data (such as the storage for a database, the war files for a Tomcat/Jetty, or configuration files that are part of the code base) should mount these files as data volumes or as data volume containers.
  • The test servers, or whatever steps are part of your pipeline, should be set up according to a spec that is known to your build server. We used a descriptor that connected the newly built tag from the code base to the tag on the Docker containers. The Jenkins pipeline plugin could then run a script that first moved the build artifacts to the host server, pulled the correct Docker images from the local registry, and then started all processes using data volume containers (we used Fig for managing the Docker lifecycle; see the fig.yml sketch after this list).
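For the docker-build step in the first bullet, a minimal sketch of what Jenkins could run once a Dockerfile changes (the registry address and image names are hypothetical):

```sh
# Build, tag and push the database image from its own repo.
# registry.example.com:5000 and the image names are hypothetical.
docker build -t registry.example.com:5000/myproject/postgres:1.2.3 db/
docker push registry.example.com:5000/myproject/postgres:1.2.3

# Same for the application server image.
docker build -t registry.example.com:5000/myproject/tomcat:1.2.3 appserver/
docker push registry.example.com:5000/myproject/tomcat:1.2.3
```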
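And a minimal fig.yml sketch of the data-volume setup from the last two bullets, reusing the hypothetical image names above (Fig was later folded into Docker Compose):

```yaml
# fig.yml -- a sketch; image names, paths and the dbdata container are hypothetical
db:
  image: registry.example.com:5000/myproject/postgres:1.2.3
  volumes_from:
    - dbdata    # data volume container holding the database storage
app:
  image: registry.example.com:5000/myproject/tomcat:1.2.3
  links:
    - db
  volumes:
    - ./artifacts/myapp.war:/usr/local/tomcat/webapps/myapp.war    # war mounted as a data volume
```

Bringing everything up on the host is then a single `fig up -d`.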

With this approach, we were also able to run our local processes (databases etc.) as Docker containers. These containers were of course based on the same images as the ones in production, and the images could also be developed on the dev machines. The only real difference between the local dev environment and the production environment was the operating system: the dev machines typically ran Mac OS X/Boot2Docker, and prod ran on Linux.


Yes, you should shift from jar/war files to images as your build artefacts. The process @wassgren describes should work well; however, I found the following to fit our use case better, especially for development:

1- Make a base image. It looks like you're a Java shop, so as an example, let's pretend your base image is FROM ubuntu:14.04 and installs the JDK and some of the more common libs. Let's call it myjava.
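A minimal Dockerfile sketch for such a base image (the exact package list is an assumption):

```dockerfile
# Dockerfile for the myjava base image -- package choices are assumptions
FROM ubuntu:14.04
RUN apt-get update && \
    apt-get install -y openjdk-7-jdk && \
    rm -rf /var/lib/apt/lists/*
```

Build it once with `docker build -t myjava .` and reuse it everywhere.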

2- During development, use fig to bring up your container(s) locally and mount your dev code to the right spot. Fig uses the myjava base image and doesn't care about the Dockerfile.
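A fig.yml sketch of that development setup, assuming the myjava image and hypothetical paths:

```yaml
# fig.yml for local development -- paths and the jar name are hypothetical
app:
  image: myjava                      # the prebuilt base image; no Dockerfile involved here
  volumes:
    - ./build/libs:/app              # mount your dev code into the container
  command: java -jar /app/myapp.jar
  ports:
    - "8080:8080"
```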

3- When you build the project for deployment, it uses the Dockerfile, and somewhere in it a COPY puts the code/build artefacts in the right place. That image then gets tagged appropriately and deployed.
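A sketch of that deployment Dockerfile, again with hypothetical artifact names:

```dockerfile
# Deployment Dockerfile -- builds on myjava; the artifact path is an assumption
FROM myjava
COPY build/libs/myapp.jar /app/myapp.jar
CMD ["java", "-jar", "/app/myapp.jar"]
```

`docker build -t myapp:1.0 .` then produces the image that gets tagged and deployed.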


Simple steps to follow:

1) Install Jenkins in a container.

2) Install your build tool in the same container (I used SBT).

3) Create a project in Jenkins with the necessary plugins to pull the code from GitLab and build all jars into a compressed archive (say build.tgz).

4) This build.tgz can be moved anywhere and run, but it must have all its environment dependencies satisfied (for example, say it requires MySQL).

5) Now we create a base environment image (in our case, with MySQL installed).

6) With every build triggered, we trigger a Dockerfile build on the server which combines build.tgz with the environment image.
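A sketch of that Dockerfile, with hypothetical image and file names (it also wires in the script.sh entrypoint described in steps 8 and 9):

```dockerfile
# Per-build Dockerfile -- a sketch; registry, image and path names are hypothetical
FROM registry.example.com:5000/env-mysql:latest   # base environment image from step 5
ADD build.tgz /opt/app/                           # ADD unpacks the tgz into /opt/app
ENTRYPOINT ["/opt/app/script.sh"]                 # script.sh drives the app (steps 8 and 9)
```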

7) Now we have our build.tgz with its environment dependencies satisfied. This image should be pushed to the registry. This is our final image; it is portable and can be deployed anywhere.

8) This final image can be run in a container with the necessary mount points to get outputs, and a script (script.sh) can be triggered by specifying it as the ENTRYPOINT command in the Dockerfile.

9) This script.sh lives inside the image (the base image) and is configured to do whatever our purpose requires.

10) Before bringing this container up, you need to stop the previously running container.
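A shell sketch of steps 8-10 as a deploy script on the host (container name, image name and paths are hypothetical):

```sh
#!/bin/sh
# Deploy sketch -- container name, image name and paths are hypothetical.
# Stop and remove the previously running container (step 10).
docker stop myapp 2>/dev/null || true
docker rm myapp 2>/dev/null || true
# Run the final image; the mounted volume collects outputs (step 8)
# and the image's ENTRYPOINT runs script.sh (step 9).
docker run -d --name myapp \
    -v /var/log/myapp:/opt/app/logs \
    registry.example.com:5000/myapp:latest
```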

Important notes:

An image is created every time you build and is stored in the registry, so it can be reused. This image can be pushed to the main production server and started there. This helps to maintain a clean environment every time, because we always start from the base image.