
How to use Docker in the development phase of a DevOps life cycle?


Disclaimer: this is my own opinion on the subject, as asked by Mr. Mars. Even though I did my best to back my answer with actual sources, it's mostly based on my own experience and a bit of common sense.

Which of these scenarios is the most typical when using Docker for development?

I have seen all 3 scenarios in several projects, each of them with its advantages and drawbacks. However, I think scenario 3, with a Docker Compose setup allowing for dynamic code reload, is the most advantageous in terms of flexibility and consistency:

  • Dev and Prod Docker Compose files are close matches, meaning the Dev environment is as close as possible to the Prod environment
  • You do not have to rebuild the image constantly while developing, but it's easy to do when you need to
  • Lots of technologies support this scenario, such as Spring Dev Tools as you mentioned, but also Python Flask, etc. (a minimal sketch follows this list)
  • You can easily leverage Docker Compose's extends configuration-sharing mechanism (also possible with scenario 2)
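
For illustration, here is a minimal sketch of what scenario 3 could look like for a hypothetical Flask app (the service name, paths and port are assumptions made for the example):

    # docker-compose.dev.yml - a dev service with dynamic code reload
    version: "3"
    services:
      app:
        build: .
        volumes:
          - ./src:/app              # code is mounted from the host, no rebuild needed
        environment:
          - FLASK_APP=app.py
          - FLASK_DEBUG=1           # Flask's dev server reloads when source files change
        command: flask run --host=0.0.0.0 --reload
        ports:
          - "5000:5000"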

Is scenario 1 well raised? That is, dockerize only external services, such as databases, queues, etc. and perform the development and debugging of the application with the IDE without using Docker for it.

Scenario 1 is quite common, but the IDE environment will probably differ from the one inside the Docker container (and it would be difficult to keep the versions of every library and dependency in sync between the IDE environment and the Docker environment). It would also probably require an intermediate step between Dev and Production to actually test the built Docker image once Dev is working, before going to Production.

In my own experience, doing this is great when you do not want to deal too much with Docker while actually doing dev, and/or when the language or technology you use is not suited to the dynamic reload described in scenario 3. But in the end it only adds drift between your environments and more complexity between the Dev and Prod deployment methods.
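
To make scenario 1 concrete, here is a sketch of a Compose file that dockerizes only the external services (PostgreSQL and Redis are just example choices):

    # docker-compose.yml - only the app's dependencies are containerized
    version: "3"
    services:
      db:
        image: postgres:15
        environment:
          - POSTGRES_PASSWORD=dev
        ports:
          - "5432:5432"             # exposed so the app run from the IDE can connect
      queue:
        image: redis:7
        ports:
          - "6379:6379"

The application itself runs from the IDE on the host and connects to localhost:5432 and localhost:6379.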

having to rebuild the image and start the containers again is a significant waste of time. In short, a question would be: How to avoid this?

Besides the scenarios you describe, there are ways to decently (even drastically) reduce image build time by leveraging the Docker build cache when designing your Dockerfile. For example, a Python application would typically copy code as the last (or almost last) step of the build to avoid invalidating the cache, and for a Java app it is possible to split the code so as to avoid recompiling the entire application every time a bit of code changes - it all depends on your actual setup.
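
As a sketch, a typical Python Dockerfile ordered to exploit the build cache (the file names are assumptions):

    FROM python:3.11-slim
    WORKDIR /app

    # Dependencies change rarely: copying only requirements.txt first keeps
    # this layer (and the pip install below) cached across code changes
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt

    # Code changes often, so it is copied last: only this layer is rebuilt
    COPY . .

    CMD ["python", "app.py"]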


I personally use a workflow roughly matching scenario 3:

  • a docker-compose.yml file corresponding to my Production environment
  • a docker-compose.dev.yml which overrides some aspects of my main Docker Compose file, such as mounting code from my machine, adding dev-specific flags to commands, etc. (a sketch follows this list) - it would be run as
    docker-compose -f docker-compose.yml -f docker-compose.dev.yml up
    but it would also be possible to have a docker-compose.override.yml, which Docker Compose picks up by default for overrides
  • in some situations I have to use other overrides for specific cases, such as a docker-compose.ci.yml on my CI, but usually the main Docker Compose file is enough to describe my Prod environment (and when that's not the case, a docker-compose.prod.yml does the trick)
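
A sketch of what this pair of files might look like (the service name and values are illustrative):

    # docker-compose.yml - describes the Production environment
    version: "3"
    services:
      app:
        build: .
        ports:
          - "8080:8080"

    # docker-compose.dev.yml - dev-only overrides merged on top of the above
    version: "3"
    services:
      app:
        volumes:
          - ./src:/app/src          # code mounted from my machine
        environment:
          - DEBUG=1                 # dev-specific flag

Running docker-compose -f docker-compose.yml -f docker-compose.dev.yml up merges the two files, with the dev file taking precedence.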


I'm using something similar to your 3rd scenario for my web development, but it is Node-based. So I have 3 docker-compose files (actually 4; one is a base holding all the stuff common to the others) for the dev, staging and production environments.

The staging docker-compose config is similar to the production config except for SSL, ports and other things that would prevent using it locally.

I have a separate container for each service (like the DB and the queue), and for dev I also have additional dev DB and queue containers, mostly for running auto-tests. In the dev environment, all sources are mounted into the containers, which makes it possible to use the IDE/editor of choice outside the container and see the changes inside.

I use Supervisor to manage my workers inside the workers container, and I have some commands to restart my workers manually when I need to. Maybe you can have something similar to recompile/restart your Java app. Or, if you have an idea of how to detect app source code changes and auto-reload your app, that could be the best option. By the way, you gave me an idea to research something similar suitable for my case.

For the staging and production environments, my source code is included inside the corresponding container using the production Dockerfile. And I have some commands to restart all the stuff for the environment I need; this typically includes rebuilding containers, but thanks to the Docker cache it doesn't take much time (about 20 seconds). And taking into account that switching between environments is not a frequent operation, I feel quite comfortable with this.
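
For reference, such a restart command boils down to something like this (the file names are illustrative, mine differ):

    # rebuild and restart the staging environment; thanks to the build
    # cache this takes seconds rather than minutes
    docker-compose -f docker-compose.base.yml -f docker-compose.staging.yml up -d --build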

The production docker-compose config is used only during deployment because it enables SSL, the proper ports, and some additional production-only stuff.

Update with details on restarting the backend app using Supervisor:

This is how I use it in my projects:

The part of my Dockerfile that installs Supervisor:

    FROM node:10.15.2-stretch-slim

    RUN apt-get update && apt-get install -y \
        # Supervisor
        supervisor \
        ...

    ...

    # Configs for services/workers managed by supervisor
    COPY some/path/worker-configs/*.conf /etc/supervisor/conf.d/

This is an example of one of Supervisor configs for a worker:

    [program:myWorkerName]
    command=/usr/local/bin/node /app/workers/my-worker.js
    user=root
    numprocs=1
    stopsignal=INT
    autostart=true
    autorestart=true
    startretries=10

In your case, the command in this example should run your Java app instead.
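
For example, a hypothetical variant of that config for a Java app (the java path and jar name are assumptions):

    [program:myJavaApp]
    command=/usr/bin/java -jar /app/target/my-app.jar
    user=root
    numprocs=1
    stopsignal=INT
    autostart=true
    autorestart=true
    startretries=10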

And this is an example of command aliases for conveniently managing Supervisor from outside the containers. I'm using a Makefile as a universal runner for all commands, but this could be something else.

    # Used to run all workers
    su-start:
        @docker exec -t MY-WORKERS-CONTAINER-NAME supervisorctl start all

    # Used to stop all workers
    su-stop:
        @docker exec -t MY-WORKERS-CONTAINER-NAME supervisorctl stop all

    # Used to restart all workers
    su-restart:
        @docker exec -t MY-WORKERS-CONTAINER-NAME supervisorctl restart all

    # Used to check status of all workers
    su-status:
        @docker exec -t MY-WORKERS-CONTAINER-NAME supervisorctl status

As I described above, these Supervisor commands need to be run manually, but I think it is possible to implement another Node-based worker, or some watcher outside of the workers container, that detects file system changes in the sources directory and runs these commands automatically (a sketch follows below). I think it is possible to implement something like this using Java as well.

On the other hand, it needs to be done carefully, to avoid constantly restarting the workers on every little change.
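
A hedged sketch of such a watcher in Node (it assumes chokidar is installed via npm install chokidar and that the Makefile aliases above exist); the debounce is what guards against constant restarts:

    // watcher.js - run on the host, outside the containers
    const chokidar = require('chokidar');
    const { execSync } = require('child_process');

    let timer;
    chokidar.watch('./src', { ignoreInitial: true }).on('all', () => {
      // debounce: wait until changes settle before restarting the workers
      clearTimeout(timer);
      timer = setTimeout(() => {
        execSync('make su-restart', { stdio: 'inherit' });
      }, 2000);
    });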


I've seen them all used in different scenarios. There are some gotchas to avoid:

  1. Applications inside of a container shouldn't depend on something running outside of a container on the host. So all of your dependencies should be containerized first.

  2. File permissions with host volumes can be complicated, depending on your version of Docker. Some of the newer Docker Desktop installs handle uid mappings automatically, but if you develop directly on Linux you'll need to ensure the containers run with the same uid as your host user (a sketch follows this list).

  3. Avoid making changes inside the container that aren't mapped into a host volume, since those changes will be lost when the container is recreated.
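
For gotcha 2, a sketch of pinning a container to the host user's uid/gid in a Compose file (UID and GID must be exported in your shell first; the fallbacks and paths are assumptions):

    services:
      app:
        build: .
        # run as the host user so files written to the mounted volume
        # stay owned by you rather than by root
        user: "${UID:-1000}:${GID:-1000}"
        volumes:
          - .:/app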

Looking at the options, here's my assessment of each:

  1. Containerizing just the DB: This works well when developers already have a development environment for the language of choice, and there's no risk of external dependencies creeping in, e.g. a developer upgrading their JDK install to a newer version than the image is built with. It follows the idea of containerizing the dependencies first, while also giving developers the familiar IDE integration with their application.

  2. Rebuilding the Image for Every Change: This tends to be the least ideal for developer workflow, but the quickest to implement when you're not familiar with the tooling. I'll give a 4th option that I consider an improvement to this.

  3. Everything in a container, volume mounts, and live reloading: This is the most complicated to implement, and requires the language itself to support things like live reloading. However, when they do, it is nearly seamless for the developers and gets them up to speed on a new project quickly without needing to install any other tooling to get started.

  4. Rebuild the app in the container with volume mounts: This is a halfway point between 2 and 3. When you don't have live reloading, you likely need to recompile or restart the interpreter to see any change. Rather than rebuilding the image, I put the recompile step in the entrypoint of a development image. I mount the code into the container and run a full JDK instead of just a JRE (or whatever compiler is needed). I use named volumes for any dependency caches so they don't need to be downloaded on every restart. Then, to see the changes, you restart that one container. The steps are identical to those for a compiled binary outside of a container: stop the old service, recompile, and restart the service. But now it happens inside a container that should have the same tools used to build the production image.

For option 4, I tend to use a multi-stage build that has stages for build, develop, and release. The build stage pulls in the code and compiles it, the develop stage is the same base image as build but with an entrypoint that does the compile/run, and the release stage copies the result of the build stage into a minimal runtime. Developers then have a compose file for development that builds the development image and runs it with volume mounts and any debugging ports opened.
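
A sketch of such a multi-stage Dockerfile for a hypothetical Maven project (the image tags, paths and jar name are assumptions):

    # build stage: pulls in the code and compiles it
    FROM maven:3.9-eclipse-temurin-17 AS build
    WORKDIR /app
    COPY pom.xml .
    RUN mvn dependency:go-offline       # cached while pom.xml is unchanged
    COPY src ./src
    RUN mvn package -DskipTests

    # develop stage: same base as build, entrypoint recompiles and runs
    FROM build AS develop
    ENTRYPOINT ["sh", "-c", "mvn package -DskipTests && java -jar target/my-app.jar"]

    # release stage: minimal runtime, only the compiled artifact is copied in
    FROM eclipse-temurin:17-jre AS release
    COPY --from=build /app/target/my-app.jar /app.jar
    ENTRYPOINT ["java", "-jar", "/app.jar"]

The matching development compose file then targets the develop stage, mounts the code, and keeps the dependency cache in a named volume:

    services:
      app:
        build:
          context: .
          target: develop             # stop at the develop stage
        volumes:
          - .:/app                    # code mounted from the host
          - m2-cache:/root/.m2        # named volume so dependencies aren't re-downloaded
        ports:
          - "5005:5005"               # remote debugging port
    volumes:
      m2-cache: {}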