What has to be done to deliver on Docker and avoid accumulating images?


The local workflow that works for me is:

  1. Do core development locally, without Docker. Things like interactive debuggers and live reloading work just fine in a non-Docker environment without weird hacks or root access, and installing the tools I need usually involves a single brew or apt-get step. Make all of my pytest/junit/rspec/jest/... tests pass.

  2. docker build a new image.

  3. docker stop && docker rm the old container.

  4. docker run a new container.

  5. When the number of old images starts to bother me, docker system prune.
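Steps 2–5 above can be sketched as a small shell loop. The image and container name `myapp` is a placeholder; substitute your own, and note that `docker stop`/`docker rm` will fail harmlessly on the very first run when no old container exists yet.

```shell
# One iteration of the local rebuild loop ("myapp" is a hypothetical name):
docker build -t myapp .                 # 2. build a new image from the Dockerfile
docker stop myapp && docker rm myapp    # 3. stop and remove the old container
docker run -d --name myapp myapp        # 4. start a fresh container from the new image
docker system prune -f                  # 5. occasionally: clear out dangling images
```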

If you're using Docker Compose, you might be able to replace the middle set of steps with docker-compose up --build.
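For reference, a minimal Compose file that supports that one-liner might look like this (service name and port are assumptions, not anything from a real project):

```yaml
# docker-compose.yml — hypothetical single-service setup
services:
  web:
    build: .          # build the image from the Dockerfile in this directory
    ports:
      - "8000:8000"   # placeholder port mapping
```

With this in place, `docker-compose up --build -d` rebuilds the image and replaces the running container in one step.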

In a production environment, the sequence is slightly different:

  1. When your CI system sees a new commit, after running the repository's local tests, it docker build && docker push a new image. The image has a unique tag, which could be a timestamp or source control commit ID or version tag.

  2. Your deployment system (could be the CI system or a separate CD system) tells whatever cluster manager you're using (Kubernetes, a Compose file with Docker Swarm, Nomad, an Ansible playbook, ...) about the new version tag. The deployment system takes care of stopping, starting, and removing containers.

  3. If your cluster manager doesn't handle this already, run a cron job to docker system prune.
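Steps 1 and 3 can be sketched as follows. The registry path `registry.example.com/myapp` is a placeholder, and the commit-ID tagging scheme is just one of the options mentioned above:

```shell
# Hypothetical CI step: build and push an image with a unique tag
# (here the short source-control commit ID).
TAG="$(git rev-parse --short HEAD)"
docker build -t "registry.example.com/myapp:$TAG" .
docker push "registry.example.com/myapp:$TAG"

# Example crontab entry for step 3: prune unused Docker data weekly at 03:00,
# keeping anything created within the last 7 days (168h):
# 0 3 * * 0 docker system prune -af --filter "until=168h"
```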


You should use:

docker system df

to inspect the disk space used by Docker.

After that you can use

docker system prune -a --volumes

to remove all unused components. You should stop containers yourself before doing this, since prune only removes stopped containers, but this way you are sure to cover everything.
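Put together, a full cleanup pass might look like this. Note that `docker ps -q | xargs docker stop` stops every running container on the host, so only use it when you really intend to lose them all:

```shell
docker system df                        # see what is using space
docker ps -q | xargs -r docker stop     # stop all running containers (destructive!)
docker system prune -a --volumes -f     # remove stopped containers, unused images,
                                        # networks, and volumes without prompting
```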