
Why is docker image eating up my disk space that is not used by docker


Deleting my entire /var/lib/docker is not OK for me. These are safer ways:

Solution 1:

The following commands from the issue clear up space for me, and they're a lot safer than deleting /var/lib/docker. (On Windows, check your Docker disk image location instead.)

Before:

docker info

Example output:

Metadata file:
Data Space Used: 53.38 GB
Data Space Total: 53.39 GB
Data Space Available: 8.389 MB
Metadata Space Used: 6.234 MB
Metadata Space Total: 54.53 MB
Metadata Space Available: 48.29 MB

Command for newer versions of Docker, e.g. 17.x and later:

docker system prune -a

It will show you a warning that it will remove all stopped containers, unused networks, unused images, and the build cache. Generally it's safe to remove these. (The next time you run a container, Docker may need to pull images from the registry again.)

Example output:

Total reclaimed space: 1.243GB

You can then run docker info again to see what has been cleaned up:

docker info
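If you want to see where the space is going before pruning, or to prune less aggressively, variants like these may help (a small sketch; the until filter value is just an example):

docker system df        # summary of space used by images, containers, volumes and build cache
docker system df -v     # verbose, per-image / per-container breakdown
docker system prune                           # without -a: only stopped containers, dangling images, unused networks and build cache
docker system prune -a --filter "until=24h"   # only prune objects older than 24 hours (example value)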

Solution 2:

Along with this, make sure the programs inside your Docker containers are not writing many/huge files to the container filesystem.

Check your running containers' space usage:

docker ps -s #may take minutes to return

or for all containers, including exited ones:

docker ps -as #may take minutes to return
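If you have many containers, a formatted listing can make the big ones easier to spot (a sketch; the placeholders are standard docker ps format fields):

docker ps -as --format "table {{.ID}}\t{{.Names}}\t{{.Size}}"   # may also take minutes to return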

You can then delete the offending container(s):

docker rm <CONTAINER ID>

Find the possible culprit that may be using gigabytes of space:

docker exec -it <CONTAINER ID> /bin/sh
du -h
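If you'd rather not poke around interactively, a one-liner like this can list the largest directories in one go (a sketch; <CONTAINER ID> is a placeholder, and it assumes GNU du and sort are available inside the image):

docker exec <CONTAINER ID> sh -c "du -ah / 2>/dev/null | sort -rh | head -n 20"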

In my case the program was writing gigs of temp files.

(Nathaniel Waisbrot mentioned this issue in the accepted answer, and I got some of this info from that issue.)


OR

Commands for older versions of Docker, e.g. 1.13.x (run as root, not via sudo):

# Delete 'exited' containers
docker rm -v $(docker ps -a -q -f status=exited)

# Delete 'dangling' images (if there are none you will get: "docker rmi" requires at least 1 argument)
docker rmi $(docker images -f "dangling=true" -q)

# Delete 'dangling' volumes (if there are none you will get: "docker volume rm" requires at least 1 argument)
docker volume rm $(docker volume ls -qf dangling=true)
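If you'd rather not see the "requires at least 1 argument" errors, a small wrapper that checks for output first might look like this (a sketch, not an official script):

#!/bin/sh
# Only run each cleanup step if there is actually something to delete
exited=$(docker ps -a -q -f status=exited)
[ -n "$exited" ] && docker rm -v $exited

dangling_images=$(docker images -f "dangling=true" -q)
[ -n "$dangling_images" ] && docker rmi $dangling_images

dangling_volumes=$(docker volume ls -qf dangling=true)
[ -n "$dangling_volumes" ] && docker volume rm $dangling_volumes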

After:

> docker info

Metadata file:
Data Space Used: 1.43 GB
Data Space Total: 53.39 GB
Data Space Available: 51.96 GB
Metadata Space Used: 577.5 kB
Metadata Space Total: 54.53 MB
Metadata Space Available: 53.95 MB


It's a kernel problem with devicemapper, which affects the RedHat family of OSes (RedHat, Fedora, CentOS, and Amazon Linux). Deleted containers don't free up mapped disk space. This means that on the affected OSes you'll slowly run out of space as you start and restart containers.

The Docker project is aware of this, and the kernel is supposedly fixed in upstream (https://github.com/docker/docker/issues/3182).

A work-around of sorts is to give Docker its own volume to write to ("When Docker eats up your disk space"). This doesn't actually stop it from eating space, just from taking down other parts of your system after it does.
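One way to do that on a more recent Docker install is to point the daemon's data directory at a dedicated partition via daemon.json (a sketch; the /mnt/docker mount point is just an example, and older Docker versions used the -g/--graph daemon flag instead of data-root):

# /etc/docker/daemon.json
{
  "data-root": "/mnt/docker"
}

sudo systemctl restart docker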

My solution was to uninstall docker, then delete all its files, then reinstall:

sudo yum remove docker
sudo rm -rf /var/lib/docker
sudo yum install docker

This got my space back, but it's not much different than just launching a replacement instance. I have not found a nicer solution.
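If you go this route, it is probably safer to stop the daemon before deleting its data directory (the same steps as above with stop/start added; systemctl is an assumption about your init system):

sudo systemctl stop docker
sudo yum remove docker
sudo rm -rf /var/lib/docker
sudo yum install docker
sudo systemctl start docker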


Move the /var/lib/docker directory.

Assuming the /data directory has enough room (if not, substitute one that does):

sudo systemctl stop docker
sudo mv /var/lib/docker /data
sudo ln -s /data/docker /var/lib/docker
sudo systemctl start docker

This way, you don't have to reconfigure docker.
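To double-check that Docker is still happy after the move, something like this should work (hello-world is just an example image):

docker info | grep "Docker Root Dir"   # still reports /var/lib/docker, which now points at /data/docker
docker run --rm hello-world            # quick sanity check that the daemon can pull and run
df -h /data                            # confirm the image data now consumes space on the new filesystem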