Docker push takes a really long time


Just a note: I run my own Docker registry, local to the machine I am issuing the "docker push" command on, and the push still takes an inordinate amount of time. It is definitely not a disk I/O problem: the registry is backed by SSDs that sustain 500+ MB/sec for everything else that uses them. Yet "docker push" takes about as long as pushing to a remote site, so something beyond bandwidth is going on. My suspicion is that even though my registry is local, the push still goes through the network stack to transfer data, which seems plausible given that the push destination is a URI and the registry itself runs as a container.
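For reference, this is essentially the setup, as a minimal sketch (the image name and port are placeholders, not my actual configuration):

    # Run a registry container on the same host (5000 is the registry's default port)
    docker run -d -p 5000:5000 --name registry registry:2

    # Tag an existing image for the local registry and time the push
    docker tag myimage:latest localhost:5000/myimage:latest
    time docker push localhost:5000/myimage:latest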

That being said, I can copy the same files to where they will ultimately reside in the local registry's storage orders of magnitude faster than the push command can get them there. Perhaps that is the workaround. Either way, it is clear the problem is not bandwidth per se, but the data path in general.
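As a rough comparison, the same image can be moved without touching the registry's push path at all by exporting it as a tarball (myimage is a placeholder):

    # Export the image to a tar archive and time it
    time docker save -o myimage.tar myimage:latest

    # Load it back on the same (or another) host
    time docker load -i myimage.tar

Comparing these timings against "time docker push" makes the data-path overhead stand out.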

At any rate, running a local registry will not likely (totally) solve the OP's issue. While I have only just started to investigate, I suspect a code change to Docker is needed to resolve this. I don't think it is a bug so much as a design challenge: URIs and host-to-host communication require a network stack, even when the source and destination are the same machine/host/container.


As was said in the previous answer, you could use a local registry. It is not hard to install and run, and the official Docker documentation explains how to get started. It can be much faster because you are not limited by your provider's upload speed. And you can always push an image from the local registry to Docker Hub or to another local registry afterwards (for example, one installed on your customer's network).
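As a sketch of that last step (localhost:5000 and youruser are placeholders for your registry address and your Docker Hub account):

    # Pull the image from the local registry
    docker pull localhost:5000/myimage:latest

    # Re-tag it for Docker Hub and push
    docker tag localhost:5000/myimage:latest youruser/myimage:latest
    docker login
    docker push youruser/myimage:latest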

One more thing I would suggest, in terms of continuous integration and delivery, is to use a continuous integration server that builds your images automatically on a native Linux host, where you don't need boot2docker or docker-machine. For test and development purposes, you can build your images locally without pushing to a remote registry at all.
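For example, the CI job itself can be as simple as a shell script the server runs on each commit (the registry address and image name below are placeholders):

    #!/bin/sh
    # Minimal CI build step: build on a native Linux host, then push
    # to an in-network registry (registry.example.local:5000 is a placeholder)
    set -e
    TAG=$(git rev-parse --short HEAD)
    docker build -t registry.example.local:5000/myapp:"$TAG" .
    docker push registry.example.local:5000/myapp:"$TAG"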


For this reason, organizations typically run their own registries on the local network. Doing so also keeps them in control of their own data and avoids relying on an external service.
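As an illustration (the port and host path are examples, not a recommendation), such a registry can be started with persistent storage and a restart policy:

    # Run a registry that survives reboots and stores its data on the host
    docker run -d \
      --name registry \
      --restart=always \
      -p 5000:5000 \
      -v /srv/registry:/var/lib/registry \
      registry:2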

You will also find that cloud providers offer hosted registries, such as Google Container Registry and the Amazon EC2 Container Registry, to provide users with fast, local downloads.