
Docker stats show memory usage less than output of top command


You are comparing the top/htop RES memory (man page):

The non-swapped physical memory a task has used. RES = CODE + DATA.
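
For reference, the same RES/RSS figure can be read straight from procfs; a minimal sketch, where 1234 is a placeholder PID:

$ grep VmRSS /proc/1234/status      # resident set size, in kB
$ ps -o pid,rss,comm -p 1234        # same value via ps (RSS column, in kB)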

with the docker stats CLI output (docs):

On Linux, the Docker CLI reports memory usage by subtracting cache usage from the total memory usage.
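
You can reproduce that subtraction yourself from the cgroup files. A rough sketch, assuming cgroup v1 with the default cgroupfs layout and a placeholder container name my-container (newer Docker versions subtract total_inactive_file rather than cache, so treat this as an approximation):

CID=$(docker inspect -f '{{.Id}}' my-container)
CG=/sys/fs/cgroup/memory/docker/$CID
USAGE=$(cat $CG/memory.usage_in_bytes)
CACHE=$(awk '/^total_cache /{print $2}' $CG/memory.stat)
echo $((USAGE - CACHE))    # roughly the MEM USAGE column of docker stats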

Use the docker stats API and you will get a much more granular view, e.g. the stats for memory:

{    "total_pgmajfault": 0,    "cache": 0,    "mapped_file": 0,    "total_inactive_file": 0,    "pgpgout": 414,    "rss": 6537216,    "total_mapped_file": 0,    "writeback": 0,    "unevictable": 0,    "pgpgin": 477,    "total_unevictable": 0,    "pgmajfault": 0,    "total_rss": 6537216,    "total_rss_huge": 6291456,    "total_writeback": 0,    "total_inactive_anon": 0,    "rss_huge": 6291456,    "hierarchical_memory_limit": 67108864,    "total_pgfault": 964,    "total_active_file": 0,    "active_anon": 6537216,    "total_active_anon": 6537216,    "total_pgpgout": 414,    "total_cache": 0,    "inactive_anon": 0,    "active_file": 0,    "pgfault": 964,    "inactive_file": 0,    "total_pgpgin": 477}

You can see that memory is not just one number; there are many types of memory, and each tool may report its own set and combination of them. My guess is that you will find the missing memory in the cache memory allocated by your application.

You can check the overall basic memory allocation with the free command:

$ free -m
              total        used        free      shared  buff/cache   available
Mem:           2000        1247          90         178         662         385
Swap:             0           0           0

It is normal for Linux to use otherwise unused memory for buff/cache.
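
If you want to convince yourself that buff/cache is reclaimable rather than truly used, you can drop the page cache on a test machine (expect a temporary slowdown while the cache warms up again):

$ sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
$ free -m          # buff/cache shrinks and free grows accordingly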


docker stats reports the resource usage of the container's cgroup:

$ docker run -it -m 1g --cpus 1.5 --name test-stats busybox /bin/sh
/ # cat /sys/fs/cgroup/memory/memory.usage_in_bytes
2629632
/ # cat /sys/fs/cgroup/memory/memory.limit_in_bytes
1073741824
/ # cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us
150000
/ # cat /sys/fs/cgroup/cpu/cpu.cfs_period_us
100000

From another window (the value varies slightly because the cat commands have since exited):

$ docker stats --no-stream test-stats
CONTAINER ID   NAME         CPU %     MEM USAGE / LIMIT   MEM %     NET I/O         BLOCK I/O   PIDS
9a69d1323422   test-stats   0.00%     2.395MiB / 1GiB     0.23%     5.46kB / 796B   3MB / 0B    1
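
The MEM % column is simply MEM USAGE divided by the limit; checking the numbers above by hand gives the 0.23% shown:

$ echo "scale=4; 2.395 * 100 / 1024" | bc    # MiB used * 100 / MiB limit
.2338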

Note that this will differ from the overall host memory and CPU if you have specified limits on your containers. Without limits, the CPU quota will be -1 (unrestricted), and the memory limit will be set to the page counter max value.
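
A quick way to see those defaults is to start a container without any limits; a sketch, where no-limits is a placeholder name and the large number is the page counter max you would typically see on x86_64 with 4 KiB pages (cgroup v1):

$ docker run -it --name no-limits busybox /bin/sh
/ # cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us
-1
/ # cat /sys/fs/cgroup/memory/memory.limit_in_bytes
9223372036854771712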

Trying to add up memory usage from the top command is very error prone. There are different types of memory in the Linux kernel (including disk cache), memory gets shared between multiple threads (which is why you likely see multiple pids for your app, each with the exact same memory), some memory may be mmapped and not backed by RAM, and there is a long list of other challenges. People who know much more about this than I do will say that the kernel doesn't even know when it's actually out of memory until it attempts to reclaim memory from many processes and all of those attempts fail.
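
One way to see the shared-memory double counting is to compare Rss with Pss (the proportional set size, which splits each shared page between the processes that map it); a small sketch, with 1234 as a placeholder PID (smaps_rollup needs kernel 4.14+):

$ grep -E '^(Rss|Pss):' /proc/1234/smaps_rollup

Summing Pss across all of an app's pids gets much closer to the real total than summing the Rss/RES values that top shows.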