
Difference between Resident Set Size (RSS) and Java total committed memory (NMT) for a JVM running in Docker container


You will find some clues in "Analyzing java memory usage in a Docker container" by Mikhail Krestjaninoff:

(And to be clear, in May 2019, three years later, the situation did improve with openJDK 8u212.)
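For instance, with a container-aware JDK you can let the JVM derive its heap from the container's memory limit instead of the host's RAM. A minimal sketch (the image name and the 75% figure are only illustrative):

# UseContainerSupport is on by default in recent JDK 8 updates and JDK 10+;
# MaxRAMPercentage sizes the heap relative to the cgroup memory limit.
docker run -m 512m my-app \
    java -XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0 -jar app.jar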

Resident Set Size is the amount of physical memory currently allocated and used by a process (without swapped out pages). It includes the code, data and shared libraries (which are counted in every process which uses them)

Why does docker stats info differ from the ps data?

The answer to the first question is very simple: Docker has a bug (or a feature, depending on your mood): it includes file caches in the total memory usage info. So we can just avoid this metric and use the ps info about RSS.
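A minimal way to compare the two views on a live container (assuming the container is called my-app, ships ps, and the host uses cgroup v1; all of these are assumptions):

# RSS of PID 1 (the JVM) as the kernel sees it, in kB
docker exec my-app ps -o pid,rss,vsz,comm -p 1

# the page cache that docker stats folds into its "memory usage" figure
docker exec my-app sh -c 'grep -E "^(cache|rss) " /sys/fs/cgroup/memory/memory.stat'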

Well, ok - but why is RSS higher than Xmx?

Theoretically, in the case of a Java application,

RSS = Heap size + MetaSpace + OffHeap size

where OffHeap consists of thread stacks, direct buffers, mapped files (libraries and jars) and the JVM code itself.
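If you want an explicit ceiling on each term of that sum, the usual flags look roughly like this (the values below are placeholders, not recommendations):

# -Xmx                      -> heap
# -XX:MaxMetaspaceSize      -> class metadata (MetaSpace)
# -XX:ReservedCodeCacheSize -> JIT-compiled code (OffHeap)
# -XX:MaxDirectMemorySize   -> direct ByteBuffers (OffHeap)
# -Xss                      -> per-thread stack size (OffHeap)
java -Xmx256m -XX:MaxMetaspaceSize=128m -XX:ReservedCodeCacheSize=128m \
     -XX:MaxDirectMemorySize=64m -Xss512k -jar app.jar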

Since JDK 1.8.40 we have Native Memory Tracker!

As you can see, I’ve already added the -XX:NativeMemoryTracking=summary property to the JVM, so we can just invoke it from the command line:

docker exec my-app jcmd 1 VM.native_memory summary

(This is what the OP did)
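If you need to see which NMT category is the one that grows over time, jcmd can also diff against a baseline (same assumptions about the container name and PID as above):

docker exec my-app jcmd 1 VM.native_memory baseline
# ...let the application run for a while...
docker exec my-app jcmd 1 VM.native_memory summary.diff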

Don’t worry about the “Unknown” section - it seems that NMT is an immature tool and can’t deal with CMS GC (this section disappears when you use another GC).

Keep in mind that NMT displays “committed” memory, not "resident" (which you get through the ps command). In other words, a memory page can be committed without being counted as resident (until it is directly accessed).

That means that NMT results for non-heap areas (heap is always preinitialized) might be bigger than RSS values.

(that is where "Why does a JVM report more committed memory than the linux process resident set size?" comes in)
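One way to see that gap on a live container is to put the two numbers side by side (again assuming jcmd and ps are available inside the image):

# committed memory as the JVM reports it
docker exec my-app sh -c 'jcmd 1 VM.native_memory summary | grep -i total'
# resident memory as the kernel reports it, in kB
docker exec my-app ps -o rss= -p 1

Adding -XX:+AlwaysPreTouch makes the JVM touch every heap page at startup, so the heap's resident size jumps to its committed size immediately instead of growing as pages are first used.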

As a result, despite the fact that we set the JVM heap limit to 256m, our application consumes 367M. The “other” 164M is mostly used for storing class metadata, compiled code, threads and GC data.

The first three points are often constant for an application, so the only thing that increases with the heap size is the GC data.
This dependency is linear, but the “k” coefficient (y = kx + b) is much less than 1.
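In practice this means the container limit needs headroom above -Xmx. A rough sketch (the numbers are illustrative, not a sizing rule):

# ~256m heap plus room for metaspace, code cache, thread stacks and GC data
docker run -m 512m my-app java -Xmx256m -jar app.jar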


More generally, this seems to be tracked by docker issue 15020, which has reported a similar problem since docker 1.7:

I'm running a simple Scala (JVM) application which loads a lot of data into and out of memory.
I set the JVM to 8G heap (-Xmx8G). I have a machine with 132G memory, and it can't handle more than 7-8 containers because they grow well past the 8G limit I imposed on the JVM.

(docker stats was reported as misleading before, as it apparently includes file caches in the total memory usage info)

docker stats shows that each container itself is using much more memory than the JVM is supposed to be using. For instance:

CONTAINER   CPU %   MEM USAGE/LIMIT       MEM %    NET I/O
dave-1      3.55%   10.61 GB/135.3 GB     7.85%    7.132 MB/959.9 MB
perf-1      3.63%   16.51 GB/135.3 GB     12.21%   30.71 MB/5.115 GB

It almost seems that the JVM is asking the OS for memory, which is allocated within the container, and the JVM is freeing memory as its GC runs, but the container doesn't release the memory back to the main OS. So... memory leak.
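Two generic mitigations that are often tried when RSS keeps creeping up even though the heap is bounded (neither comes from the quoted issue; both are assumptions worth validating against your own workload):

# cap the number of glibc malloc arenas, which otherwise scales with CPU count
docker run -m 10g -e MALLOC_ARENA_MAX=2 my-scala-app

# encourage the collector to shrink the heap and hand freed pages back to the OS
java -Xmx8G -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=20 -jar app.jar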