How do I elegantly and safely maximize the amount of heap space allocated to a Java application in Kubernetes?



The reason Kubernetes kills your pods is the resource limit. It is difficult to calculate because of container overhead and the usual mismatches between decimal and binary prefixes when specifying memory usage. My solution is to drop the limit entirely and only keep the request (which is what your pod will have available in any case if it is scheduled). Rely on the JVM to limit its heap via a static specification and let Kubernetes manage how many pods are scheduled on a single node via the resource request.
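For illustration, a minimal pod template fragment along those lines might look like this (image name, jar path and the request value are placeholders, not taken from the answer; the request should be the measured footprint described below):

    # Static JVM heap plus a memory request only - no memory limit on purpose,
    # because the JVM's -Xmx already caps the heap.
    containers:
    - name: app
      image: my-java-app:latest                # placeholder image
      command: ["java", "-Xms1024m", "-Xmx1024m", "-jar", "/app.jar"]
      resources:
        requests:
          memory: "1280Mi"                     # measured container footprint, not just the heap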

First you will need to determine the actual memory usage of your container when running with your desired heap size. Run a pod with -Xmx1024m -Xms1024m and connect to the Docker daemon on the host it is scheduled on. Run docker ps to find your pod and docker stats <container> to see its current memory usage, which is the sum of the JVM heap, other static JVM usage like direct memory, and your container's overhead (Alpine with glibc). This value should only fluctuate within kibibytes because of some network usage that is handled outside the JVM. Add this value as the memory request to your pod template.
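Assuming SSH access to the node and a Docker-based runtime, the measurement boils down to something like this (the container name is a placeholder):

    # On the node where the pod is scheduled:
    docker ps | grep my-java-app        # find the container ID of your pod's container
    docker stats <container-id>         # watch the MEM USAGE column settle after startup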

Calculate or estimate how much memory the other components on your nodes need to function properly. There will at least be the Kubernetes kubelet, the Linux kernel and its userland, probably an SSH daemon, and in your case a Docker daemon running on them. You can choose a generous default like 1 gibibyte, excluding the kubelet, if you can spare the extra few bytes. Specify --system-reserved=1Gi and --kube-reserved=100Mi in your kubelet's flags and restart it. This adds those reserved resources to the Kubernetes scheduler's calculations when determining how many pods can run on a node. See the official Kubernetes documentation for more information.
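If your kubelet is driven by a configuration file rather than command-line flags, the equivalent settings would look roughly like this (same values as the flags above; which mechanism applies depends on how your kubelet is started):

    # KubeletConfiguration equivalent of --system-reserved=1Gi --kube-reserved=100Mi
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    systemReserved:
      memory: "1Gi"
    kubeReserved:
      memory: "100Mi"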

This way there will probably be five to seven pods scheduled on a node with eight gigabytes of RAM, depending on the values chosen and measured above. They will be guaranteed the RAM specified in the memory request and will not be terminated. Verify the memory usage via kubectl describe node under Allocated resources. As for elegance/flexibility, you only need to adjust the memory request and the JVM heap size if you want to increase the RAM available to your application.
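For example (node name and the numbers are illustrative only):

    kubectl describe node worker-1
    # Check the "Allocated resources" section of the output, e.g.:
    #   Resource  Requests      Limits
    #   memory    5800Mi (79%)  0 (0%)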

This approach only works assuming that the pod's memory usage will not explode; if it were not limited by the JVM, a rogue pod could cause eviction. See out-of-resource handling.


What we do in our case is launch with a high memory limit on Kubernetes, observe over time under load, and either tune memory usage to the level we want to reach with -Xmx or adapt the memory limits (and requests) to the real memory consumption. Truth be told, we usually use a mix of both approaches. The key to this method is to have decent monitoring enabled on your cluster (Prometheus in our case); if you want a high level of fine-tuning you might also want to add something like a JMX Prometheus exporter, to have detailed insight into metrics when tuning your setup.
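As a rough sketch of that starting point (all names and numbers are placeholders, not from the answer; the idea is simply that the limit starts deliberately high and the heap is set explicitly so both can be tightened after observation):

    # Deployment fragment: generous initial limit, explicit heap via environment
    containers:
    - name: app
      image: my-java-app:latest                  # placeholder
      env:
      - name: JAVA_TOOL_OPTIONS                  # picked up automatically by the JVM
        value: "-Xms512m -Xmx1024m"
      resources:
        requests:
          memory: "1536Mi"
        limits:
          memory: "3072Mi"                       # start high, lower once real usage is known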


I think the issue here is that the Kubernetes memory limits are for the container and MaxRAMFraction is for the JVM. So, if the JVM heap is the same as the Kubernetes limit, then there won't be enough memory left for the container itself.
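If your JDK is new enough (10+, or 8u191+), one way to leave that headroom is -XX:MaxRAMPercentage, which caps the heap at a fraction of the container limit rather than at a fixed size (this is a suggestion beyond the original answer, not part of it), for example:

    # Heap capped at roughly 75% of the container's memory limit, leaving the
    # remaining ~25% for metaspace, thread stacks, direct buffers and the container itself.
    java -XX:MaxRAMPercentage=75.0 -jar /app.jar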

One thing you can try is increasing

limits:
  memory: 2048Mi

keeping the requests the same. The fundamental difference between requests and limits is that requests let you use more than the requested amount if there is memory available at the node level, while limits are a hard cap. This may not be the ideal solution and you will have to figure out how much memory your pod is consuming on top of the JVM, but as a quick fix increasing the limit should work.
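Putting the two together, the resources block would look something like this (the request value is a placeholder for whatever you had before):

    resources:
      requests:
        memory: "1024Mi"      # unchanged - what the scheduler guarantees the pod
      limits:
        memory: "2048Mi"      # raised hard cap, as suggested above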