
Kubernetes memory limit : Containers with and without memory limit on same pod


Kubernetes places your pods in Quality Of Service classes based on whether you have added requests and limits.

If all the containers in the pod have both CPU and memory limits set, with requests equal to the limits (requests default to the limits when omitted), the pod falls under the Guaranteed class.

If at least one container in the pod has requests (or limits) set, the pod comes under the Burstable class.

If no requests or limits are set on any container, the pod comes under the BestEffort class.

In your example, your pod falls under Burstable class because C2 does not have limits set.
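The classification rules above can be sketched as a small function. This is a simplified model, not the kubelet's actual code; real Kubernetes evaluates both cpu and memory and defaults requests from limits in the same way:

```python
def qos_class(containers):
    """Simplified QoS classification following the rules above.

    Each container is a dict like {"requests": {...}, "limits": {...}}.
    """
    def has_any(c):
        return bool(c.get("requests")) or bool(c.get("limits"))

    def guaranteed(c):
        # Guaranteed needs both cpu and memory limits on every container,
        # with requests equal to limits (requests default to the limits).
        limits = c.get("limits", {})
        requests = c.get("requests", limits)
        return "cpu" in limits and "memory" in limits and requests == limits

    if all(guaranteed(c) for c in containers):
        return "Guaranteed"
    if any(has_any(c) for c in containers):
        return "Burstable"
    return "BestEffort"

# The situation from the question: C1 has limits set, C2 has nothing.
c1 = {"requests": {"cpu": "100m", "memory": "1Gi"},
      "limits": {"cpu": "100m", "memory": "1Gi"}}
c2 = {}
print(qos_class([c1, c2]))  # Burstable
```

Dropping C2 from the pod would make it Guaranteed, and removing C1's resources as well would make it BestEffort.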


These requests and limits are used in two contexts - scheduling and resource exhaustion.

Scheduling

During scheduling, requests are considered to select a node based on available resources. Limits can be over-committed and are not considered in scheduling decisions.
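A toy model of the fit check makes the over-commit point concrete (hypothetical numbers; the real scheduler also evaluates taints, affinity, and other predicates): only the sum of requests must fit within the node's allocatable capacity, while the sum of limits may exceed it.

```python
def fits(node_allocatable_m, pods):
    """Schedulability check on CPU millicores: only requests count."""
    total_requests = sum(p["requests_m"] for p in pods)
    return total_requests <= node_allocatable_m

# Hypothetical node with 2000m allocatable CPU.
pods = [
    {"requests_m": 500, "limits_m": 1500},
    {"requests_m": 500, "limits_m": 1500},
    {"requests_m": 500, "limits_m": 1500},
]
# Requests sum to 1500m <= 2000m, so all three pods fit,
# even though their limits sum to 4500m (over-committed).
print(fits(2000, pods))  # True
```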

Resource exhaustion

There are two resources on which you can specify requests and limits natively - cpu and memory.

CPU is a compressible resource, i.e., the kernel can throttle the CPU usage of a process if required by allocating it less CPU time. So a process is allowed to use as much CPU as it wants while other processes are idle. If another process needs the CPU, the OS can simply throttle the CPU time of the process using more. The unused CPU time is split among containers in the ratio of their requests. If you don't want this unlimited-CPU behaviour, i.e., you want your container not to cross a certain threshold, set a limit.
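The "split in the ratio of their requests" behaviour can be illustrated with a small calculation. This is a simplification of how the kernel's CFS shares work (the kubelet derives cgroup CPU shares from the requests):

```python
def cpu_split(requests_m, capacity_m):
    """Divide node CPU among always-busy containers
    proportionally to their requests (CFS-shares-style)."""
    total = sum(requests_m.values())
    return {name: capacity_m * req / total
            for name, req in requests_m.items()}

# Two busy containers on a 2000m node, requesting 250m and 750m:
print(cpu_split({"a": 250, "b": 750}, 2000))
# a gets 500m, b gets 1500m -- a 1:3 ratio matching the requests.
```

If one container goes idle, the other can use the leftover CPU; the ratio only matters under contention.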

Memory is not a compressible resource. Once allocated to a process, the kernel cannot reclaim the memory. So if a limit is set, a process gets OOM killed when it tries to use more than the limit. If no limit is set, the process can allocate as much as it wants, but if the node runs out of memory, the only way to regain free memory is to kill a process. This is where the QoS class comes into the picture. A BestEffort container would be first in line to get OOM killed. Next, Burstable containers would be killed before any Guaranteed container. Among containers of the same QoS class, the container using a higher percentage of memory relative to its request is OOM killed first.
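The kill ordering described above can be sketched as a sort key. This is a simplification: in reality the kubelet sets per-process oom_score_adj values based on QoS class and requests, and the kernel picks the victim from those scores.

```python
QOS_PRIORITY = {"BestEffort": 0, "Burstable": 1, "Guaranteed": 2}

def oom_kill_order(containers):
    """Sort containers by how soon they'd be OOM killed:
    BestEffort first, then Burstable, then Guaranteed; within a
    class, higher usage relative to the request dies first."""
    def key(c):
        request = c.get("request_mb") or 1  # no request: treat as minimal
        overuse = c["usage_mb"] / request
        return (QOS_PRIORITY[c["qos"]], -overuse)
    return sorted(containers, key=key)

containers = [
    {"name": "guaranteed", "qos": "Guaranteed", "usage_mb": 900, "request_mb": 1000},
    {"name": "besteffort", "qos": "BestEffort", "usage_mb": 100, "request_mb": None},
    {"name": "burstable-hot", "qos": "Burstable", "usage_mb": 800, "request_mb": 200},
    {"name": "burstable-cool", "qos": "Burstable", "usage_mb": 250, "request_mb": 200},
]
print([c["name"] for c in oom_kill_order(containers)])
# ['besteffort', 'burstable-hot', 'burstable-cool', 'guaranteed']
```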


From what I can see with a kubectl describe nodes, the memory/cpu request/limits for the PodA are the same as the one from C1. Is that correct?

Yes

What are the memory/cpu limits for C2? Is it unbounded? Limited to the limits of PodA (e.g. limits of C1)?

CPU, as a compressible resource, is unbounded for all containers (or bounded by the limit if one is specified). C2 would get throttled when the other containers with requests set need more CPU time.

Follow up of #2 -> What happens if C2 asks for more than 1Gi of memory? Will the container run out of memory, and cause the whole pod to crash? Or will it be able to grab more memory, as long as the node has free memory?

It can grab as much memory as it wants. But it would be the first to get OOM killed if the node has no more free memory to allocate to other processes.