
Sharing CPU limits for containers within a pod


As per the documentation, resource constraints are only applicable at the container level. You can, however, define different requests and limits to allow the container to burst beyond the amount defined in requests. This comes with other implications; see Quality of Service.
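For illustration, a container spec along these lines sets requests below limits so the container can burst; the name, image, and values are placeholders, and because requests and limits differ, the Pod would fall into the Burstable QoS class:

apiVersion: v1
kind: Pod
metadata:
  name: burstable-demo            # hypothetical name
spec:
  containers:
  - name: app
    image: nginx                  # placeholder image
    resources:
      requests:
        cpu: 250m                 # scheduler reserves this much CPU
        memory: 256Mi
      limits:
        cpu: 500m                 # the container may burst up to here, then gets throttled
        memory: 512Mi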

The reason for this is that some resources, such as memory, cannot be competed for the way CPU can. Memory is either enough or too little; there is no such thing in Kubernetes as shared RAM (unless you explicitly make the relevant system calls).

May I ask what the use case for Pod-internal CPU competition is?


How about controlling resource usage inside your K8s cluster with a ResourceQuota? This should let you benchmark CPU/memory usage by your Pod inside a dedicated namespace, with the help of the kube_resourcequota monitoring metrics, under different conditions set with a LimitRange or directly with the container's resource limits and requests.
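For reference, the LimitRange route mentioned above could look roughly like this; the name and values here are only illustrative namespace defaults, not something from the original question:

apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-mem-limit-range       # hypothetical name
spec:
  limits:
  - type: Container
    default:                      # applied as limits when a container sets none
      cpu: 500m
      memory: 512Mi
    defaultRequest:               # applied as requests when a container sets none
      cpu: 250m
      memory: 256Mi
    max:                          # upper bound any single container may request
      cpu: "1"
      memory: 1Gi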

What I mean exactly is to set a ResourceQuota similar to this one:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-demo
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
    pods: "1"

then run a Pod with resource limits and requests:

  ...
  containers:
  - image: gcr.io/google-samples/hello-app:1.0
    imagePullPolicy: IfNotPresent
    name: hello-app
    ports:
    - containerPort: 8080
      protocol: TCP
    resources:
      limits:
        cpu: "1"
        memory: 800Mi
      requests:
        cpu: 900m
        memory: 600Mi
  ...

and just observe in a monitoring console how the Pod performs*, for instance with Prometheus:

[Prometheus chart: Pod memory usage plotted against the ResourceQuota limit]

*Green represents the overall memory usage of the Pod; red is the fixed/hard resource limit set with the ResourceQuota.

I guess you would aim to reduce the gap between the two lines to avoid an undercommitted system, while at the same time avoiding Pod failures like this one:

status:
  message: 'Pod Node didn''t have enough resource: cpu, requested: 400, used: 893,
    capacity: 940'
  phase: Failed
  reason: OutOfcpu

Of course, ideally this memory usage trend would be stacked on the cockpit chart alongside some other custom/performance monitoring metric of your interest.