Kubernetes Deployment makes great use of cpu and memory without stressing it


With the kubectl top nodes command, I noticed that CPU and memory usage are increased without stressing the cluster. Does that make sense?

Yes, it makes sense. If you check the Google Cloud documentation about Requests and Limits:

Requests and limits are the mechanisms Kubernetes uses to control resources such as CPU and memory. Requests are what the container is guaranteed to get. If a container requests a resource, Kubernetes will only schedule it on a node that can give it that resource. Limits, on the other hand, make sure a container never goes above a certain value. The container is only allowed to go up to the limit, and then it is restricted.

But does it make sense for the memory/CPU usage to be increased from the beginning, without stressing it?

Yes. For example, your container www can start at memory: 50Mi and cpu: 80m, but it is allowed to grow up to memory: 100Mi and cpu: 120m. Also, as you mentioned, you have 15 containers in total, so depending on their requests and limits they can reach more than 35% of your memory.
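As a sketch, a container spec with those values would look like the snippet below (the www name and the exact numbers are just the illustrative figures from above, and the image is a placeholder):

```yaml
containers:
- name: www
  image: nginx            # placeholder image
  resources:
    requests:             # guaranteed; used by the scheduler for placement
      memory: "50Mi"
      cpu: "80m"
    limits:               # hard ceiling the container may grow up to
      memory: "100Mi"
      cpu: "120m"
```

So even an idle container is counted against the node for its requests, and a busy one may legitimately use anything up to its limits.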

In the HPA documentation (algorithm details) you can find this information:

When a targetAverageValue or targetAverageUtilization is specified, the currentMetricValue is computed by taking the average of the given metric across all Pods in the HorizontalPodAutoscaler's scale target. Before checking the tolerance and deciding on the final values, we take pod readiness and missing metrics into consideration, however.

All Pods with a deletion timestamp set (i.e. Pods in the process of being shut down) and all failed Pods are discarded.

If a particular Pod is missing metrics, it is set aside for later; Pods with missing metrics will be used to adjust the final scaling amount.
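Setting those adjustments aside, the core scaling rule from that page can be sketched as follows (a simplified illustration only; the readiness and missing-metrics handling quoted above is ignored, and the 10% tolerance is the controller's default):

```python
import math

def desired_replicas(current_replicas, current_metric_value,
                     desired_metric_value, tolerance=0.1):
    """Simplified HPA rule: desired = ceil(current * currentValue/desiredValue)."""
    ratio = current_metric_value / desired_metric_value
    # Within the tolerance band around 1.0 the HPA skips scaling.
    if abs(ratio - 1.0) <= tolerance:
        return current_replicas
    return math.ceil(current_replicas * ratio)

# e.g. 1 pod at 59% average memory utilization against a 35% target:
print(desired_replicas(1, 59, 35))  # 2
```

This is why 59% measured against a 35% target produces a second pod: ceil(1 × 59/35) = 2.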

I'm not sure about your last question:

59% is the usage of memory from the pod and is described by Sum of Memory Requests / Memory usage. In my case 59% = 765Mi/1310Mi

In your HPA you set it to create another pod when averageUtilization: reaches 35% of memory. It reached 59%, so it created another pod. As the HPA target is memory, HPA is not counting CPU at all. Also please keep in mind that, as this is an average, it needs about ~1 minute to change its values.
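For reference, a minimal HPA manifest with a 35% memory target might look like this (the Deployment name www and the replica bounds are assumptions for illustration; your actual HPA will differ):

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: www-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: www                  # assumed Deployment name
  minReplicas: 1
  maxReplicas: 5               # assumed upper bound
  metrics:
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 35 # scale out above 35% average memory
```

Note there is no CPU metric here, which is why CPU usage never triggers scaling in this setup.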

For a better understanding of how HPA works, please try this walkthrough.

If this was not helpful, please clarify what exactly you are asking.