Kubernetes autoscale on memory

That looks correct, but I'm taking a wild guess because you didn't share the output of kubectl top pods. It could be that your deployment is not scaling on memory utilization because CPU utilization is hitting its target first.
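To check the actual resource usage of your pods, run the command below (this assumes the cluster's metrics pipeline, e.g. metrics-server, is installed, and the -n flag is only needed if the pods aren't in the default namespace):

$ kubectl top pods -n <your-namespace>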

If you look at the docs, the HPA evaluates every metric independently, and the metric that proposes the largest scale wins:

Kubernetes 1.6 adds support for scaling based on multiple metrics. You can use the autoscaling/v2beta2 API version to specify multiple metrics for the Horizontal Pod Autoscaler to scale on. Then, the Horizontal Pod Autoscaler controller will evaluate each metric, and propose a new scale based on that metric. The largest of the proposed scales will be used as the new scale.
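Concretely, each metric yields a proposal via the formula the HPA controller uses, desiredReplicas = ceil(currentReplicas × currentMetricValue / targetMetricValue). With illustrative numbers: at 2 replicas, average memory usage of 1400M against a 700M target proposes ceil(2 × 1400 / 700) = 4 replicas, while CPU at 90% against a 60% target proposes ceil(2 × 90 / 60) = 3, so the HPA scales to 4. The "losing" metric still looks like it never triggered anything.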

You could also try an absolute Value target (targetAverageValue) for your memory metric to troubleshoot:

  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 60
  - type: Resource
    resource:
      name: memory
      targetAverageValue: 700M
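For reference, here is a minimal sketch of a complete HPA manifest carrying both metrics. The name my-app and the replica bounds are placeholders for your own deployment, and note that in autoscaling/v2beta2 the targets are nested under a target: block (the snippet above uses the older v2beta1 field names):

  apiVersion: autoscaling/v2beta2
  kind: HorizontalPodAutoscaler
  metadata:
    name: my-app            # placeholder name
  spec:
    scaleTargetRef:
      apiVersion: apps/v1
      kind: Deployment
      name: my-app          # the deployment to scale
    minReplicas: 2          # example bounds
    maxReplicas: 10
    metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60
    - type: Resource
      resource:
        name: memory
        target:
          type: AverageValue
          averageValue: 700M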

A good way to see the current metrics is to get the full output of the HPA, which includes its status:

$ kubectl get hpa <hpa-name> -o=yaml
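Alternatively, kubectl describe hpa prints each metric's current value next to its target, along with recent scaling events, which makes it easy to spot which metric is actually driving the scale:

$ kubectl describe hpa <hpa-name>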