Why is the Prometheus pod stuck in Pending after installing it with Helm in a Kubernetes cluster on a Rancher server?

kubernetes


I had the same issue. I found two ways to solve it:

  • Edit values.yaml and set persistentVolume.enabled=false; the pods then fall back to an emptyDir volume instead of claiming persistent storage. This applies to both the Prometheus server and Alertmanager (see the sketch after this list).

  • If you can't change values.yaml, you will have to create the PV before deploying the chart so that the pod can bind to the volume; otherwise it will stay in the Pending state forever.
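
A minimal sketch of the first option, assuming the stable/prometheus chart layout where persistence is toggled per component under persistentVolume (the exact keys can differ between chart versions):

    # values.yaml fragment: disable persistent storage so the pods
    # fall back to emptyDir volumes
    server:
      persistentVolume:
        enabled: false
    alertmanager:
      persistentVolume:
        enabled: false

Then install with helm install stable/prometheus -f values.yaml. Keep in mind that emptyDir storage is ephemeral: metrics and Alertmanager silences are lost whenever the pod is rescheduled.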


PVs are cluster-scoped while PVCs are namespaced. If your application runs in one namespace and the PVC lives in another, that can be the issue; if so, use RBAC to grant the proper permissions, or put the app and the PVC in the same namespace.
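
You can confirm this from the command line (the pod name and namespace below are placeholders; use whatever kubectl get pods reports for your release):

    kubectl get pvc --all-namespaces    # lists each PVC, its namespace, and whether it is Bound or Pending
    kubectl describe pod prometheus-server-0 -n monitoring    # the Events section shows why scheduling is blocked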

Also, can you make sure the StorageClass the PV is being created from is the default StorageClass of the cluster?
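
The default class carries the storageclass.kubernetes.io/is-default-class annotation, and kubectl marks it in its listing:

    kubectl get storageclass    # the default class is shown with "(default)" next to its name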


I found that I was missing a storage class and storage volumes, and fixed a similar problem on my cluster by first creating a storage class:

    kubectl apply -f storageclass.yaml

storageclass.yaml:

    {
      "kind": "StorageClass",
      "apiVersion": "storage.k8s.io/v1",
      "metadata": {
        "name": "local-storage",
        "annotations": {
          "storageclass.kubernetes.io/is-default-class": "true"
        }
      },
      "provisioner": "kubernetes.io/no-provisioner",
      "reclaimPolicy": "Delete"
    }
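
Because the provisioner is kubernetes.io/no-provisioner, this class does no dynamic provisioning: any PV that uses it has to be created by hand, which is why the volume created further below is needed.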

and then using the storage class when installing Prometheus with Helm:

    helm install stable/prometheus --set server.persistentVolume.storageClass=local-storage
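
(On Helm 3 a release name is also required, e.g. helm install prometheus stable/prometheus --set ...; the deprecated stable charts have since moved to the prometheus-community repository.)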

I was also forced to create a volume for Prometheus to bind to:

    kubectl apply -f prometheusVolume.yaml

prometheusVolume.yaml:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: prometheus-volume
    spec:
      storageClassName: local-storage
      capacity:
        storage: 2Gi # size of the volume
      accessModes:
        - ReadWriteOnce # type of access
      hostPath:
        path: "/mnt/data" # host location
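
After installing the chart you can watch the claim bind (standard kubectl):

    kubectl get pv      # prometheus-volume should report STATUS Bound once the chart's PVC claims it
    kubectl get pods    # the server pod should then move from Pending to Running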

You could use other storage classes; I found there are a lot to choose from, but there might be other steps involved to get them working.