
Kubectl rollout restart for statefulset


You did not provide the whole scenario. It might depend on the Readiness Probe or the Update Strategy.

A StatefulSet restarts its Pods in reverse ordinal order, from n-1 down to 0. Details can be found here.
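As background, kubectl rollout restart does not delete the Pods itself; it patches the Pod template with a restartedAt annotation, and the StatefulSet controller then recreates the Pods according to the configured update strategy. The resulting change looks roughly like this (the timestamp value is only illustrative):

    # Rough sketch of what `kubectl rollout restart statefulset/<name>` adds
    # to the Pod template (timestamp value is illustrative)
    spec:
      template:
        metadata:
          annotations:
            kubectl.kubernetes.io/restartedAt: "2021-01-01T00:00:00Z"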

Reason 1

The StatefulSet documentation describes four update strategy variants:

  • On Delete
  • Rolling Updates
  • Partitions
  • Forced Rollback

In the Partitions section you can find this information:

If a partition is specified, all Pods with an ordinal that is greater than or equal to the partition will be updated when the StatefulSet’s .spec.template is updated. All Pods with an ordinal that is less than the partition will not be updated, and, even if they are deleted, they will be recreated at the previous version. If a StatefulSet’s .spec.updateStrategy.rollingUpdate.partition is greater than its .spec.replicas, updates to its .spec.template will not be propagated to its Pods. In most cases you will not need to use a partition, but they are useful if you want to stage an update, roll out a canary, or perform a phased roll out.

So if somewhere in your StatefulSet you have set updateStrategy.rollingUpdate.partition: 1, a restart will only recreate the Pods with ordinal 1 or higher.

Example of partition: 3

    NAME    READY   STATUS    RESTARTS   AGE
    web-0   1/1     Running   0          30m
    web-1   1/1     Running   0          30m
    web-2   1/1     Running   0          31m
    web-3   1/1     Running   0          2m45s
    web-4   1/1     Running   0          3m
    web-5   1/1     Running   0          3m13s
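The partition above would be set in the StatefulSet spec roughly like this (a minimal sketch matching the web-0..web-5 Pods above; the image is an assumption):

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: web
    spec:
      serviceName: web
      replicas: 6
      selector:
        matchLabels:
          app: web
      updateStrategy:
        type: RollingUpdate
        rollingUpdate:
          partition: 3             # only web-3, web-4 and web-5 are updated on restart
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: nginx:1.21      # image is an assumption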

Reason 2

Configuration of Readiness probe.

If the values of initialDelaySeconds and periodSeconds are high, it can take a while before the next Pod is restarted. Details about those parameters can be found here.

In the example below, the probe waits 10 seconds after the container starts before the first check, and then checks readiness every 2 seconds. Depending on those values, this might be the cause of the behavior you are seeing.

    readinessProbe:
      failureThreshold: 3
      httpGet:
        path: /
        port: 80
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 2
      successThreshold: 1
      timeoutSeconds: 1
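Note that with the default RollingUpdate strategy the controller waits for each Pod to become Running and Ready before recreating the next one, so with initialDelaySeconds: 10 each Pod adds at least roughly 10 seconds to the rollout. For reference, a sketch of where the probe sits in the StatefulSet's Pod template (container name, image and port are assumptions):

    # Fragment of .spec.template.spec (name, image and port are assumptions)
    containers:
    - name: web
      image: nginx:1.21
      ports:
      - containerPort: 80
      readinessProbe:
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 10   # first check ~10s after the container starts
        periodSeconds: 2          # then checked every 2s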

Reason 3

I saw that you have 2 containers in each pod.

    NAME                  READY   STATUS    RESTARTS   AGE
    alertmanager-main-0   2/2     Running   0          21h
    alertmanager-main-1   2/2     Running   0          20s

As described in the docs:

Running - The Pod has been bound to a node, and all of the Containers have been created. At least one Container is still running, or is in the process of starting or restarting.

It would be good to check whether everything is OK with both containers (readinessProbe/livenessProbe, restarts, etc.).


You would need to delete it. StatefulSet Pods are removed following their ordinal index, with the highest ordinal first.

Also, you do not need to restart a Pod to re-read an updated ConfigMap. This happens automatically (after some period of time).
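Note that this automatic refresh applies to ConfigMaps mounted as volumes (files mounted via subPath are an exception, and ConfigMaps consumed as environment variables are only read at container start). A minimal sketch of such a mount, with hypothetical names:

    # Fragment of the Pod template; container, volume and ConfigMap names are assumptions
    spec:
      containers:
      - name: app
        image: nginx:1.21
        volumeMounts:
        - name: config
          mountPath: /etc/app
      volumes:
      - name: config
        configMap:
          name: app-config        # updates to this ConfigMap are synced by the kubelet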