
On kubernetes helm how to replace a pod with new config values


We have found that using --recreate-pods will immediately terminate all running pods of that deployment, meaning some downtime for your service. In other words, there will be no rolling update of your pods.

The issue to address this in Helm is still open: https://github.com/kubernetes/helm/issues/1702

Instead, Helm suggests adding a checksum of your configuration files to the Deployment as a pod-template annotation. Whenever the configuration changes, the checksum changes with it, so the pod template looks 'new' and the Deployment rolls out updated pods correctly.

The sha256sum function can be used to ensure a deployment's annotation section is updated if another file changes:

kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
[...]

From the docs here: https://helm.sh/docs/charts_tips_and_tricks/#automatically-roll-deployments-when-configmaps-or-secrets-change
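For illustration, a minimal templates/configmap.yaml that the include path above could point at might look like the following sketch; the ConfigMap name, the app.properties key, and the logLevel value are placeholders, not part of the original answer. Any edit to this file changes its sha256sum, which changes the pod-template annotation and makes the Deployment roll.

apiVersion: v1
kind: ConfigMap
metadata:
  # Placeholder name; derive it however your chart already names resources
  name: {{ .Release.Name }}-config
data:
  app.properties: |
    log.level={{ .Values.logLevel | default "info" }}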


If you need a rolling update instead of immediately terminating pods, add

date: "{{ .Release.Time.Seconds }}"

under spec/template/metadata/labels.

Every release then changes the pod template, which triggers a rolling update as long as RollingUpdate is set as spec/strategy/type.
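As a sketch, the relevant part of the Deployment template could then look like this; only the fields discussed here are shown, and note that .Release.Time is a Helm 2 object that was removed in Helm 3.

kind: Deployment
spec:
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        # New value on every release, so the pod template always changes
        date: "{{ .Release.Time.Seconds }}"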

In case you just changed a ConfigMap or Secret, have a look at https://helm.sh/docs/developing_charts/#automatically-roll-deployments-when-configmaps-or-secrets-change


You can run

helm upgrade --recreate-pods

to do this.
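For example, assuming a release named my-release installed from a chart in ./my-chart (both names are placeholders), the command would be the one below. Note that --recreate-pods is a Helm 2 flag; it was removed in Helm 3, where the checksum-annotation approach above is the way to go.

helm upgrade --recreate-pods my-release ./my-chart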