Retain a Persistent Volume and use the PV for a new Helm install
This issue is still not resolved by Helm itself.
A 'hack' to deal with it is described here:
https://groups.google.com/forum/#!topic/kubernetes-sig-apps/sLL2pCJ5Ab8
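Independent of that thread, keep in mind that the backing disk only survives deleting the release if the PV's reclaim policy is `Retain` rather than `Delete`. A minimal sketch (the PV name is a placeholder, not from the question):

```sh
# Switch the reclaim policy of the existing PV to Retain so the backing disk
# is kept when the release (and its PVC) is deleted.
kubectl patch pv <your-pv-name> \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```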
I found one workaround: I created a PVC independently of the Helm chart and simply reference it in my deployment.yaml file. If an existing claim is configured, the template uses it; otherwise it creates a new claim.
```yaml
{{- if .Values.persistence.enabled }}
  {{- if .Values.persistence.existingClaim }}
      persistentVolumeClaim:
        claimName: {{ .Values.persistence.existingClaim }}
  {{- else }}
      persistentVolumeClaim:
        claimName: {{ (include "mongodb.fullname" .) }}
  {{- end }}
{{- end }}
```
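With that template in place, the chart's values only need to point at the pre-created claim. A minimal values.yaml sketch, assuming the persistence keys used in the template above (the claim name is just an example):

```yaml
persistence:
  enabled: true
  # Name of the PVC created outside the chart; the template above will
  # mount this claim instead of creating a new one.
  existingClaim: my-existing-mongodb-pvc
```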
The existing PV will not be able to be bound to the new PVC. However, the disk that your PV (pvc-fc29a491-499a-11e9-a426-42010a800ff9) references can be reused: create a new PV that points at that disk and pre-bind it to your new PVC via claimRef. The configuration of your new PV will depend slightly on which cloud provider or bare-metal host you are using; I followed this approach to come to the example shown below, which uses a Google Cloud GCE persistent disk. The order matters here: make sure you create the PV (that references your existing persistent disk) before creating your PVC.
```yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: myPV
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: myPdDiskName
    fsType: ext4
  storageClassName: standard
  claimRef:
    name: myPvcName
    namespace: myNameSpace
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myPvcName
  namespace: myNameSpace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
```
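To tie it together, a sketch of applying the two manifests in order and checking that the pre-bind worked (the file names are placeholders; the PV/PVC names come from the example above):

```sh
# Create the PV that points at the existing GCE disk first,
# then the PVC it is pre-bound to via claimRef.
kubectl apply -f pv.yaml
kubectl apply -f pvc.yaml

# The PVC should show STATUS=Bound against the PV "myPV".
kubectl get pvc myPvcName -n myNameSpace
```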