Handling PersistentVolumeClaim in DaemonSet
If you use a persistentVolumeClaim in your DaemonSet definition, and that claim is satisfied by a PV of type hostPath, your daemon pods will read and write to the local path defined by hostPath on each node. This lets you keep each node's storage separate while managing only a single PVC.
This might not directly apply to your situation, but I hope it helps anyone searching for something like a "volumeClaimTemplate for DaemonSet" in the future.
Using the same example as cookiedough (thank you!):
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: x
  namespace: x
  labels:
    k8s-app: x
spec:
  selector:
    matchLabels:
      name: x
  template:
    metadata:
      labels:
        name: x
    spec:
      ...
      containers:
      - name: x
        ...
        volumeMounts:
        - name: volume
          mountPath: /var/log
      volumes:
      - name: volume
        persistentVolumeClaim:
          claimName: my-pvc
```
And that PVC is bound to a PV (note that there is only one PVC and one PV!). For the binding to succeed, the PV's storageClassName has to match the PVC's and its capacity has to cover the PVC's request:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  creationTimestamp: null
  labels:
    type: local
  name: mem
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Gi  # must cover the PVC's 10Gi request, or the claim never binds
  hostPath:
    path: /tmp/mem
    type: Directory
  storageClassName: standard
status: {}
```
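One caveat: with hostPath type: Directory, the /tmp/mem directory must already exist on every node, or the pod on that node will fail to start. A minimal variant of the same PV, assuming you would rather have kubelet create the directory for you, swaps the type for DirectoryOrCreate:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mem
  labels:
    type: local
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Gi
  hostPath:
    path: /tmp/mem
    type: DirectoryOrCreate  # kubelet creates /tmp/mem on any node where it is missing
  storageClassName: standard
```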
Your daemon pods will actually use /tmp/mem on each node. (There is at most one daemon pod on each node, so no two pods ever contend for the same directory.)
For completeness, attaching a PVC to a DaemonSet pod is no different from doing it with any other type of pod: create the PVC and mount it as a volume on the pod.
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-pvc
  namespace: x
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```
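Since binding is one-to-one, you can also pin the claim to that exact PV instead of relying on the matcher. A sketch of the same PVC, assuming the PV shown earlier (named mem, class standard) is the one you want:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-pvc
  namespace: x
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: standard  # must match the PV's storageClassName
  volumeName: mem             # bind to that exact PV rather than any matching one
  resources:
    requests:
      storage: 10Gi           # must not exceed the PV's capacity
```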
The DaemonSet manifest is then exactly the one shown at the top of this section, mounting my-pvc as a volume at /var/log.
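One last hedge, not part of the original answers: the per-node separation above works only because hostPath resolves locally on each node. If you instead want every daemon pod writing to one shared backend, a ReadWriteOnce network volume cannot be mounted read-write by more than one node, so you would need a ReadWriteMany-capable PV. A sketch using NFS, where the PV name, server address, and export path are all hypothetical:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-logs  # hypothetical PV name
spec:
  accessModes:
  - ReadWriteMany    # daemon pods on every node can mount it read-write
  capacity:
    storage: 10Gi
  nfs:
    server: 10.0.0.5      # hypothetical NFS server
    path: /exports/logs   # hypothetical export path
```

The PVC would then request ReadWriteMany as well, and all daemon pods would share the same directory on the NFS export.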