
Shared directory for a Kubernetes Deployment between its replicas


First of all, you need to decide what type of PersistentVolume to use. Here are several options for an on-premises cluster:

  • HostPath - a directory on a specific Node's local filesystem. Therefore, if the first Pod is scheduled on Node1 and the second on Node2, the two Pods will see different storage. To resolve this problem, you can use one of the network-backed options below, or pin all replicas to a single Node (see the sketch right after this list). Example of a HostPath PersistentVolume:

    kind: PersistentVolume
    apiVersion: v1
    metadata:
      name: example-pv
      labels:
        type: local
    spec:
      storageClassName: manual
      capacity:
        storage: 3Gi
      accessModes:
        - ReadWriteOnce
      hostPath:
        path: "/mnt/data"
  • NFS - a PersistentVolume of this type uses the Network File System. NFS is a distributed file system protocol that allows you to mount remote directories on your servers, so every Pod sees the same data no matter which Node it runs on. You need to set up an NFS server before using NFS in Kubernetes; here is an example guide, How To Set Up an NFS Mount on Ubuntu. Example in Kubernetes (see the note on access modes after this list):

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: example-pv
    spec:
      capacity:
        storage: 3Gi
      volumeMode: Filesystem
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Recycle
      storageClassName: slow
      mountOptions:
        - hard
        - nfsvers=4.1
      nfs:
        path: /tmp
        server: 172.17.0.2
  • GlusterFS - a scalable, distributed file system that aggregates disk storage resources from multiple servers into a single global namespace. As with NFS, you need to install GlusterFS before using it in Kubernetes; here is the link with installation instructions, and one more with a sample. Example in Kubernetes:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: example-pv
      annotations:
        pv.beta.kubernetes.io/gid: "590"
    spec:
      capacity:
        storage: 3Gi
      accessModes:
        - ReadWriteMany
      glusterfs:
        endpoints: glusterfs-cluster
        path: myVol1
        readOnly: false
      persistentVolumeReclaimPolicy: Retain
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: glusterfs-cluster
    spec:
      ports:
      - port: 1
    ---
    apiVersion: v1
    kind: Endpoints
    metadata:
      name: glusterfs-cluster
    subsets:
      - addresses:
          - ip: 192.168.122.221
        ports:
          - port: 1
      - addresses:
          - ip: 192.168.122.222
        ports:
          - port: 1
      - addresses:
          - ip: 192.168.122.223
        ports:
          - port: 1
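
As mentioned in the HostPath item, one way to keep using a HostPath volume with several replicas (a workaround added here, not part of the original examples) is to pin every replica to the same Node, so they all resolve the hostPath to the same directory. A minimal sketch; the Deployment name, image, and the node label value node1 are hypothetical:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hostpath-shared-demo            # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hostpath-shared-demo
  template:
    metadata:
      labels:
        app: hostpath-shared-demo
    spec:
      nodeSelector:
        kubernetes.io/hostname: node1   # hypothetical Node name, see `kubectl get nodes`
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: example-pv-claim   # the claim defined later in this answer
      containers:
      - name: demo
        image: busybox:1.36
        command: ['sh', '-c', 'sleep 3600']
        volumeMounts:
        - name: data
          mountPath: /data

The obvious trade-off is that you lose node-level fault tolerance: if that Node goes down, no replica can be scheduled anywhere else.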

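A note on access modes, referenced from the NFS item above: the sample PVs use ReadWriteOnce, which only guarantees that a single Node can mount the volume read-write. If the Deployment's replicas may land on different Nodes, the PersistentVolume (and its claim) should use ReadWriteMany, which NFS and GlusterFS support and HostPath does not. A minimal sketch that reuses the NFS example above with only the access mode changed:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteMany    # mountable read-write by many Nodes at once
  storageClassName: slow
  nfs:
    path: /tmp
    server: 172.17.0.2 # NFS server address from the example above
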
After creating a PersistentVolume, you need to create a PersistentVolumeClaim. A PersistentVolumeClaim is a resource used by Pods to request volumes from the storage. After you create the PersistentVolumeClaim, the Kubernetes control plane looks for a PersistentVolume that satisfies the claim's requirements, matching on storageClassName, access modes, and requested capacity. Example (this claim matches the HostPath PV above through storageClassName: manual):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
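
Since binding keys on the storage class, a claim for the NFS PV from the list above (storageClassName: slow) would differ only in that field; a sketch assuming the names from the NFS example:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example-pv-claim
spec:
  storageClassName: slow   # matches the NFS PersistentVolume's class
  accessModes:
    - ReadWriteOnce        # must be a mode the PV actually offers
  resources:
    requests:
      storage: 3Gi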

The last step is to configure a Pod to use the PersistentVolumeClaim; here it is a Deployment, so all replicas mount the same volume. Example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-tomcat
  labels:
    app: test-tomcat
spec:
  selector:
    matchLabels:
      app: test-tomcat
  replicas: 3
  template:
    metadata:
      name: test-tomcat
      labels:
        app: test-tomcat
    spec:
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: example-pv-claim   # name of the claim should be the same as defined before
      containers:
      - name: tomcat
        image: tomcat:9-alpine
        imagePullPolicy: Always
        command: ['bin/catalina.sh', 'jpda', 'run']
        volumeMounts:
        - name: data
          mountPath: /app/data
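
All three replicas now mount the same volume at /app/data. To sanity-check that the directory is really shared, a throwaway Pod can mount the same claim and write a file, which should then show up under /app/data in every tomcat replica. This is a sketch, not part of the original manifests: the Pod name and busybox image are illustrative, and with the ReadWriteOnce HostPath example it only works if the Pod is scheduled on the same Node:

apiVersion: v1
kind: Pod
metadata:
  name: pvc-smoke-test              # hypothetical name
spec:
  restartPolicy: Never
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: example-pv-claim   # the same claim the Deployment uses
  containers:
  - name: writer
    image: busybox:1.36
    command: ['sh', '-c', 'echo hello > /data/shared-check.txt']
    volumeMounts:
    - name: data
      mountPath: /data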