Facing an issue with attaching an EFS volume to Kubernetes pods
AWS EFS uses the NFS volume plugin, and as per the Kubernetes Storage Classes documentation, the NFS volume plugin does not come with an internal provisioner the way EBS does.
So the steps are:
- Create an external provisioner for the NFS volume plugin.
- Create a storage class.
- Create a volume claim.
- Use the volume claim in your Deployment (an example is shown after the manifests below).
In the ConfigMap section, change file.system.id: and aws.region: to match the details of the EFS file system you created.
In the Deployment section, change server: to the DNS endpoint of the EFS file system you created.
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: efs-provisioner
data:
  file.system.id: yourEFSsystemid
  aws.region: regionyourEFSisin
  provisioner.name: example.com/aws-efs
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: efs-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: efs-provisioner
    spec:
      containers:
        - name: efs-provisioner
          image: quay.io/external_storage/efs-provisioner:latest
          env:
            - name: FILE_SYSTEM_ID
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner
                  key: file.system.id
            - name: AWS_REGION
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner
                  key: aws.region
            - name: PROVISIONER_NAME
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner
                  key: provisioner.name
          volumeMounts:
            - name: pv-volume
              mountPath: /persistentvolumes
      volumes:
        - name: pv-volume
          nfs:
            server: yourEFSsystemID.efs.yourEFSregion.amazonaws.com
            path: /
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-efs
provisioner: example.com/aws-efs
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs
  annotations:
    volume.beta.kubernetes.io/storage-class: "aws-efs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
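Once these are applied, the efs claim can be consumed like any other PVC. A minimal sketch of that last step (the pod name, image, and mount path here are illustrative, not part of the provisioner example):

apiVersion: v1
kind: Pod
metadata:
  name: efs-test
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        # Mount the EFS-backed volume into the container
        - name: efs-volume
          mountPath: /data
  volumes:
    # Reference the PVC created by the manifests above
    - name: efs-volume
      persistentVolumeClaim:
        claimName: efs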
For more explanation and details, see https://github.com/kubernetes-incubator/external-storage/tree/master/aws/efs
The problem for me was that I was specifying a path in my PV other than /, and the directory on the NFS server referenced by that path did not yet exist. I had to manually create that directory first.
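If you hit the same issue, the missing directory can be created by mounting the file system from any EC2 instance in the same VPC. A sketch, where the file system DNS name and the PV path are placeholders for your own values:

# Mount the EFS file system somewhere temporary
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1 fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs
# Create the directory that your PV's path: points at
sudo mkdir -p /mnt/efs/path/used/in/pv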
The issue was that I had two EC2 instances running, but I had mounted the EFS volume on only one of them, and kubectl kept scheduling pods onto the instance without the mounted volume. After mounting the same volume on both instances and using the PVC and PV shown below, it is working fine.
EC2 mounting: AWS EFS mounting with EC2
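The key point is that the same file system must be mounted on every instance that can run the pods. A sketch of the mount step, run on both instances (the DNS name is the same placeholder used in PV.yml below); the fstab line follows the format in the AWS EFS documentation so the mount survives reboots:

# On BOTH instances: mount the shared EFS file system
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1 efs_public_dns.amazonaws.com:/ /mnt/efs
# Optional: persist across reboots via /etc/fstab
# efs_public_dns.amazonaws.com:/ /mnt/efs nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,_netdev 0 0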
PV.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: efs_public_dns.amazonaws.com
    path: "/"
PVC.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
replicaset.yml
----- only volume section -----
volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: efs
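For the claim to be usable inside the pod, the container section of the same ReplicaSet also needs a matching volumeMounts entry. A sketch (the container name, image, and mount path are illustrative):

containers:
  - name: app
    image: nginx
    volumeMounts:
      # Must match the volume name declared in the volumes section above
      - name: test-volume
        mountPath: /data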