Unable to mount NFS on Kubernetes Pod


I think you should check the following things to verify whether NFS is mounted successfully:

  1. Run this command on the node where you want to mount:

    $showmount -e nfs-server-ip

Like in my case:

    $ showmount -e 172.16.10.161
    Export list for 172.16.10.161:
    /opt/share *

  2. Use the $df -hT command to see whether NFS is mounted or not. In my case it gives this output:

    172.16.10.161:/opt/share nfs4 91G 32G 55G 37% /opt/share

  3. If it is not mounted, mount it with the following command (see the /etc/fstab sketch after this list if you want the mount to persist across reboots):

    $sudo mount -t nfs 172.16.10.161:/opt/share /opt/share

  4. If the above commands show an error, check whether the firewall is allowing NFS or not:

    $sudo ufw status

  5. If not, allow it using the command:

    $sudo ufw allow from nfs-server-ip to any port nfs
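
If the manual mount from step 3 works and you want the share to come back automatically after a node reboot, one option (an assumption on my side, not part of the original setup) is an /etc/fstab entry using the same server and path as above:

    # /etc/fstab entry for the NFS export (example values from my setup)
    172.16.10.161:/opt/share  /opt/share  nfs4  defaults,_netdev  0  0

After editing /etc/fstab you can apply it with $sudo mount -a.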

I made the same setup and I don't face any issues. My k8s cluster for Fabric is running successfully. The HF k8s YAML files can be found in my GitHub repo. There I have deployed a consortium of banks on Hyperledger Fabric, which is a dynamic multi-host blockchain network; that means you can add orgs and peers, join peers, create channels, and install and instantiate chaincode on the go in an existing running blockchain network.


By default in minikube you should have a default StorageClass:

Each StorageClass contains the fields provisioner, parameters, and reclaimPolicy, which are used when a PersistentVolume belonging to the class needs to be dynamically provisioned.

For example, NFS doesn't provide an internal provisioner, but an external provisioner can be used. There are also cases when 3rd party storage vendors provide their own external provisioner.
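
As a rough illustration only (the provisioner name below is the placeholder used in the Kubernetes documentation, not something running in your cluster), a StorageClass backed by an external NFS provisioner could look like:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: example-nfs                    # hypothetical name
    provisioner: example.com/external-nfs  # must match the external provisioner you deploy
    parameters:
      server: nfs-server.example.com       # placeholder NFS server
      path: /share                         # placeholder exported path
      readOnly: "false"
    reclaimPolicy: Retain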

Change the default StorageClass
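
For reference, a StorageClass is marked (or unmarked) as default via the storageclass.kubernetes.io/is-default-class annotation, for example:

    # stop treating minikube's standard class as the default
    kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
    # mark another class (hypothetical name) as the default instead
    kubectl patch storageclass my-nfs-class -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'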

In your example this property can lead to problems. In order to list enabled addons in minikube, please use:

minikube addons list 

To list all StorageClasses in your cluster use:

    kubectl get sc
    NAME                 PROVISIONER
    standard (default)   k8s.io/minikube-hostpath

Please note that at most one StorageClass can be marked as default. If two or more of them are marked as default, a PersistentVolumeClaim without storageClassName explicitly specified cannot be created.

In your example the most probable scenario is that you already have a default StorageClass. Applying those resources caused: new PV creation (without a StorageClass), new PVC creation (with a reference to the existing default StorageClass). In this situation there is no reference between your custom PV and PVC, so no binding happens. As an example please take a look:

    kubectl get pv,pvc,sc
    NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM             STORAGECLASS   REASON   AGE
    persistentvolume/nfs                                        3Gi        RWX            Retain           Available                                             50m
    persistentvolume/pvc-8aeb802f-cd95-4933-9224-eb467aaa9871   1Gi        RWX            Delete           Bound       default/pvc-nfs   standard                50m

    NAME                            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    persistentvolumeclaim/pvc-nfs   Bound    pvc-8aeb802f-cd95-4933-9224-eb467aaa9871   1Gi        RWX            standard       50m

    NAME                                             PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
    storageclass.storage.k8s.io/standard (default)   k8s.io/minikube-hostpath   Delete          Immediate           false                  103m

This example will not work due to:

  • a new persistentvolume/nfs has been created (without a reference to any PVC)
  • a new persistentvolume/pvc-8aeb802f-cd95-4933-9224-eb467aaa9871 has been created using the default StorageClass. In the Claim column we can see that this PV was created by dynamic provisioning using the default StorageClass, with a reference to the default/pvc-nfs claim (persistentvolumeclaim/pvc-nfs); you can verify this as shown below.
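
To confirm which StorageClass a PVC was provisioned from (as mentioned in the second bullet), you can check it directly; the claim name below is the one from the example output:

    kubectl get pvc pvc-nfs -o jsonpath='{.spec.storageClassName}'
    kubectl describe pvc pvc-nfs    # the Events section shows the dynamic provisioning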

Solution 1.

According to the information from the comments:

Also I am able to connect to it within my minikube and also my actual ubuntu system.

So you are able to mount this NFS share from inside the minikube host. If you mounted the NFS share on your minikube node, please try this example with a hostPath volume used directly from your pod:

    apiVersion: v1
    kind: Pod
    metadata:
      name: test-shell
      namespace: default
    spec:
      volumes:
      - name: pv
        hostPath:
          path: /path/shares # path to nfs mount point on minikube node
      containers:
      - name: shell
        image: ubuntu
        command: ["/bin/bash", "-c", "sleep 1000 "]
        volumeMounts:
        - name: pv
          mountPath: /data
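
If the share is not mounted on the minikube node yet, a rough sketch of doing it from inside the node (this assumes the NFS client utilities are available in your minikube VM; the server address and paths are placeholders):

    minikube ssh
    sudo mkdir -p /path/shares                                  # mount point used by the pod above
    sudo mount -t nfs <nfs-server-ip>:<exported-path> /path/shares
    exit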

Solution 2.

If you are using PV/PVC approach:

    kind: PersistentVolume
    apiVersion: v1
    metadata:
      name: persistent-volume
    spec:
      capacity:
        storage: 1Gi
      accessModes:
        - ReadWriteOnce
      storageClassName: "" # Empty string must be explicitly set otherwise default StorageClass will be set / or custom storageClassName name
      nfs:
        path: "/nfsroot"
        server: "3.128.203.245"
        readOnly: false
      claimRef:
        name: persistent-volume-claim
        namespace: default
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: persistent-volume-claim
      namespace: default
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      storageClassName: "" # Empty string must be explicitly set otherwise default StorageClass will be set / or custom storageClassName name
      volumeName: persistent-volume
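
For completeness, a minimal sketch of a pod consuming this claim (the pod name and mount path are arbitrary placeholders):

    apiVersion: v1
    kind: Pod
    metadata:
      name: nfs-test-pod              # hypothetical name
      namespace: default
    spec:
      containers:
      - name: shell
        image: ubuntu
        command: ["/bin/bash", "-c", "sleep 1000"]
        volumeMounts:
        - name: nfs-vol
          mountPath: /data
      volumes:
      - name: nfs-vol
        persistentVolumeClaim:
          claimName: persistent-volume-claim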

Note:

If you are not referencing any provisioner associated with your StorageClass: helper programs relating to the volume type may be required for consumption of a PersistentVolume within a cluster. In this example, the PersistentVolume is of type NFS and the helper program /sbin/mount.nfs is required to support the mounting of NFS filesystems.
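
On the node side this usually means installing the NFS client package (an assumption about your node OS, adjust to your distribution):

    # Debian/Ubuntu
    sudo apt-get install -y nfs-common
    # RHEL/CentOS
    sudo yum install -y nfs-utils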

Please keep in mind that when you create a PVC, the Kubernetes persistent volume controller tries to bind the PVC to a proper PV. During this process different factors are taken into account, such as: storageClassName (default/custom), accessModes, claimRef and volumeName. In this case you can use:

    PersistentVolume.spec.claimRef.name: persistent-volume-claim
    PersistentVolumeClaim.spec.volumeName: persistent-volume

Note:

The control plane can bind PersistentVolumeClaims to matching PersistentVolumes in the cluster. However, if you want a PVC to bind to a specific PV, you need to pre-bind them.

By specifying a PersistentVolume in a PersistentVolumeClaim, you declare a binding between that specific PV and PVC. If the PersistentVolume exists and has not reserved PersistentVolumeClaims through its claimRef field, then the PersistentVolume and PersistentVolumeClaim will be bound.

The binding happens regardless of some volume matching criteria, including node affinity. The control plane still checks that storage class, access modes, and requested storage size are valid.

Once the PV/PVC have been created, or in case of any problem with PV/PVC binding, please use the following commands to check the current state:

    kubectl get pv,pvc,sc
    kubectl describe pv
    kubectl describe pvc
    kubectl describe pod
    kubectl get events