
CephFS Unable to attach or mount volumes: unmounted volumes=[image-store]


In the pasted YAML for your StorageClass, you have:

reclaimPolicy: Deletea

Was that a paste issue? Regardless, this is likely what is causing your problem.

I just had this exact problem with some of my Ceph RBD volumes, and the cause was that I was using a StorageClass with

reclaimPolicy: Delete

However, the ceph-csi driver was not configured to support it (and I don't think it actually supports it either).

Using a StorageClass with

reclaimPolicy: Retain

fixed the issue.
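For reference, a minimal StorageClass manifest with Retain looks something like the sketch below. The provisioner and parameters are taken from the standard Rook CephFS examples and are assumptions here; your clusterID, filesystem name, pool, and secret names may differ.

```yaml
# Sketch of a CephFS StorageClass using reclaimPolicy: Retain.
# Names and parameters follow the default Rook examples -- adjust to your cluster.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
  clusterID: rook-ceph
  fsName: myfs
  pool: myfs-data0
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Retain
```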

To check this on your cluster, run the following:

$ kubectl get sc rook-cephfs -o yaml

And look for the line that starts with reclaimPolicy:

Then, look at the CSIDriver object your StorageClass is using. In your case it is rook-ceph.cephfs.csi.ceph.com:

$ kubectl get csidriver rook-ceph.cephfs.csi.ceph.com -o yaml

And look for the entries under volumeLifecycleModes:

apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
  creationTimestamp: "2020-11-16T22:18:55Z"
  name: rook-ceph.cephfs.csi.ceph.com
  resourceVersion: "29863971"
  selfLink: /apis/storage.k8s.io/v1beta1/csidrivers/rook-ceph.cephfs.csi.ceph.com
  uid: a9651d30-935d-4a7d-a7c9-53d5bc90c28c
spec:
  attachRequired: true
  podInfoOnMount: false
  volumeLifecycleModes:
  - Persistent

If the only entry under volumeLifecycleModes is Persistent, then your driver is not configured to support reclaimPolicy: Delete.

If instead you see

volumeLifecycleModes:
- Persistent
- Ephemeral

Then your driver should support reclaimPolicy: Delete.
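One caveat when applying the fix: reclaimPolicy is immutable on an existing StorageClass, so to switch to Retain you have to delete the StorageClass and recreate it (this does not affect already-bound PVCs). Existing PVs keep the policy they were created with; the patch command below is the documented way to change one in place. The file name sc.yaml is hypothetical, and substitute your actual PV name.

```shell
# Recreate the StorageClass with reclaimPolicy: Retain
# (sc.yaml is a hypothetical file containing the edited manifest)
kubectl delete sc rook-cephfs
kubectl apply -f sc.yaml

# Optionally switch an already-provisioned PV to Retain in place
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```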