Kubernetes Persistent Volume Claim mounted with wrong gid

After some experiments, finally, I can provide an answer.

There are several ways to run processes in a Container from specific UID and GID:

  1. The runAsUser field in securityContext in a Pod definition specifies the user ID for the first process run in each Container in the Pod.

  2. The fsGroup field in securityContext in a Pod specifies the group ID associated with all Containers in the Pod. This group ID is also applied to volumes mounted into the Pod and to any files created on those volumes.

  3. When a Pod consumes a PersistentVolume that has a pv.beta.kubernetes.io/gid annotation, the annotated GID is applied to all Containers in the Pod in the same way that GIDs specified in the Pod’s security context are.

Note that every GID, whether it originates from a PersistentVolume annotation or from the Pod's specification, is applied to the first process run in each Container.
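As an illustration of options 1 and 2, here is a minimal Pod sketch (the name, image, and command are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo   # hypothetical name
spec:
  securityContext:
    runAsUser: 1000   # first process in each Container runs as UID 1000
    fsGroup: 1000     # mounted volumes and files created on them get GID 1000
  containers:
    - name: app         # placeholder
      image: busybox    # placeholder
      command: ["sh", "-c", "id && sleep 3600"]
```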

Also, there are several ways to set up mount options for PersistentVolumes. A PersistentVolume is a piece of storage in the cluster that has been provisioned by an administrator; it can also be provisioned dynamically using a StorageClass. Therefore, you can specify mount options in a PersistentVolume when you create it manually, or you can specify them in a StorageClass, and every PersistentVolume requested from that class by a PersistentVolumeClaim will have these options.

It is better to use the mountOptions attribute rather than the volume.beta.kubernetes.io/mount-options annotation, and the storageClassName attribute instead of the volume.beta.kubernetes.io/storage-class annotation. These annotations were used before the attributes existed; they still work today, but they will become fully deprecated in a future Kubernetes release. Here is an example:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: with-permissions
provisioner: <your-provider>
parameters:
  <option-for-your-provider>
reclaimPolicy: Retain
mountOptions: # these options
  - uid=1000
  - gid=1000
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "2Gi"
  storageClassName: "with-permissions" # these options

Note that mount options are not validated; if one is invalid, the mount will simply fail. And you can use the uid=1000, gid=1000 mount options for file systems like FAT or NTFS, but not for ext4, for example.
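For a manually created PersistentVolume, the same options go under spec.mountOptions. A sketch, assuming an Azure File (CIFS-backed) volume, whose driver accepts uid/gid options; the PV name, share name, and secret name are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-with-permissions   # hypothetical name
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  mountOptions:   # passed verbatim to mount; an invalid option fails at mount time
    - uid=1000
    - gid=1000
  azureFile:
    secretName: azure-secret  # placeholder
    shareName: myshare        # placeholder
    readOnly: false
```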

Referring to your configuration:

  1. In your PVC yaml, volume.beta.kubernetes.io/mount-options: "uid=1000,gid=1000" is not working, because it is an option for a StorageClass or PV, not for a PVC.

  2. You specified both storageClassName: "default" and volume.beta.kubernetes.io/storage-class: default in your PVC yaml, but they do the same thing. Also, the default StorageClass has no mount options by default.

  3. In your PVC yaml, the pv.beta.kubernetes.io/gid: "1000" annotation does the same as the securityContext.fsGroup: 1000 option in the Deployment definition, so the former is unnecessary.

Try creating a StorageClass with the required mount options (uid=1000, gid=1000) and use a PVC to request a PV from it, as in the example above. After that, use a Deployment definition with a securityContext to set up access to the mounted PVC. But make sure that the mount options you use are available for your file system.
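A sketch of such a Deployment, assuming the PVC named test from the example above (the Deployment name, labels, image, and mount path are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment   # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      securityContext:
        runAsUser: 1000
        fsGroup: 1000   # applied to the mounted volume and files created on it
      containers:
        - name: app        # placeholder
          image: busybox   # placeholder
          command: ["sh", "-c", "sleep 3600"]
          volumeMounts:
            - name: data
              mountPath: /data   # placeholder
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: test   # the PVC from the example above
```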


You can use an initContainer to set the UID/GID permissions for the volume mount path.
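A minimal sketch of that approach (image, mount path, and IDs are placeholders): the initContainer runs as root and chowns the volume before the main container starts.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: chown-demo   # hypothetical name
spec:
  initContainers:
    - name: fix-permissions
      image: busybox   # placeholder
      securityContext:
        runAsUser: 0   # run as root so chown on the volume is permitted
      command: ["sh", "-c", "chown -R 1000:1000 /data"]
      volumeMounts:
        - name: data
          mountPath: /data   # placeholder
  containers:
    - name: app
      image: busybox   # placeholder
      command: ["sh", "-c", "sleep 3600"]
      securityContext:
        runAsUser: 1000
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test   # placeholder PVC name
```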

The UID/GID that you see by default is due to root squash being enabled on NFS.

Steps: https://console.bluemix.net/docs/containers/cs_troubleshoot_storage.html#nonroot