The server must be started by the user that owns the data directory.
Using your setup, and ensuring the NFS mount is owned by 999:999, it worked just fine. You're also missing an 's' in your volume name: postgredb-registry-persistent-storage should be postgresdb-registry-persistent-storage.
And with your subPath: "pgdata", do you need to change $PGDATA? I didn't include the subPath in my test.
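As a rough sketch (untested against your exact setup): with subPath, the "pgdata" subdirectory of the volume is mounted directly at the mountPath, so $PGDATA can usually stay at its default. The alternative, without subPath, is to point PGDATA at a subdirectory instead:

```yaml
# With subPath, the volume's "pgdata" subdirectory appears at the mountPath:
volumeMounts:
  - mountPath: /var/lib/postgresql/data
    name: postgresdb-registry-persistent-storage
    subPath: pgdata
```

```yaml
# Without subPath, set PGDATA to a subdirectory of the mount instead:
env:
  - name: PGDATA
    value: /var/lib/postgresql/data/pgdata
```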
```shell
$ sudo mount 172.29.0.218:/test/nfs ./nfs
$ sudo su -c "ls -al ./nfs" postgres
total 8
drwx------  2 postgres postgres 4096 Jul 25 14:44 .
drwxrwxr-x  3 rei      rei      4096 Jul 25 14:44 ..

$ kubectl apply -f nfspv.yaml
persistentvolume/postgres-registry-pv-volume created
persistentvolumeclaim/postgres-registry-pv-claim created

$ kubectl apply -f postgres.yaml
deployment.extensions/postgres-registry created

$ sudo su -c "ls -al ./nfs" postgres
total 124
drwx------ 19 postgres postgres 4096 Jul 25 14:46 .
drwxrwxr-x  3 rei      rei      4096 Jul 25 14:44 ..
drwx------  3 postgres postgres 4096 Jul 25 14:46 base
drwx------  2 postgres postgres 4096 Jul 25 14:46 global
drwx------  2 postgres postgres 4096 Jul 25 14:46 pg_commit_ts
. . .
```
I noticed that using nfs: directly in the persistent volume took significantly longer to initialize the database, whereas using hostPath: pointing at the mounted NFS volume behaved normally.
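For comparison, the hostPath variant would look roughly like this (a sketch; the path is a placeholder for wherever the NFS share is mounted on the host):

```yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-registry-pv-volume
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    # host directory where the NFS share is already mounted (adjust to your setup)
    path: /path/to/mounted/nfs
```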
So after a few minutes:
```shell
$ kubectl logs postgres-registry-675869694-9fp52 | tail -n 3
2019-07-25 21:50:57.181 UTC [30] LOG:  database system is ready to accept connections
 done
server started

$ kubectl exec -it postgres-registry-675869694-9fp52 psql
psql (11.4 (Debian 11.4-1.pgdg90+1))
Type "help" for help.

postgres=#
```
Checking the uid/gid:

```shell
$ kubectl exec -it postgres-registry-675869694-9fp52 bash
postgres@postgres-registry-675869694-9fp52:/$ whoami && id -u && id -g
postgres
999
999
```
nfspv.yaml:

```yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-registry-pv-volume
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 172.29.0.218
    path: /test/nfs
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-registry-pv-claim
  labels:
    app: postgres-registry
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
```
postgres.yaml:

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres-registry
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres-registry
    spec:
      securityContext:
        runAsUser: 999
        supplementalGroups: [999, 1000]
        fsGroup: 999
      containers:
        - name: postgres-registry
          image: postgres:latest
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: postgresdb
            - name: POSTGRES_USER
              value: postgres
            - name: POSTGRES_PASSWORD
              value: Sekret
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgresdb-registry-persistent-storage
      volumes:
        - name: postgresdb-registry-persistent-storage
          persistentVolumeClaim:
            claimName: postgres-registry-pv-claim
```
I cannot explain why those two IDs are different, but as a workaround I would try overriding the postgres image's entrypoint with:
```yaml
command: ["/bin/bash", "-c"]
args: ["chown -R 999:999 /var/lib/postgresql/data && ./docker-entrypoint.sh postgres"]
```
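An alternative to overriding the entrypoint (a sketch, not tested against this exact setup) is to fix ownership in an initContainer, which runs as root before the postgres container starts:

```yaml
initContainers:
  - name: fix-permissions
    image: busybox
    # chown the data directory to the uid/gid postgres runs as
    command: ["sh", "-c", "chown -R 999:999 /var/lib/postgresql/data"]
    volumeMounts:
      - mountPath: /var/lib/postgresql/data
        name: postgresdb-registry-persistent-storage
```

This keeps the main container's entrypoint untouched, so image upgrades that change the entrypoint won't break the workaround.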
This type of error is quite common when you link an NTFS directory into your Docker container. NTFS directories don't support ext3-style file and directory access control. The only way to make it work is to link a directory from an ext3 drive into your container.
I got a bit desperate when I was playing around with Apache/PHP containers and linking the www folder. Once the linked files resided on an ext3 filesystem, the problem disappeared.
I published a short Docker tutorial on YouTube; it may help in understanding this problem: https://www.youtube.com/watch?v=eS9O05TTFjM