How to reference a local volume in Kind (kubernetes in docker)
When you create your kind cluster, you can specify host directories to be mounted on a virtual node. If you do that, you can then configure volumes with hostPath storage, and they will refer to the mount paths on the node.
So you would create a kind config file:
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /home/bill/work/foo
    containerPath: /foo
and then run
kind create cluster --config kind-config.yaml
to create the cluster.
In your Kubernetes YAML file, you need to mount that containerPath
as a "host path" on the node. A pod spec might contain in part:
volumes:
- name: foo
  hostPath:
    path: /foo            # matches kind containerPath:
containers:
- name: foo
  volumeMounts:
  - name: foo
    mountPath: /data      # in the container filesystem
Note that this setup is extremely specific to kind. Host paths aren't reliable storage in general: you can't control which node a pod gets scheduled on, and both pods and nodes can get deleted in real-world clusters. In some hosted setups (AWS EKS, Google GKE) you may not be able to control the host content at all.
You might revisit your application design to minimize the need for "files" as first-class objects. Rather than "update the volume" consider deploying a new Docker image with updated content; rather than "copy files out" consider an HTTP service you can expose through an ingress controller.
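As a sketch of the "deploy a new image" approach, a minimal Dockerfile might bake the content into the image at build time. The base image and paths here are illustrative assumptions, not part of the setup above:

```dockerfile
# Hypothetical example: bake static content into the image
# instead of mounting it from the host.
FROM nginx:1.14.2

# Copy the site content from the build context into the
# directory nginx serves by default.
COPY www/ /usr/share/nginx/html/
```

Updating the content then means rebuilding and redeploying the image, which works the same way on any cluster.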
I would like to add that, to minimise the kind-specific configuration, you should use a PV / PVC. This way the configuration on a real cluster will only differ in the definition of the PV.
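To illustrate, on a real cluster only the PersistentVolume definition would change, while the PVC and the workloads that reference it stay the same. A sketch of such a PV, where the NFS backend, server address, and export path are placeholder assumptions:

```yaml
# Hypothetical PV for a real cluster: same name, size and
# access mode as the kind version, but backed by NFS
# instead of a node-local hostPath.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-www
spec:
  storageClassName: standard
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 2Gi
  nfs:
    server: nfs.example.com   # placeholder address
    path: /exports/www        # placeholder export path
```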
So if you configure extraMounts on your Kind cluster:
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /home/bill/work/www
    containerPath: /www
Then on that cluster create PV and PVC:
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-www
spec:
  storageClassName: standard
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 2Gi
  hostPath:
    path: /www/
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-www
spec:
  volumeName: pv-www
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
After that you can use it in a deployment like this:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
      - name: www
        persistentVolumeClaim:
          claimName: pvc-www
      containers:
      - name: nginx
        image: nginx:1.14.2
        volumeMounts:
        - name: www
          mountPath: /var/www
As a result, your local /home/bill/work/www will be mounted to /var/www inside the containers.
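To sanity-check this from the Kubernetes side, assuming the PV, PVC and Deployment above have been applied and kubectl is pointed at the kind cluster (the file names here are assumptions), you could run something like:

```shell
# Apply the PV/PVC and the Deployment (file names are assumptions)
kubectl apply -f pv-pvc.yaml -f deployment.yaml

# Wait for the pods to come up
kubectl rollout status deploy/nginx-deployment

# List the content of the mounted volume inside one of the pods;
# it should match /home/bill/work/www on your host
kubectl exec deploy/nginx-deployment -- ls /var/www
```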
From the example above
...
  extraMounts:
  - hostPath: /home/bill/work/www
    containerPath: /www
...
So the path on your host (laptop) is /home/bill/work/www, and the path on the Kubernetes node is /www.
You are running kind, and since its nodes are just Docker containers, you can inspect them directly. Do a
docker ps -a
This will show you the kind containers, each of which is a Kubernetes node. So you can check a node by taking a CONTAINER_ID from the docker ps -a output above and running
docker exec -it CONTAINER_ID /bin/bash
So now you have a shell running on that node. Check whether the node has mounted your host filesystem properly.
Just check with
ls /www
on the node. You should see the content of /home/bill/work/www
So what you have achieved is that this part of the node filesystem is persisted by the host (laptop). You can destroy the cluster and recreate it with the same kind-config file; the node will remount the directory and no information is lost.
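That destroy-and-recreate cycle can be sketched as follows, assuming the config file is named kind-config.yaml as above and the default cluster name is used:

```shell
# Destroy the cluster; the data under /home/bill/work/www
# lives on the host and survives
kind delete cluster

# Recreate it with the same config; the extraMounts entry
# remounts the same host directory on the new node
kind create cluster --config kind-config.yaml
```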
So with this working setup you can create a persistent volume (PV) and claim it with a persistent volume claim (PVC) as described above.
Hope this helps.