
How to explain "mount storage in a seamless manner" in terms of storage orchestration


As noted here, all those resources would be declared/mounted in the same way:

Persisting Data with Volumes

When a Pod is deleted or a container restarts, any and all data in the container’s filesystem is also deleted.

To persist data beyond the pod, use Volumes.

There are two additions needed to add volumes to Pods:

  • spec.volumes
    This array defines all of the volumes that may be accessed by containers in the Pod manifest. Note that not all containers are required to mount all volumes defined in the Pod.
  • volumeMounts
    This array defines the volumes that are mounted into a particular container, and the path where each volume should be mounted. Note that two different containers in a Pod can mount the same volume at different mount paths.

So, first, in spec.volumes, we define which volumes may be used by the containers in the Pod.
Then, in volumeMounts, we actually mount them into specific containers.

Example (from the same article):

apiVersion: v1
kind: Pod
metadata:
  name: kuard
spec:
  volumes:
    - name: "kuard-data"
      hostPath:
        path: "/var/lib/kuard"
  containers:
    - image: gcr.io/kuar-demo/kuard-amd64:1
      name: kuard
      volumeMounts:
        - mountPath: "/data"
          name: "kuard-data"
      ports:
        - containerPort: 8080
          name: http
          protocol: TCP

Here, we define kuard-data as the volume and then mount it into the kuard container at /data.

There are various types of volumes:

  • emptyDir
    • Such a volume is scoped to the Pod’s lifespan, but it can be shared between containers (in the book’s example, this forms the basis for communication between the Git-sync and web-serving containers; see the emptyDir sketch further below). It survives container restarts, but its contents are lost once the Pod is deleted.
  • hostPath
    • this mounts arbitrary locations on the worker node into the container
    • this is what the example above uses
    • This can be useful when the Pod needs direct access to the node’s block storage, for example. But it shouldn’t be used to store ordinary data, since not all hosts have the same underlying directory structure.
  • network storage
    • if you want the data to stay with the Pod even when it is restarted or rescheduled to another node, use one of the several network-based storage options
    • Kubernetes includes support for standard protocols such as NFS and iSCSI, as well as cloud provider–based storage APIs for the major cloud providers (both public and private)

That is:

# Rest of pod definition above here
volumes:
    - name: "kuard-data"
      nfs:
        server: my.nfs.server.local
        path: "/exports"
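
And, for emptyDir, here is a minimal sketch (the Pod name, images, and commands are placeholders, not from the article): two containers in one Pod share the same volume, each mounting it at a different path, similar to the Git-sync/web-serving pattern mentioned above.

apiVersion: v1
kind: Pod
metadata:
  name: shared-data-example
spec:
  volumes:
    # emptyDir starts empty, lives as long as the Pod, and survives container restarts
    - name: "shared-data"
      emptyDir: {}
  containers:
    - name: writer
      image: busybox
      # writes a timestamp into the shared volume every 5 seconds
      command: ["sh", "-c", "while true; do date > /data/now.txt; sleep 5; done"]
      volumeMounts:
        - mountPath: "/data"
          name: "shared-data"
    - name: reader
      image: busybox
      # reads the same file via a different mount path
      command: ["sh", "-c", "while true; do cat /output/now.txt; sleep 5; done"]
      volumeMounts:
        - mountPath: "/output"
          name: "shared-data"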

The "seamless" part refers to a similar concept presented in "Migrating to CSI drivers from in-tree plugins"

The CSI Migration feature, when enabled, directs operations against existing in-tree plugins to corresponding CSI plugins (which are expected to be installed and configured).
The feature implements the necessary translation logic and shims to re-route the operations in a seamless fashion.
As a result, operators do not have to make any configuration changes to existing Storage Classes, PVs or PVCs (referring to in-tree plugins) when transitioning to a CSI driver that supersedes an in-tree plugin.
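
As a concrete sketch (assuming an AWS cluster with the CSIMigrationAWS feature enabled and the EBS CSI driver installed): a pre-existing StorageClass that still names the in-tree provisioner keeps working unchanged, because operations against it are transparently routed to the ebs.csi.aws.com driver.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2
provisioner: kubernetes.io/aws-ebs  # in-tree plugin name, left as-is
parameters:
  type: gp2
# With CSI migration enabled, provision/attach/mount operations for this
# class are translated and handed to the ebs.csi.aws.com CSI driver.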

Before the introduction of CSI and FlexVolume, all volume plugins (like the volume types listed above) were “in-tree”, meaning they were built, linked, compiled, and shipped with the core Kubernetes binaries and extended the core Kubernetes API.
This meant that adding a new storage system to Kubernetes (a volume plugin) required checking code into the core Kubernetes code repository.

So "seamless manner" here involves little or no configuration beside the initial volume declaration.