- A volume is persistent storage that can be created and mounted at a location inside the containers of a pod. This allows data written at that location to persist even if the pod is restarted.
- A volume can be either:
    - Local (on the same node as the pod) - This does not work if the cluster has multiple worker nodes, as each node would store different data in its local volume.
    - Remote (outside the cluster) - This works with multiple worker nodes since the storage is managed outside any single node. The remote storage provider must follow the Container Storage Interface (CSI) standard.
Creating a local volume on the node
The pod definition file below creates a volume at /data on the node and mounts it at /opt in the container. The volume is defined at the pod level and mounted at the container level.
apiVersion: v1
kind: Pod
metadata:
  name: frontend
  labels:
    name: frontend
spec:
  containers:
  - name: httpd
    image: httpd:2.4-alpine
    volumeMounts:
    - name: data-volume
      mountPath: /opt
  volumes:
  - name: data-volume
    hostPath:
      path: /data
      type: Directory
Creating a shared remote volume on EBS
The pod definition file below creates a volume backed by an AWS EBS volume and mounts it at /opt in the container. Since the storage is managed remotely, pods running on different nodes still read the same data.
apiVersion: v1
kind: Pod
metadata:
  name: frontend
  labels:
    name: frontend
spec:
  containers:
  - name: httpd
    image: httpd:2.4-alpine
    volumeMounts:
    - name: data-volume
      mountPath: /opt
  volumes:
  - name: data-volume
    awsElasticBlockStore:
      volumeID: <volume-id>
      fsType: ext4
<aside>
⚠️ Configuring volumes at the pod level (in every pod definition file) does not scale. If we want to switch all the volumes from local to remote storage, we need to update every pod definition file.
</aside>
Persistent Volumes
- Persistent Volumes (PVs) are cluster-wide storage volumes configured by the admin. This allows volumes to be centrally configured and managed. Developers creating applications (pods) can claim these persistent volumes by creating Persistent Volume Claims (PVCs), as shown in the sketch after this list.
- Once the PVCs are created, K8s binds each PVC to an available PV based on the requests in the PVC and the properties set on the volume. A PVC can bind with a single PV only (there is a 1:1 relationship between a PV and a PVC). If multiple PVs match a PVC, we can label a PV and select it using label selectors in the PVC.
- A smaller PVC can bind to a larger PV if all the other criteria in the PVC match the PV’s properties and there is no better option.
- When a PV is created, it is in the Available state until a PVC binds to it, after which it goes into the Bound state. If the PVC is deleted while the reclaim policy is set to Retain, the PV goes into the Released state.
- If no PV matches the given criteria for a PVC, the PVC remains in Pending state until a PV is created that matches its criteria. After this, the PVC will be bound to the newly created PV.
- The properties involved in binding between PV and PVC are: Capacity, Access Modes, Volume Modes, Storage Class and Selector.
- List persistent volumes - k get persistentvolume or k get pv
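As a rough sketch of the PV/PVC workflow above, the manifests below define a PV backed by a hostPath directory, a PVC that requests less storage than the PV offers and selects it by label, and a pod that mounts the claim instead of defining the volume inline. The names (pv-vol1, myclaim), the tier: storage label, the 1Gi/500Mi sizes and the /data path are made-up values for illustration; the access modes of the PV and the PVC must be compatible for binding to happen.
# pv-definition.yaml - cluster-wide volume configured by the admin (names and sizes are hypothetical)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-vol1
  labels:
    tier: storage                 # label used by the PVC selector below
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi                  # capacity offered by this PV
  persistentVolumeReclaimPolicy: Retain   # PV becomes Released when its PVC is deleted
  hostPath:                       # local backing store, for illustration only
    path: /data
---
# pvc-definition.yaml - claim created by the developer
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce               # must be offered by the PV
  selector:
    matchLabels:
      tier: storage               # bind only to PVs carrying this label
  resources:
    requests:
      storage: 500Mi              # a smaller request can still bind to the 1Gi PV
---
# pod-definition.yaml - pod mounts the claim; the storage backend is no longer in the pod spec
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: httpd
    image: httpd:2.4-alpine
    volumeMounts:
    - name: data-volume
      mountPath: /opt
  volumes:
  - name: data-volume
    persistentVolumeClaim:
      claimName: myclaim
Applying all three manifests with kubectl apply and then checking k get pv and k get pvc should show the PV move from Available to Bound once the claim matches it.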