Examples are available in the examples folder:
- Simple Persistent Volume
- Template with Persistent Volume
- AWS-based cluster with data replication and Persistent Volumes
- Minimal and medium Zookeeper installations
A k8s cluster administrator provisions storage to users with `PersistentVolume` objects. Users claim storage with `PersistentVolumeClaim` objects and then mount the claimed `PersistentVolume`s into the filesystem with `volumeMounts` + `volumes`.
A `PersistentVolume` can be created via:
- Manual volume provisioning. The cluster administrator manually makes calls to the storage (cloud) provider to provision new storage volumes, and then creates `PersistentVolume` objects to represent those volumes in Kubernetes. Users claim those `PersistentVolume`s later with `PersistentVolumeClaim`s.
- Dynamic volume provisioning. No need for cluster administrators to pre-provision storage.
Storage resources are dynamically provisioned with the provisioner specified by the `StorageClass` object. `StorageClass`es abstract the underlying storage provider with all its parameters (such as disk type or location). `StorageClass`es use software modules - provisioners - that are specific to the storage platform or cloud provider to give Kubernetes access to the physical media being used.
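The manual provisioning flow described above can be sketched as a `PersistentVolume` object like the following; the NFS server address and export path here are hypothetical placeholders, not part of any real installation:

```yaml
# Sketch of a manually provisioned PersistentVolume backed by NFS.
# The server address and path are hypothetical placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs.example.com
    path: /exports/data
```

A `PersistentVolumeClaim` with matching access mode and a sufficient storage request can then bind to this volume.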
Users refer to a `StorageClass` by name in the `PersistentVolumeClaim` with the `storageClassName` parameter:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
  namespace: mytestns
spec:
  storageClassName: mystorageclass
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
```
The storage class name - `mystorageclass` in this example - is specific to each k8s installation and has to be provided (announced to users) by the cluster administrator.
However, this is not always convenient, and sometimes we'd like to just use any available storage, without having to know which storage classes are available in this k8s installation.
The cluster administrator has the option to specify a default `StorageClass`.
When present, users can create a `PersistentVolumeClaim` without specifying a `storageClassName` at all, simplifying the process and reducing the required knowledge of the underlying storage provider.
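The default is selected via the `storageclass.kubernetes.io/is-default-class` annotation on a `StorageClass` object. As a sketch (the class name and provisioner below are illustrative), an administrator could mark a class as the default like this:

```yaml
# Sketch: mark a StorageClass as the cluster default.
# Name and provisioner are illustrative.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mystorageclass
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
```

Only one `StorageClass` should carry this annotation with value `"true"` at a time.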
Important notes on `PersistentVolumeClaim`:
- if `storageClassName` is not specified, the default `StorageClass` (which must be configured by the cluster administrator) will be used for provisioning
- if `storageClassName` is set to an empty string (''), no `StorageClass` will be used and dynamic provisioning is disabled for this `PersistentVolumeClaim`. Available PVs that do not have any `storageClassName` specified will be considered for binding to the PVC
- if `storageClassName` is set, the matching `StorageClass` will be used
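The second case - opting out of dynamic provisioning - can be sketched as follows; the claim name is illustrative:

```yaml
# Sketch: a claim that disables dynamic provisioning.
# It can only bind to pre-provisioned PVs without a storageClassName.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mystaticpvc
spec:
  storageClassName: ""   # empty string - no StorageClass, no dynamic provisioning
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```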
We can use `kubectl` to check for `StorageClass` objects. Here we use a cluster created with `kops`:

```bash
kubectl get storageclasses.storage.k8s.io
```

```text
NAME            PROVISIONER             AGE
default         kubernetes.io/aws-ebs   1d
gp2 (default)   kubernetes.io/aws-ebs   1d
```
We can see two storage classes available:
- one named `default`
- one named `gp2`, which is the default `StorageClass`

We can take a look inside them with:

```bash
kubectl get storageclasses.storage.k8s.io default -o yaml
kubectl get storageclasses.storage.k8s.io gp2 -o yaml
```

We can see that those `StorageClass`es are, in fact, equal:
```yaml
metadata:
  labels:
    k8s-addon: storage-aws.addons.k8s.io
  name: gp2
parameters:
  type: gp2
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Delete
volumeBindingMode: Immediate
```

```yaml
metadata:
  labels:
    k8s-addon: storage-aws.addons.k8s.io
  name: default
parameters:
  type: gp2
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Delete
volumeBindingMode: Immediate
```
This means we can specify our `PersistentVolumeClaim` object with either:
- no `storageClassName` at all (just omit this field) - in this case `gp2` will be used, because it is the default one, or
- `storageClassName: default` - in this case the `StorageClass` named `default` will be used, providing the same result as `gp2` (which is actually the default `StorageClass`)
Pods use a `PersistentVolumeClaim` as a volume.
The `PersistentVolumeClaim` must exist in the same namespace as the pod using the claim.
The cluster inspects the `PersistentVolumeClaim` to find an appropriate `PersistentVolume` and mounts that `PersistentVolume` into the pod's filesystem via `volumeMounts`.

A Pod - via `volumeMounts: name` - refers to `volumes: name` in the Pod or Pod Template as:
```yaml
containers:
  - name: myclickhouse
    image: clickhouse
    volumeMounts:
      - mountPath: "/var/lib/clickhouse"
        name: myvolume
```
This "volume" definition can either be the final object description, as:

```yaml
volumes:
  - name: myvolume
    emptyDir: {}
```

```yaml
volumes:
  - name: myvolume
    hostPath:
      path: /local/path/
```
or can refer to a `PersistentVolumeClaim`, as:

```yaml
volumes:
  - name: myvolume
    persistentVolumeClaim:
      claimName: myclaim
```
where a minimal `PersistentVolumeClaim` can be specified as follows:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  volumes:
    - name: www
      persistentVolumeClaim:
        claimName: myclaim
  containers:
    - name: nginx
      image: k8s.gcr.io/nginx-slim:0.8
      ports:
        - containerPort: 80
          name: web
      volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
```
Pay attention that there is no `storageClassName` specified - meaning this `PersistentVolumeClaim` will claim a `PersistentVolume` of the default `StorageClass`, which must be explicitly configured by the cluster administrator.
More details on `storageClassName`
More details on `PersistentVolumeClaim`

A `StatefulSet` shortcuts the way, jumping from `volumeMounts` directly to `volumeClaimTemplates`, skipping `volumes`.
More details in the StatefulSet description.
`StatefulSet` example:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
    - port: 80
      name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: k8s.gcr.io/nginx-slim:0.8
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi
```
Pay attention to `.spec.template.spec.containers.volumeMounts`:

```yaml
volumeMounts:
  - name: www
    mountPath: /usr/share/nginx/html
```

which refers directly to:

```yaml
volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
```
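For each replica, the `StatefulSet` controller creates one `PersistentVolumeClaim` from this template, named `<template name>-<pod name>` - so for the example above we would expect claims named `www-web-0` and `www-web-1`. This can be checked against a live cluster with:

```shell
# List the claims generated by the StatefulSet (requires access to a cluster)
kubectl get persistentvolumeclaims
```

Note that these claims are not deleted when the `StatefulSet` is scaled down or deleted; they persist so that the data survives pod rescheduling.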