# Working with snapshots
Managed Service for Kubernetes supports snapshots of persistent volumes.

To create a snapshot and then restore it:

1. Prepare a test environment.
2. Create a snapshot.
3. Restore objects from the snapshot.

If you no longer need the resources you created, delete them.
## Getting started
- Create a Managed Service for Kubernetes cluster and a node group in any suitable configuration with Kubernetes version 1.20 or higher.
- Install kubectl and configure it to work with the created cluster.
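Before proceeding, it may help to confirm that kubectl is actually pointed at the new cluster. A minimal sanity check (not part of the original steps, and assuming a working kubeconfig):

```shell
# Show which cluster context kubectl is currently using
kubectl config current-context
# List the nodes of the node group to confirm connectivity
kubectl get nodes
```

If `kubectl get nodes` lists the nodes of your node group in the `Ready` state, the configuration is correct.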
## Prepare a test environment
To test snapshots, you will create a `PersistentVolumeClaim` and a pod that simulates a workload.
- Create a file named `01-pvc.yaml` with the `PersistentVolumeClaim` manifest:

  ```yaml
  ---
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: pvc-dynamic
  spec:
    accessModes:
      - ReadWriteOnce
    storageClassName: nebius-network-ssd
    resources:
      requests:
        storage: 5Gi
  ```
- Create the `PersistentVolumeClaim`:

  ```shell
  kubectl apply -f 01-pvc.yaml
  ```
- Make sure the `PersistentVolumeClaim` has been created and its status is `Pending`:

  ```shell
  kubectl get pvc pvc-dynamic
  ```
- Create a file named `02-pod.yaml` with the `pod-source` pod manifest:

  ```yaml
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-source
  spec:
    containers:
      - name: app
        image: ubuntu
        command: ["/bin/sh"]
        args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
        volumeMounts:
          - name: persistent-storage
            mountPath: /data
    volumes:
      - name: persistent-storage
        persistentVolumeClaim:
          claimName: pvc-dynamic
  ```

  The pod container will write the current date and time to the `/data/out.txt` file.

- Create a pod named `pod-source`:

  ```shell
  kubectl apply -f 02-pod.yaml
  ```
- Make sure the pod status changed to `Running`:

  ```shell
  kubectl get pod pod-source
  ```
- Make sure the date and time are being written to the `/data/out.txt` file. To do this, run the following command on the pod:

  ```shell
  kubectl exec pod-source -- tail /data/out.txt
  ```

  Result:

  ```text
  Thu Feb 3 04:55:21 UTC 2022
  Thu Feb 3 04:55:26 UTC 2022
  ...
  ```
## Create a snapshot
- Create a file named `03-snapshot.yaml` with the snapshot manifest:

  ```yaml
  ---
  apiVersion: snapshot.storage.k8s.io/v1beta1
  kind: VolumeSnapshot
  metadata:
    name: new-snapshot-test
  spec:
    volumeSnapshotClassName: nebius-csi-snapclass
    source:
      persistentVolumeClaimName: pvc-dynamic
  ```
- Create a snapshot:

  ```shell
  kubectl apply -f 03-snapshot.yaml
  ```
- Make sure the snapshot has been created:

  ```shell
  kubectl get volumesnapshots.snapshot.storage.k8s.io
  ```
- Make sure the `VolumeSnapshotContent` object has been created:

  ```shell
  kubectl get volumesnapshotcontents.snapshot.storage.k8s.io
  ```
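Before restoring from the snapshot, you may also want to confirm that it is ready to use. A minimal check, not part of the original steps, that reads the `readyToUse` field of the `VolumeSnapshot` created above:

```shell
# Prints "true" once the snapshot is complete and safe to restore from
kubectl get volumesnapshot new-snapshot-test \
  -o jsonpath='{.status.readyToUse}'
```

If the command prints anything other than `true`, wait and re-run it before moving on to the restore steps.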
## Restore objects from the snapshot

When restoring objects from the snapshot, the following are created:

- A `PersistentVolumeClaim` object named `pvc-restore`.
- A pod named `pod-restore` with entries in the `/data/out.txt` file.

To restore the snapshot:
- Create a file named `04-restore-snapshot.yaml` with a manifest of a new `PersistentVolumeClaim`:

  ```yaml
  ---
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: pvc-restore
  spec:
    storageClassName: nebius-network-ssd
    dataSource:
      name: new-snapshot-test
      kind: VolumeSnapshot
      apiGroup: snapshot.storage.k8s.io
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
  ```

  Tip

  You can change the size of the `PersistentVolumeClaim` being created. To do this, specify the desired size in the `spec.resources.requests.storage` setting.

- Create a new `PersistentVolumeClaim`:

  ```shell
  kubectl apply -f 04-restore-snapshot.yaml
  ```
- Make sure the `PersistentVolumeClaim` has been created and its status is `Pending`:

  ```shell
  kubectl get pvc pvc-restore
  ```
- Create a file named `05-pod-restore.yaml` with a manifest of a new `pod-restore` pod:

  ```yaml
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-restore
  spec:
    containers:
      - name: app-restore
        image: ubuntu
        command: ["/bin/sh"]
        args: ["-c", "while true; do sleep 5; done"]
        volumeMounts:
          - name: persistent-storage-r
            mountPath: /data
    volumes:
      - name: persistent-storage-r
        persistentVolumeClaim:
          claimName: pvc-restore
  ```

  The new pod container will not perform any actions with the `/data/out.txt` file.

- Create a pod named `pod-restore`:

  ```shell
  kubectl apply -f 05-pod-restore.yaml
  ```
- Make sure the pod status changed to `Running`:

  ```shell
  kubectl get pod pod-restore
  ```
- Make sure the new `PersistentVolumeClaim` switched to the `Bound` status:

  ```shell
  kubectl get pvc pvc-restore
  ```
- Make sure the `/data/out.txt` file on the new pod contains the records that the `pod-source` pod container added to the file before the snapshot was created:

  ```shell
  kubectl exec pod-restore -- tail /data/out.txt
  ```

  Result:

  ```text
  Thu Feb 3 04:55:21 UTC 2022
  Thu Feb 3 04:55:26 UTC 2022
  ...
  ```
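Since `pod-restore` never writes to the file, you can additionally verify that the restored volume is a static point-in-time copy. A hypothetical check, not part of the original steps, that reads the file twice and compares the results:

```shell
# Two reads taken a few seconds apart should be identical,
# because nothing writes to the restored volume
kubectl exec pod-restore -- tail /data/out.txt > first-read.txt
sleep 10
kubectl exec pod-restore -- tail /data/out.txt > second-read.txt
diff first-read.txt second-read.txt && echo "No new entries: the restored data is unchanged"
```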
## Delete the resources you created

Delete the resources you no longer need to avoid paying for them:
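The original guide does not list the exact cleanup commands; a minimal sketch that removes the Kubernetes objects created in this guide, assuming the names used above:

```shell
# Delete the pods first, then the claims they use, then the snapshot
kubectl delete pod pod-source pod-restore
kubectl delete pvc pvc-dynamic pvc-restore
kubectl delete volumesnapshot new-snapshot-test
```

If you no longer need the Managed Service for Kubernetes cluster and node group themselves, delete them through the management console or CLI of your provider.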