Integration with Object Storage
The Container Storage Interface (CSI) lets you dynamically provision Object Storage buckets and mount them in cluster pods. You can mount existing buckets or create new ones.
To use Container Storage Interface capabilities, set up a runtime environment and configure the driver as described below.

See also:

- Using Container Storage Interface with a `PersistentVolumeClaim`.
- Examples of creating a `PersistentVolumeClaim`.
Setting up a runtime environment
- Create a service account and add it to the `editors` group.
- Create a static access key for the service account.
Setting up Container Storage Interface
- Create a file named `secret.yaml` and specify the Container Storage Interface access settings in it:

  ```yaml
  ---
  apiVersion: v1
  kind: Secret
  metadata:
    namespace: kube-system
    name: csi-s3-secret
  stringData:
    accessKeyID: <access key ID>
    secretAccessKey: <secret key>
    endpoint: https://storage.ai.nebius.cloud
  ```

  In the `accessKeyID` and `secretAccessKey` fields, specify the previously obtained access key ID and secret key value.
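  Alternatively, you can create the same Secret without an intermediate file, straight from the command line (a sketch; substitute your actual key values for the placeholders):

  ```shell
  # Creates the csi-s3-secret Secret in the kube-system namespace;
  # kubectl base64-encodes the literal values for you.
  kubectl create secret generic csi-s3-secret \
    --namespace=kube-system \
    --from-literal=accessKeyID='<access key ID>' \
    --from-literal=secretAccessKey='<secret key>' \
    --from-literal=endpoint=https://storage.ai.nebius.cloud
  ```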
- Create a file named `storageclass.yaml` with a description of the storage class:

  ```yaml
  ---
  kind: StorageClass
  apiVersion: storage.k8s.io/v1
  metadata:
    name: csi-s3
  provisioner: ru.yandex.s3.csi
  parameters:
    mounter: geesefs
    options: "--memory-limit=1000 --dir-mode=0777 --file-mode=0666"
    bucket: <optional: existing bucket name>
    csi.storage.k8s.io/provisioner-secret-name: csi-s3-secret
    csi.storage.k8s.io/provisioner-secret-namespace: kube-system
    csi.storage.k8s.io/controller-publish-secret-name: csi-s3-secret
    csi.storage.k8s.io/controller-publish-secret-namespace: kube-system
    csi.storage.k8s.io/node-stage-secret-name: csi-s3-secret
    csi.storage.k8s.io/node-stage-secret-namespace: kube-system
    csi.storage.k8s.io/node-publish-secret-name: csi-s3-secret
    csi.storage.k8s.io/node-publish-secret-namespace: kube-system
  ```

  To use an existing bucket, specify its name in the `bucket` parameter. This setting is only relevant for dynamic `PersistentVolumeClaims`.
- Clone the GitHub repository containing the Container Storage Interface driver:

  ```shell
  git clone https://github.com/yandex-cloud/k8s-csi-s3.git
  ```
- Create resources for Container Storage Interface and your storage class:

  ```shell
  kubectl create -f secret.yaml && \
  kubectl create -f k8s-csi-s3/deploy/kubernetes/provisioner.yaml && \
  kubectl create -f k8s-csi-s3/deploy/kubernetes/driver.yaml && \
  kubectl create -f k8s-csi-s3/deploy/kubernetes/csi-s3.yaml && \
  kubectl create -f storageclass.yaml
  ```
After installing the Container Storage Interface driver and configuring your storage class, you can create static and dynamic `PersistentVolumeClaims` to use Object Storage buckets.
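Before creating any volumes, you can optionally check that the driver components started and the storage class is registered (a sketch; the exact pod names depend on the manifests in the cloned repository):

```shell
# The storage class should be listed with provisioner ru.yandex.s3.csi
kubectl get storageclass csi-s3

# The CSI provisioner and per-node driver pods should be Running
kubectl get pods --namespace=kube-system | grep -i csi
```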
Container Storage Interface usage
With Container Storage Interface configured, note the following specifics of creating static and dynamic `PersistentVolumeClaims`.
Dynamic PersistentVolumeClaim
For a dynamic `PersistentVolumeClaim`:

- Specify the name of the desired storage class in the `spec.storageClassName` parameter when creating a `PersistentVolumeClaim`.
- If required, specify a bucket name in the `bucket` parameter when creating a storage class. This affects Container Storage Interface behavior:

  - If you specified a bucket name in the `bucket` parameter when configuring your storage class, Container Storage Interface will create a separate folder in this bucket for each `PersistentVolumeClaim` created.

    Note: this setting can be useful if the cloud enforces strict quotas on the number of Object Storage buckets.

  - If you omitted the bucket name in the `bucket` parameter, Container Storage Interface will create a separate bucket for each `PersistentVolumeClaim` created.

Example of creating a dynamic `PersistentVolumeClaim`.
Static PersistentVolumeClaim
For a static `PersistentVolumeClaim`:

- Leave the `spec.storageClassName` parameter empty when creating a `PersistentVolumeClaim`.
- Specify the name of the desired bucket, or of a folder in the bucket, in the `spec.csi.volumeHandle` parameter when creating a `PersistentVolume`. If there is no such bucket, create it.

  Note: deleting this type of `PersistentVolume` will not automatically delete its associated bucket.

- To update the GeeseFS options used for working with a bucket, specify them in the `spec.csi.volumeAttributes.options` parameter when creating a `PersistentVolume`. For example, the `--uid` option sets the ID of the user that owns all files in the storage. To get a list of GeeseFS options, run the `geesefs -h` command or see the GitHub repository.

  The GeeseFS options specified in the `parameters.options` parameter of a `StorageClass` are ignored for static `PersistentVolumeClaims`. For more information, see the Kubernetes documentation.

Example of creating a static `PersistentVolumeClaim`.
Use cases
Dynamic PersistentVolumeClaim
To use Container Storage Interface with a dynamic `PersistentVolumeClaim`:

- Create a `PersistentVolumeClaim`:

  - Create a file named `pvc-dynamic.yaml` containing a description of your dynamic `PersistentVolumeClaim`:

    ```yaml
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: csi-s3-pvc-dynamic
      namespace: default
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 5Gi
      storageClassName: csi-s3
    ```

    If necessary, change the requested storage size in the `spec.resources.requests.storage` parameter.

  - Create the dynamic `PersistentVolumeClaim`:

    ```shell
    kubectl create -f pvc-dynamic.yaml
    ```

  - Make sure that your `PersistentVolumeClaim` has transitioned to the `Bound` state:

    ```shell
    kubectl get pvc csi-s3-pvc-dynamic
    ```

    Result:

    ```text
    NAME                 STATUS   VOLUME                      CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    csi-s3-pvc-dynamic   Bound    pvc-<dynamic bucket name>   5Gi        RWX            csi-s3         73m
    ```
- Create a pod to test your dynamic `PersistentVolumeClaim`:

  - Create a file named `pod-dynamic.yaml` with the pod description:

    ```yaml
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: csi-s3-test-ubuntu-dynamic
    spec:
      containers:
        - name: csi-s3-test-ubuntu
          image: ubuntu
          command: ["/bin/sh"]
          args: ["-c", "for i in $(seq 1 10); do echo $(date -u) >> /data/s3-dynamic/dynamic-date.txt; sleep 10; done"]
          volumeMounts:
            - mountPath: /data/s3-dynamic
              name: s3-volume
      volumes:
        - name: s3-volume
          persistentVolumeClaim:
            claimName: csi-s3-pvc-dynamic
            readOnly: false
    ```

  - Create the pod to mount the bucket for your dynamic `PersistentVolume`:

    ```shell
    kubectl create -f pod-dynamic.yaml
    ```

  - Make sure the pod status changed to `Running`:

    ```shell
    kubectl get pods
    ```

    While running, the pod executes the `date` command several times and appends its output to the `/data/s3-dynamic/dynamic-date.txt` file. You will find this file in the bucket.

- Make sure that the file is in the bucket:

  - Go to the folder page and select Object Storage.
  - Click the `pvc-<dynamic bucket name>` bucket.
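Instead of the management console, you can also read the file from inside the pod while it is still running (the pod and file names come from `pod-dynamic.yaml` above):

```shell
# Works only while the pod is in the Running state (about 100 seconds)
kubectl exec csi-s3-test-ubuntu-dynamic -- cat /data/s3-dynamic/dynamic-date.txt
```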
Static PersistentVolumeClaim
To use Container Storage Interface with a static `PersistentVolumeClaim`:

- Create a `PersistentVolumeClaim`:

  - Create a file named `pvc-static.yaml` containing a description of your static `PersistentVolumeClaim`:

    ```yaml
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: csi-s3-pvc-static
      namespace: default
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 10Gi
      storageClassName: ""
    ```

    If necessary, change the requested storage size in the `spec.resources.requests.storage` parameter.

  - Create a file named `pv-static.yaml` containing a description of your static `PersistentVolume`:

    ```yaml
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: s3-volume
    spec:
      storageClassName: csi-s3
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteMany
      claimRef:
        namespace: default
        name: csi-s3-pvc-static
      csi:
        driver: ru.yandex.s3.csi
        volumeHandle: <bucket name>/<optional: path to the folder in the bucket>
        controllerPublishSecretRef:
          name: csi-s3-secret
          namespace: kube-system
        nodePublishSecretRef:
          name: csi-s3-secret
          namespace: kube-system
        nodeStageSecretRef:
          name: csi-s3-secret
          namespace: kube-system
        volumeAttributes:
          capacity: 10Gi
          mounter: geesefs
          options: "--memory-limit=1000 --dir-mode=0777 --file-mode=0666 --uid=1001"
    ```

    In this example, the GeeseFS settings differ from those in the `StorageClass`: the `--uid` option is added, setting `1001` as the ID of the user that owns all files in the storage. For more information about setting up GeeseFS for a static `PersistentVolumeClaim`, see above.

  - Create the static `PersistentVolumeClaim`:

    ```shell
    kubectl create -f pvc-static.yaml
    ```

  - Create the static `PersistentVolume`:

    ```shell
    kubectl create -f pv-static.yaml
    ```

  - Make sure that your `PersistentVolumeClaim` has transitioned to the `Bound` state:

    ```shell
    kubectl get pvc csi-s3-pvc-static
    ```

    Result:

    ```text
    NAME                STATUS   VOLUME                    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    csi-s3-pvc-static   Bound    <PersistentVolume name>   10Gi       RWX            csi-s3         73m
    ```
- Create a pod to test your static `PersistentVolumeClaim`:

  - Create a file named `pod-static.yaml` with the pod description:

    ```yaml
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: csi-s3-test-ubuntu-static
    spec:
      containers:
        - name: csi-s3-test-ubuntu
          image: ubuntu
          command: ["/bin/sh"]
          args: ["-c", "for i in $(seq 1 10); do echo $(date -u) >> /data/s3-static/static-date.txt; sleep 10; done"]
          volumeMounts:
            - mountPath: /data/s3-static
              name: s3-volume
      volumes:
        - name: s3-volume
          persistentVolumeClaim:
            claimName: csi-s3-pvc-static
            readOnly: false
    ```

  - Create the pod to mount the bucket for your static `PersistentVolume`:

    ```shell
    kubectl create -f pod-static.yaml
    ```

  - Make sure the pod status changed to `Running`:

    ```shell
    kubectl get pods
    ```

    While running, the pod executes the `date` command several times and appends its output to the `/data/s3-static/static-date.txt` file. You will find this file in the bucket.

- Make sure that the file is in the bucket:

  - Go to the folder page and select Object Storage.
  - Click the `<bucket name>` bucket.
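You can also list the bucket contents from the command line with any S3-compatible client, for example the AWS CLI, assuming it is installed and configured with the same static access key used for the CSI secret:

```shell
# Lists objects in the bucket via the S3-compatible API;
# the file written by the test pod should appear here.
aws s3 ls s3://<bucket name>/ --endpoint-url=https://storage.ai.nebius.cloud
```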