AWS Command Line Interface (AWS CLI)
To work with Object Storage via the AWS CLI, you can use the following set of commands:

- `s3`: additional commands that make it easier to work with a large number of objects.
Before you start
- Create a service account.
- Add the service account to a group to grant it the necessary permissions.
- Create a static access key.
Installing

To install the AWS CLI, follow the installation instructions in the official AWS documentation.
Setup

To configure the AWS CLI, run the `aws configure` command. The command prompts you for the following parameters:

- `AWS Access Key ID`: the ID of the static key created when getting started.
- `AWS Secret Access Key`: the contents of the static key.
- `Default region name`: the `eu-north1` region.

  Note

  To work with Object Storage, always specify `eu-north1` as the region. A different region value may lead to an authorization error.

- Leave the other parameter values unchanged.
Configuration files

The `aws configure` command saves the following data:

- Your static key, to the `.aws/credentials` file in the following format:

  ```
  [default]
  aws_access_key_id = id
  aws_secret_access_key = secretKey
  ```

- The default region, to the `.aws/config` file in the following format:

  ```
  [default]
  region = eu-north1
  ```
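Instead of answering the interactive prompts, you can also write the same values non-interactively with `aws configure set`; this is convenient in scripts. A minimal sketch, where `id` and `secretKey` stand for your actual static key ID and secret:

```shell
# Writes to .aws/credentials and .aws/config, same as the interactive prompts
aws configure set aws_access_key_id id
aws configure set aws_secret_access_key secretKey
aws configure set region eu-north1
```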
Specifics

When using the AWS CLI to work with Object Storage, keep the following in mind:

- The AWS CLI treats Object Storage as a hierarchical file system, so object keys look like file paths.

- When running the `aws` command to work with Object Storage, the `--endpoint-url` parameter is required, because the client is configured to work with the Amazon servers by default. To avoid specifying the parameter manually on each run, create an alias, for example:

  ```shell
  alias nbais3='aws s3 --endpoint-url=https://storage.ai.nebius.cloud'
  ```

  With this alias, the following two commands are equivalent:

  ```shell
  aws s3 --endpoint-url=https://storage.ai.nebius.cloud ls
  nbais3 ls
  ```

  To have the alias created each time the terminal starts, add the `alias` command to the `~/.bashrc` or `~/.zshrc` configuration file, depending on the type of shell you use.

- When using macOS, in some cases you need to run the command:

  ```shell
  export PYTHONPATH=/Library/Python/2.7/site-packages; aws --endpoint-url=https://storage.ai.nebius.cloud s3 ls
  ```
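As an alternative to an alias, recent AWS CLI v2 releases (2.13 and later) support an `endpoint_url` setting in the configuration file, which applies the endpoint to every command automatically. A sketch of the `.aws/config` entry, assuming your installed CLI version supports this setting:

```
[default]
region = eu-north1
endpoint_url = https://storage.ai.nebius.cloud
```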
Example operations

Note

To enable debug output in the console, use the `--debug` option.
Create a bucket

```shell
aws --endpoint-url=https://storage.ai.nebius.cloud s3 mb s3://bucket-name
```

Result:

```
make_bucket: bucket-name
```

Note

When creating a bucket, follow the naming conventions.
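To confirm that the bucket was created, you can list the buckets in your account; a usage sketch (`bucket-name` is a placeholder):

```shell
# With no path argument, `s3 ls` lists all buckets you own
aws --endpoint-url=https://storage.ai.nebius.cloud s3 ls
```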
Uploading objects

You can upload objects using one of the following methods:

- Upload all objects from a local directory:

  ```shell
  aws --endpoint-url=https://storage.ai.nebius.cloud \
    s3 cp --recursive local_files/ s3://bucket-name/path_style_prefix/
  ```

  Result:

  ```
  upload: ./textfile1.log to s3://bucket-name/path_style_prefix/textfile1.log
  upload: ./textfile2.txt to s3://bucket-name/path_style_prefix/textfile2.txt
  upload: ./prefix/textfile3.txt to s3://bucket-name/path_style_prefix/prefix/textfile3.txt
  ```

- Upload only the objects matched by the `--include` filter, skipping the objects matched by the `--exclude` filter:

  ```shell
  aws --endpoint-url=https://storage.ai.nebius.cloud \
    s3 cp --recursive --exclude "*" --include "*.log" \
    local_files/ s3://bucket-name/path_style_prefix/
  ```

  Result:

  ```
  upload: ./textfile1.log to s3://bucket-name/path_style_prefix/textfile1.log
  ```

- Upload objects one by one, running the following command for each object:

  ```shell
  aws --endpoint-url=https://storage.ai.nebius.cloud \
    s3 cp testfile.txt s3://bucket-name/path_style_prefix/textfile.txt
  ```

  Result:

  ```
  upload: ./testfile.txt to s3://bucket-name/path_style_prefix/textfile.txt
  ```
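For repeated uploads of the same directory, the `aws s3 sync` command can be more convenient than `cp --recursive`: it copies only files that are new or have changed since the last run. A usage sketch with the same placeholder names as above:

```shell
# Uploads only new or modified files from local_files/ to the prefix
aws --endpoint-url=https://storage.ai.nebius.cloud \
  s3 sync local_files/ s3://bucket-name/path_style_prefix/
```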
Getting a list of objects

```shell
aws --endpoint-url=https://storage.ai.nebius.cloud \
  s3 ls --recursive s3://bucket-name
```

Result:

```
2022-09-05 17:10:34      10023 other/test1.png
2022-09-05 17:10:34      57898 other/test2.png
2022-09-05 17:10:34     704651 test.png
```
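The `ls` command also accepts the `--human-readable` flag, which prints sizes in units such as KiB and MiB, and `--summarize`, which appends the total object count and total size; a usage sketch:

```shell
aws --endpoint-url=https://storage.ai.nebius.cloud \
  s3 ls --recursive --human-readable --summarize s3://bucket-name
```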
Deleting objects

You can delete objects using one of the following methods:

- Delete all objects with the specified prefix:

  ```shell
  aws --endpoint-url=https://storage.ai.nebius.cloud \
    s3 rm s3://bucket-name/path_style_prefix/ --recursive
  ```

  Result:

  ```
  delete: s3://bucket-name/path_style_prefix/test1.png
  delete: s3://bucket-name/path_style_prefix/subprefix/test2.png
  ```

- Delete only the objects matched by the `--include` filter, skipping the objects matched by the `--exclude` filter:

  ```shell
  aws --endpoint-url=https://storage.ai.nebius.cloud \
    s3 rm s3://bucket-name/path_style_prefix/ --recursive \
    --exclude "*" --include "*.log"
  ```

  Result:

  ```
  delete: s3://bucket-name/path_style_prefix/test1.log
  delete: s3://bucket-name/path_style_prefix/subprefix/test2.log
  ```

- Delete objects one by one, running the following command for each object:

  ```shell
  aws --endpoint-url=https://storage.ai.nebius.cloud \
    s3 rm s3://bucket-name/path_style_prefix/textfile.txt
  ```

  Result:

  ```
  delete: s3://bucket-name/path_style_prefix/textfile.txt
  ```
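Once a bucket is no longer needed, it can be removed with `aws s3 rb`; the `--force` option first deletes any objects remaining in the bucket. A usage sketch (`bucket-name` is a placeholder):

```shell
# Removes the bucket; --force deletes its contents first
aws --endpoint-url=https://storage.ai.nebius.cloud s3 rb --force s3://bucket-name
```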
Retrieving an object

```shell
aws --endpoint-url=https://storage.ai.nebius.cloud \
  s3 cp s3://bucket-name/textfile.txt textfile.txt
```

Result:

```
download: s3://bucket-name/textfile.txt to ./textfile.txt
```
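To download all objects under a prefix instead of a single object, `cp` can be run in the opposite direction with `--recursive`; a usage sketch with the same placeholder names as the upload examples:

```shell
# Downloads every object under the prefix into the local_files/ directory
aws --endpoint-url=https://storage.ai.nebius.cloud \
  s3 cp --recursive s3://bucket-name/path_style_prefix/ local_files/
```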