AI-centric cloud platform ready for intensive workloads

Get NVIDIA H100 from $2.12 per hour*. A training-ready platform with competitive pricing and a wide range of NVIDIA® Tensor Core GPUs: H100, A100, L40S and V100.

Built for large-scale ML workloads

Get the most out of multi-host training on thousands of H100 GPUs connected in a full mesh topology over the latest InfiniBand network, with up to 3.2 Tb/s per host.

Best value for money

Save at least 50% on your GPU compute compared to major public cloud providers***. Save even more with longer reserves and larger GPU volumes.

Onboarding assistance

We guarantee dedicated engineering support to ensure seamless platform adoption. Get your infrastructure optimized and Kubernetes deployed.

Fully managed Kubernetes

Simplify the deployment, scaling and management of ML frameworks on Kubernetes and use Managed Kubernetes for multi-node GPU training.

Marketplace with ML frameworks

Explore our Marketplace with its ML-focused libraries, applications, frameworks and tools to streamline your model training.

Easy to use

Enjoy our platform UX: detailed documentation, resource management in our user-friendly cloud console, CLI or Terraform, and VM access via SSH.

Deploying a knowledge-based chatbot with RAG in production

Join our hands-on webinar to explore the deployment of a knowledge-based chatbot using RAG in a production environment.

May 16, Thursday, 17:00 (GMT+2)

Get a training-ready platform, not just GPUs

Train your LLM or Generative AI model on NVIDIA® H100 Tensor Core GPUs right away

$2.12/h

Price per 1 H100 SXM5 GPU + InfiniBand.
3-year reserve*


Configuration

  • NVIDIA® H100
  • 20 Intel® Sapphire Rapids vCPUs
  • 160 GB RAM

$3.64/h

Price per 1 H100 SXM5 GPU + InfiniBand.
6-month reserve, 8-32 cards


Configuration

  • 16 × NVIDIA® H100
  • 16 × 20 Intel® Sapphire Rapids vCPUs
  • 16 × 160 GB RAM

Estimated cost for 16 GPUs with a 6-month reserve, running 24/7, is $254,900.

$3.76/h

Price per 1 H100 SXM5 GPU + InfiniBand.
3-month reserve, 8-32 cards


Configuration

  • 16 × NVIDIA® H100
  • 16 × 20 Intel® Sapphire Rapids vCPUs
  • 16 × 160 GB RAM

Estimated cost for 16 GPUs with a 3-month reserve, running 24/7, is $131,700.
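The quoted totals follow from straightforward arithmetic: hourly rate per GPU × number of GPUs × hours in the reserve period. Below is a minimal sketch of that calculation, assuming 24/7 usage; the function name and day counts are illustrative, and small differences from the figures above come down to how the days in each reserve period are counted.

```python
# Illustrative 24/7 cost estimate: rate per GPU-hour x GPUs x 24 x days.
# Not an official pricing calculator.

def estimate_reserve_cost(rate_per_gpu_hour: float, gpus: int, days: int) -> float:
    """Total cost of running `gpus` GPUs around the clock for `days` days."""
    return rate_per_gpu_hour * gpus * 24 * days

# 6-month reserve: ~$3.64/h per GPU, 16 GPUs, ~182 days
print(f"${estimate_reserve_cost(3.64, 16, 182):,.0f}")  # ~$254,000

# 3-month reserve: ~$3.76/h per GPU, 16 GPUs, ~91 days
print(f"${estimate_reserve_cost(3.76, 16, 91):,.0f}")   # ~$131,000
```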

$4.85/h

Price per 1 H100 SXM5 GPU + InfiniBand. No commitments.
Pay-as-you-go


Configuration

  • 1 × NVIDIA® H100
  • 1 × 20 Intel® Sapphire Rapids vCPUs
  • 1 × 160 GB RAM

Essential products to boost your model training

Infrastructure & Network

Compute and network resources that scale on demand, forming the basis for your AI training.

Managed databases

Marketplace

Fully integrated with Nebius platform services, Marketplace is your one-stop shop for any third-party solutions and software you might need in your ML/AI projects.

Ready-to-use virtual machine images and Kubernetes applications. Pre-configured environments with the most popular MLOps tools, such as Docker, Argo CD and Apache Airflow™, and ML libraries such as PyTorch, CatBoost, TensorFlow, scikit-learn, Keras, CUDA and more.
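As a quick illustration of what such a pre-configured image gives you, here is a minimal sketch, assuming a Marketplace VM image with PyTorch and CUDA already installed, that confirms the GPUs are visible before launching a training job. Only standard PyTorch calls are used; nothing here is specific to the platform.

```python
# Sanity check on a pre-configured ML image: confirm the GPUs are
# visible to PyTorch before starting a training run.
import torch

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.0f} GB")
else:
    print("No CUDA-capable GPU detected; check the driver and image setup.")
```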

Tested with data-intensive workloads

Powered by NVIDIA, the world’s leading GPU manufacturer

As an NVIDIA preferred cloud service provider, we offer access to the NGC Catalog with GPU-accelerated software that speeds up end-to-end workflows with performance-optimized containers, pre-trained ML models, and industry-specific SDKs that can be deployed in the cloud.

Start training and scaling your ML model today

* The price is valid for a 3-year reserve with 50% prepayment.

** Roadmap 2024

*** Compared to Amazon Web Services and Oracle Cloud Infrastructure.

The provided information and prices do not constitute an offer or invitation to make offers or invitation to buy, sell or otherwise use any services, products and/or resources referred to on this website and may be changed by Nebius at any time. Contact sales to get a personalized offer.

All prices are shown without any applicable taxes, including VAT.