AI-centric cloud platform ready for intensive workloads

Training-ready platform with a wide range of NVIDIA® Tensor Core GPUs, including H100, A100 and V100.

Competitive pricing. Dedicated support.

Built for large-scale ML workloads

Get the most out of multi-host training on thousands of H100 GPUs connected in a full mesh over the latest InfiniBand network, delivering up to 3.2 Tb/s per host.

Best value for money

Save at least 50% on your GPU compute compared to major public cloud providers*. Save even more with reserved capacity and larger GPU volumes.

Onboarding assistance

We guarantee dedicated engineering support to ensure seamless platform adoption. Get your infrastructure optimized and Kubernetes deployed.

Fully managed Kubernetes

Simplify the deployment, scaling and management of ML frameworks on Kubernetes and use Managed Kubernetes for multi-node GPU training.
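For illustration only, here is a minimal sketch of the kind of multi-node training entrypoint you would run on such a cluster, assuming PyTorch with a torchrun-style launcher that sets the usual rendezvous environment variables; the model, hyperparameters and node counts are placeholders, not a Nebius-specific recipe.

```python
# Minimal multi-node DDP sketch (assumed setup; adapt to your cluster).
# Launched with one process per GPU, for example:
#   torchrun --nnodes=2 --nproc_per_node=8 \
#            --rdzv_backend=c10d --rdzv_endpoint=<head-node>:29500 train.py
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # torchrun sets RANK, LOCAL_RANK and WORLD_SIZE for every process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda(local_rank)  # placeholder model
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):  # placeholder training loop
        x = torch.randn(32, 1024, device=local_rank)
        loss = model(x).square().mean()
        optimizer.zero_grad()
        loss.backward()  # gradients are all-reduced across nodes over NCCL/InfiniBand
        optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```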

Marketplace with ML frameworks

Explore our Marketplace with its ML-focused libraries, applications, frameworks and tools to streamline your model training.

Easy to use

Enjoy our platform UX: detailed documentation, resource management in our user-friendly cloud console, CLI or Terraform, and VM access via SSH.

Deploying a knowledge-based chatbot with RAG in production

Join our hands-on webinar to explore the deployment of a knowledge-based chatbot using RAG in a production environment.

May 16, Thursday, 17:00 (GMT+2)

Get a training-ready platform, not just GPUs

Train your LLM or Generative AI model on NVIDIA® H100 Tensor Core GPUs right away

$4.85/h

Price per 1 H100 SXM5 GPU + InfiniBand. No commitments.
Pay-as-you-go


Configuration

  • 1 × NVIDIA® H100
  • 1 × 20 Intel® Sapphire Rapids vCPUs
  • 1 × 160 GB RAM

$3.76/h

Price per 1 H100 SXM5 GPU + InfiniBand.
3-month reserve, 8-32 cards


Configuration

  • 16 × NVIDIA® H100
  • 16 × 20 Intel® Sapphire Rapids vCPUs
  • 16 × 160 GB RAM

Estimated cost for 16 GPUs with 3-month reserve if running 24/7 is $131,700.

$3.64/h

Price per 1 H100 SXM5 GPU + InfiniBand.
6-month reserve, 8-32 cards


Configuration

  • 16 × NVIDIA® H100
  • 16 × 20 Intel® Sapphire Rapids vCPUs
  • 16 × 160 GB RAM

Estimated cost for 16 GPUs with 6-month reserve if running 24/7 is $254,900.
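As a rough sanity check on these estimates, a calculation like the following reproduces their order of magnitude; it assumes about 730 billable hours per month, so exact billing periods and rounding explain the small difference from the quoted figures.

```python
# Back-of-the-envelope cost estimate for a reserved 16-GPU cluster running 24/7.
# Assumes ~730 hours per month; actual billing periods and rounding may differ,
# so the result will not match the quoted estimates exactly.
def estimate_cost(price_per_gpu_hour: float, gpus: int, months: int,
                  hours_per_month: float = 730.0) -> float:
    return price_per_gpu_hour * gpus * months * hours_per_month


print(f"3-month reserve: ${estimate_cost(3.76, 16, 3):,.0f}")  # ~ $131,750
print(f"6-month reserve: ${estimate_cost(3.64, 16, 6):,.0f}")  # ~ $255,091
```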

Personal offer

Get the best offer for a long-term commitment or a large-scale AI project


  • 64+ NVIDIA® H100
  • 6+ month reserve

Essential products to boost your model training

Services

Virtual computing and network resources that scale on demand, forming the basis of your AI training project.

Resources and operations

Manage resources, user roles and permissions with user-friendly services that deliver a secure and efficient cloud environment.

Marketplace

Fully integrated with Nebius platform services, Marketplace is your one-stop shop for any third-party solutions and software you might need in your ML/AI projects.

Ready-to-use virtual machine images and Kubernetes applications. Pre-configured environments with the most popular MLOps tools, such as Docker, Argo CD and Apache Airflow, and ML libraries such as PyTorch, CatBoost, TensorFlow, scikit-learn, Keras, CUDA and more.
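As a quick illustration of how little setup such a pre-configured image needs, the following sketch (assuming the image ships PyTorch built with CUDA support, as listed above) simply verifies that the framework can see the attached GPUs.

```python
# Sanity check on a freshly provisioned GPU VM: confirm PyTorch sees the CUDA devices.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available: ", torch.cuda.is_available())
for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    print(f"GPU {i}: {props.name}, {props.total_memory / 2**30:.0f} GiB")
```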

Tested with data-intensive workloads

Powered by NVIDIA, the world’s leading GPU manufacturer

As an NVIDIA preferred cloud service provider, we offer access to the NGC Catalog with GPU-accelerated software that speeds up end-to-end workflows with performance-optimized containers, pre-trained ML models, and industry-specific SDKs that can be deployed in the cloud.

Start training and scaling your ML model today

* Compared to Amazon Web Services and Oracle Cloud Infrastructure.

** Roadmap 2024

The provided information and prices do not constitute an offer or invitation to make offers or invitation to buy, sell or otherwise use any services, products and/or resources referred to on this website and may be changed by Nebius at any time. Contact sales to get a personalized offer.

All prices are shown without any applicable taxes, including VAT.