In the field of computer vision, selecting the appropriate hardware can be tricky due to the variety of models and their different architectures. Today’s article explores the criteria for selecting the best GPU for CV.
Our main news of the past month is that Nebius AI has become available to everyone! We also participated in the MLOps podcast and published several videos about setting up training, as well as stories about how Nebius AI clients are building their models.
Just recently, we added Kubeflow, an open-source platform dedicated to making deployments of machine learning workflows on Kubernetes simple, portable and scalable.
Model operationalization (ModelOps) is a methodology that aims to streamline the development and deployment of AI applications. Let’s dig deeper into the topic, understand the differences between ModelOps and MLOps, and explore ModelOps use cases.
We’re excited to announce that our platform is now officially open to everybody. Whether you are a company or an individual engineer, access the GPU cloud console straight away and start running your machine learning experiments.
In our field, effective partnerships that harness complementary strengths can drive significant breakthroughs. Such is the case with the collaboration between Nebius AI and Unum, an AI research lab known for developing compact and efficient AI models.
Recraft, recently funded in a round led by Khosla Ventures and former GitHub CEO Nat Friedman, is the first generative AI model built for designers. Featuring 20 billion parameters, the model was trained from scratch on Nebius AI. Here’s how.
Retrieval-augmented generation (RAG) is a technique that enhances language models by combining generative AI with a retrieval component. Let’s examine a quick example of applying RAG in a real-world context.
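To make the idea concrete, here is a minimal sketch of the retrieval-then-augment flow. It scores documents by simple word overlap rather than vector embeddings, and the final language-model call is omitted; the document texts and function names are illustrative placeholders, not part of any real system.

```python
# Minimal RAG sketch: retrieve the most relevant document for a query,
# then prepend it to the prompt sent to a language model.
# A production system would use embeddings and a vector index instead
# of word overlap, and would pass the prompt to an actual LLM.

def retrieve(query: str, documents: list[str]) -> str:
    """Return the document sharing the most words with the query."""
    query_words = set(query.lower().split())
    return max(documents, key=lambda d: len(query_words & set(d.lower().split())))

def build_prompt(query: str, context: str) -> str:
    """Augment the user query with the retrieved context."""
    return f"Context: {context}\nQuestion: {query}\nAnswer:"

docs = [
    "Nebius AI offers GPU clusters for training large models.",
    "Kubeflow simplifies machine learning workflows on Kubernetes.",
]
query = "What does Kubeflow do?"
prompt = build_prompt(query, retrieve(query, docs))
```

The key point is that the model answers from the retrieved context rather than from its parametric memory alone, which is what lets RAG ground responses in up-to-date or domain-specific data.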
March was a busy month for us: we opened access to Managed databases, hosted a webinar on Slurm vs Kubernetes, published new handy guides in our documentation and several ML-focused articles on the blog.
It’s an unexpected day to reflect on what has been going on at our company, right? Still, we thought, why not share a few facts — a blend of the earnest and the slightly amusing.
With this article, we are starting a new category on our blog, one dedicated to AI research. Expect these posts to be highly technical and insightful. The first one covers possible alternatives to the key component of the LLM architecture.
With a keen eye on power usage effectiveness, Nebius AI is excited to be among the first cloud providers adopting NVIDIA® B200 Tensor Core GPUs and offering the advanced, energy-efficient technology to customers.