Updated July 22, 2024

MLflow is a platform for managing workflows and artifacts across the machine learning lifecycle, including tracking experiments, packaging code into reproducible runs, and sharing and deploying models. It has built-in integrations with many popular ML libraries (TensorFlow, PyTorch, XGBoost, etc.), but can be used with any library, algorithm, or deployment tool. MLflow’s components are:

  • MLflow Tracking: An API for logging parameters, code versions, metrics, model environment dependencies, and model artifacts when running your machine learning code (see the sketch after this list).
  • MLflow Models: A model packaging format and suite of tools that let you easily deploy a trained model for batch or real-time inference.
  • MLflow Model Registry: A centralized model store, set of APIs, and UI focused on the approval, quality assurance, and deployment of an MLflow Model.
  • MLflow Projects: A standard format for packaging reusable data science code that can be run with different parameters to train models, visualize data, or perform any other data science task.
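
As a quick illustration of the Tracking component, the minimal Python sketch below logs a parameter, a metric, and a small JSON artifact; the run name and values are arbitrary examples, not part of the product. Without a tracking server configured, MLflow writes to a local ./mlruns directory.

    import mlflow

    # Log a toy run: a parameter, a metric, and a small JSON artifact.
    # With no tracking server configured, results go to the local ./mlruns directory.
    with mlflow.start_run(run_name="example-run"):
        mlflow.log_param("learning_rate", 0.01)
        mlflow.log_metric("accuracy", 0.93)
        mlflow.log_dict({"notes": "toy run for illustration"}, "metadata.json")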

Warning

If you are going to use this product in production, we recommend configuring it according to the MLflow recommendations.

Deployment instructions
  1. Create an SSH key pair.

  2. Click the button in this card to go to VM creation. The image will be automatically selected under Image/boot disk selection.

  3. Under Network settings, enable a public IP address for the VM (Public IP: Auto for a random address or List if you have a reserved static address).

  4. Under Access, paste the public key from the pair into the SSH key field.

  5. Create the VM.

  6. Add MLflow tracking to your code (a minimal connection sketch follows these steps). See more examples in the MLflow documentation.

  7. Run your code.

  8. To access the UI, go to http://<VM ipv4 address>:5000 in your web browser. Credentials are stored in the /root/default_passwords.txt file and printed in the VM serial output.
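
The sketch below illustrates steps 6 through 8 by pointing training code at the tracking server on the VM. It assumes the bundled MLflow server is protected with HTTP basic auth using the credentials from /root/default_passwords.txt; the username, password, experiment name, and logged values are placeholders.

    import os
    import mlflow

    # Placeholders: substitute your VM's public IP address and the credentials
    # from /root/default_passwords.txt (assuming the server uses HTTP basic auth).
    os.environ["MLFLOW_TRACKING_USERNAME"] = "<username from default_passwords.txt>"
    os.environ["MLFLOW_TRACKING_PASSWORD"] = "<password from default_passwords.txt>"
    mlflow.set_tracking_uri("http://<VM ipv4 address>:5000")

    mlflow.set_experiment("demo-experiment")
    with mlflow.start_run():
        mlflow.log_param("batch_size", 32)
        mlflow.log_metric("loss", 0.42)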

Billing type
Free
Type
Virtual Machine
Category
Machine Learning & AI
Training
Inference
Publisher
Nebius
Use cases
  • Recording parameters and metrics from experiments, comparing results and exploring the solution space. Storing the outputs as models.
  • Comparing the performance of different models and selecting the best for deployment. Registering the models and tracking the performance of their production versions (see the sketch after this list).
  • Deploying ML models in diverse serving environments.
  • Storing, annotating, discovering, and managing models in a central repository.
  • Packaging data science code in formats that allow running it with different parameters on any platform and sharing it with others.
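
As a sketch of the registry and deployment use cases above (assuming scikit-learn is installed and the code points at a tracking server whose backend supports the Model Registry, such as the one on this VM), the snippet below trains a toy model, logs it, and registers it under the placeholder name demo-classifier. A registered version can then be served for inference, for example with the mlflow models serve command.

    import mlflow
    import mlflow.sklearn
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression

    # Train a toy model, log it as an MLflow Model, and register it in the
    # Model Registry in one step. "demo-classifier" is a placeholder name.
    X, y = load_iris(return_X_y=True)
    model = LogisticRegression(max_iter=200).fit(X, y)

    with mlflow.start_run():
        mlflow.sklearn.log_model(model, "model", registered_model_name="demo-classifier")
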
Technical support

Nebius AI does not provide technical support for the product. If you have any issues, please refer to the developer’s information resources.

Product IDs
image_id:
arl2a4l2ctk4ms13h59e
family_id:
mlflow
Product composition
Software: Ubuntu
Version: 22.04
Terms
By using this product you agree to the Nebius AI Marketplace Terms of Service, the Apache 2.0 license, and the terms and conditions of the following software: Ubuntu.