MLflow is a platform for managing workflows and artifacts across the machine learning lifecycle, including tracking experiments, packaging code into reproducible runs, and sharing and deploying models. It has built-in integrations with many popular ML libraries (TensorFlow, PyTorch, XGBoost, etc.), but can be used with any library, algorithm, or deployment tool. MLflow’s components are:
- MLflow Tracking: An API for logging parameters, code versions, metrics, model environment dependencies, and model artifacts when running your machine learning code.
- MLflow Models: A model packaging format and suite of tools that let you easily deploy a trained model for batch or real-time inference.
- MLflow Model Registry: A centralized model store, set of APIs, and UI focused on the approval, quality assurance, and deployment of an MLflow Model.
- MLflow Projects: A standard format for packaging reusable data science code that can be run with different parameters to train models, visualize data, or perform any other data science task.
Warning
If you are going to use this product in production, we recommend configuring it according to the MLflow recommendations.
- Click the button in this card to go to VM creation. The image will be selected automatically under Image/boot disk selection.
- Under Network settings, enable a public IP address for the VM (Public IP: Auto for a random address, or List if you have a reserved static address).
- Under Access, paste the public key from the pair into the SSH key field.
- Create the VM.
- Add MLflow tracking to your code. See examples in the MLflow documentation.
- Run your code.
- To access the UI, go to http://<VM ipv4 address>:5000 in your web browser. Credentials are stored in the /root/default_passwords.txt file and printed in the VM serial output.
- Recording parameters and metrics from experiments, comparing results and exploring the solution space. Storing the outputs as models.
- Comparing the performance of different models and selecting the best for deployment. Registering the models and tracking performance of their production versions.
- Deploying ML models in diverse serving environments.
- Storing, annotating, discovering, and managing models in a central repository.
- Packaging data science code in formats that allow running it with different parameters on any platform and sharing it with others.
Nebius AI does not provide technical support for the product. If you have any issues, please refer to the developer’s information resources.