Weights & Biases Launch agent

Updated June 12, 2024


The launch agent is a free application that connects to the Weights & Biases (W&B) API. Weights & Biases itself is a paid product that requires a license. Please contact Weights & Biases directly to purchase a license before using this product.

Weights & Biases is an AI developer platform supporting end-to-end MLOps and LLMOps workflows, used by over 30 foundation model builders and 1,000 companies to productionize machine learning at scale. A W&B license unlocks a toolkit for building models faster by tracking experiments, iterating on datasets, evaluating model performance, reproducing models, and managing ML workflows. It is compatible with any framework, environment, or workflow.

A few lines of code allow teams to save artifacts for further debugging, comparison, and reproduction of models.

The launch agent application connects to the specified W&B queue, runs jobs and sweeps from it, and reports progress and results back to W&B. It is best suited for teams that use W&B but do not want to dive into DevOps.

We recommend using this product for testing purposes, since it requires you to:

  • Have an account and repository on Docker Hub

  • Use open git repositories

You can deploy Weights & Biases launch agent in your Nebius AI Managed Service for Kubernetes clusters using this Marketplace product.

Deployment instructions

Before installing this product:

  1. Create a Weights & Biases account if you have not done so already.

  2. In the Weights & Biases Launch App:

    1. Create a queue with the following parameters:

      • Resource: Kubernetes

      • Configuration:

          restartPolicy: Never
          activeDeadlineSeconds: 3600
          backoffLimit: 1
          ttlSecondsAfterFinished: 60
    2. Create an API key for a service account (recommended) or your account.

  3. In Nebius AI, create a container registry that will store Docker images for your Weights & Biases jobs.

  4. Create an authorized key for the k8s-nodegroups-sa service account and download it as a JSON file. The images in the container registry will be managed by this service account.

  5. Create a Kubernetes cluster and a node group in it.

  6. Install kubectl and configure it to work with the created cluster.
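Step 4 leaves you with an authorized-key JSON file whose contents you will later paste into the application form. Below is a minimal sketch of a sanity check to run before pasting; the required field names (`id`, `service_account_id`, `private_key`) are an assumption about the key file's layout, so verify them against the file you actually downloaded:

```python
import json

# Field names are an assumption about the authorized-key JSON layout;
# check them against your downloaded file.
REQUIRED_FIELDS = {"id", "service_account_id", "private_key"}

def check_authorized_key(raw: str) -> list[str]:
    """Return a list of problems found in the authorized-key JSON; empty if none."""
    try:
        key = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]
    if not isinstance(key, dict):
        return ["top-level value is not a JSON object"]
    missing = REQUIRED_FIELDS - key.keys()
    return [f"missing field: {name}" for name in sorted(missing)]

if __name__ == "__main__":
    # Placeholder values for illustration only.
    sample = '{"id": "key-id", "service_account_id": "sa-id", "private_key": "-----BEGIN PRIVATE KEY-----..."}'
    print(check_authorized_key(sample))  # an empty list means the structure looks OK
```

Catching a truncated or mis-copied key here is cheaper than debugging a failed agent deployment later.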

To install the product:

  1. Click the button in this card to go to the cluster selection form.

  2. Select your cluster and click Continue.

  3. Configure the application:

    • Namespace: Select a namespace or create one. The namespace must be named wandb.

    • Application name: Enter an application name.

    • W&B API key: Paste the API key created in the Weights & Biases Launch App.

    • W&B queue name: Enter the name of the queue created in the Weights & Biases Launch App.

    • Authorized key JSON: Paste the contents of the JSON file that you downloaded when creating an authorized key.

    • Container registry URL: Enter the URL of the created container registry in the following format: <container_registry_ID>

    • Maximum amount of simultaneous jobs: Enter the maximum number of jobs that the agent can execute in parallel.

    • Builder job CPU limit: Enter the maximum number of vCPUs that will be allocated to the container builder.

    • Builder job memory limit: Enter the maximum RAM size (in GB) that will be allocated to the container builder.

    • Size of persistent volume: Enter the size of the persistent volume (in GB) that will be used to store temporary artifacts. At least 100 GB is recommended.

  4. Click Install.

  5. Wait for the application to change its status to Deployed.

  6. To check that the Weights & Biases Launch agent is working, run a job on it:

    1. Create a job. You can fork the Weights & Biases repository on GitHub and use an example from it to create a job with Git.
    2. Add the job to the queue or create a sweep from it. A sweep is a hyperparameter tuning job.
    3. View the information about the job or the sweep to make sure that it is running properly.
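
The queue configuration from the prerequisites maps directly onto fields of the Kubernetes Job spec submitted for each launched job. A sketch of what such a manifest could look like, assuming an illustrative job name and image reference (neither is what the agent actually generates):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: wandb-launch-job-example   # illustrative; the agent names its own jobs
  namespace: wandb
spec:
  backoffLimit: 1                  # retry a failed job at most once
  activeDeadlineSeconds: 3600      # fail the job if it runs longer than 1 hour
  ttlSecondsAfterFinished: 60      # garbage-collect the finished Job after 60 s
  template:
    spec:
      restartPolicy: Never         # do not restart failed containers in place
      containers:
        - name: launch-job
          # Illustrative reference; real images come from your container registry.
          image: <container_registry_URL>/my-training-image:latest
```

If a job disappears from the cluster shortly after finishing, that is the ttlSecondsAfterFinished cleanup at work, not a failure.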
Billing type
Kubernetes® Application
Use cases
  • Creating visualizations of your datasets and models, and visualizing tracking and monitoring insights with the ability to share them interactively with collaborators.

  • Efficient hyperparameter optimization and real-time debugging of model performance.

  • Launching ML experiments and jobs on Nebius AI infrastructure using NVIDIA GPUs, and understanding and visualizing GPU usage.

  • Easy deployment of large-scale, compute-intensive workloads.

Technical support

Nebius AI does not provide technical support for the product. If you have any issues, please refer to the developer’s information resources.

Product composition
  • Helm chart
  • Docker image
By using this product you agree to the Nebius AI Marketplace Terms of Service and the terms and conditions of the following software: Master Service Agreement, Weights & Biases Legal Documentation.