Label Studio is a flexible, open-source data labeling platform by HumanSignal that lets you label and annotate your text, image, and audio data objects. With Label Studio, you can set up machine learning workflows for:
- Pre-labeling by letting models predict labels and then having data annotators refine them manually.
- Auto-labeling by letting models create automatic annotations.
- Online learning by simultaneously updating your model while new annotations are created, letting you retrain your model on-the-fly.
- Active learning by selecting example tasks that the model cannot confidently label and passing them on to annotators for manual labeling.
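For pre-labeling, a model's output is attached to tasks as predictions that annotators then review. A minimal sketch of the task JSON, assuming a text classification project; the control and field names (`label`, `text`) are illustrative and must match your labeling config:

```python
import json

# Sketch of a Label Studio task carrying a model prediction (pre-label).
# The names "label" and "text" are assumptions: they must match the
# Choices control and Text object in your project's labeling config.
task = {
    "data": {"text": "Label Studio makes annotation easy."},
    "predictions": [
        {
            "model_version": "my-model-v1",  # hypothetical model name
            "score": 0.91,                   # model confidence
            "result": [
                {
                    "from_name": "label",    # name of the Choices control
                    "to_name": "text",       # name of the Text object
                    "type": "choices",
                    "value": {"choices": ["Positive"]},
                }
            ],
        }
    ],
}

# Tasks like this can be imported as a JSON file; annotators then refine
# the pre-labels instead of labeling from scratch.
print(json.dumps(task, indent=2))
```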
The components of Label Studio are available as modular, extensible packages:
- Label Studio Backend: The labeling platform backend.
- Label Studio Frontend: A user interface for labeling data, distributed as an NPM package that you can embed into your applications.
- Data Manager: A data exploration tool to manage data and tasks for labeling.
- Machine Learning Backends: Configs and boilerplate code for backends that predict data labels at various stages of the labeling process.
Click the button in this card to go to VM creation. The image will be automatically selected under Image/boot disk selection.
Under Network settings, enable a public IP address for the VM (Public IP: Auto for a random address or List if you have a reserved static address).
Under Access, paste the public key from the pair into the SSH key field.
Create the VM.
To access the Label Studio UI, go to http://<public_IP_address_of_VM> and use the following credentials:
- Password: The ID of your VM. See the how-to guide on getting information about a VM.
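Besides the UI, you can work with the deployed instance through the Label Studio REST API using an access token (found under your account settings in the UI). A minimal sketch using only the standard library; the IP address and token below are placeholders:

```python
from urllib import request

# Sketch of calling the Label Studio REST API on the VM.
# Both values are placeholders -- substitute your own.
BASE_URL = "http://203.0.113.10"    # public IP address of your VM (example)
TOKEN = "your-access-token"         # personal access token from the UI

# Label Studio authenticates API requests with a Token header.
req = request.Request(
    f"{BASE_URL}/api/projects",
    headers={"Authorization": f"Token {TOKEN}"},
)

# request.urlopen(req) would return the list of projects as JSON;
# here we only show how the authenticated request is formed.
print(req.full_url, req.get_header("Authorization"))
```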
You can use Label Studio for:
- Performing essential data labeling tasks with different types of data, such as labeling entire datasets and creating ground truth labels.
- Supplying predictions for labels (pre-labels) or performing continuous active learning for machine learning models.
- Using feedback from application users to label data from scratch or adjust predictions.
- Connecting with ML frameworks for auto labeling, continual learning, active learning, and creating prediction services.
- Comparing and verifying predictions from different model architectures or versions, and monitoring prediction errors.
- Setting up mixed labeling pipelines to allow manual adjustments from human annotators.
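Most of the scenarios above start from a project's labeling config, which is written in Label Studio's XML-like tag language. A small illustrative example for text sentiment classification; the names (`text`, `sentiment`) and choice values are assumptions for this sketch:

```python
import xml.etree.ElementTree as ET

# A minimal Label Studio labeling config for sentiment classification.
# The object/control names and choice values here are illustrative.
LABELING_CONFIG = """
<View>
  <Text name="text" value="$text"/>
  <Choices name="sentiment" toName="text" choice="single">
    <Choice value="Positive"/>
    <Choice value="Negative"/>
    <Choice value="Neutral"/>
  </Choices>
</View>
"""

# Parse the config to list the classes annotators can choose from.
root = ET.fromstring(LABELING_CONFIG)
choices = [c.get("value") for c in root.iter("Choice")]
print(choices)
```

You paste a config like this into the project's Labeling Interface settings; the `value="$text"` binding pulls the `text` field from each imported task.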
Nebius AI does not provide technical support for the product. If you have any issues, please refer to the developer’s information resources.