Analytics Overview

You can't improve what you don't measure. With LightTag you can measure everything!


Any project needs measurements to know what's going on. LightTag provides the main analytics you'll need to track the progress and quality of your annotation project.

As in machine learning, LightTag's analytics come in two flavors: "supervised" and "unsupervised".

Supervised and Unsupervised Metrics

Unsupervised analytics are those that don't require any additional supervision from you, such as inter-annotator agreement or productivity reports.
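
To make the idea concrete, here is a minimal sketch of pairwise inter-annotator agreement over classification labels. It is not LightTag's exact formula, and the annotator names and data structure are purely illustrative:

```python
from itertools import combinations

# Hypothetical data: annotations[annotator][example_id] = label
annotations = {
    "alice": {"ex1": "PERSON", "ex2": "ORG", "ex3": "ORG"},
    "bob":   {"ex1": "PERSON", "ex2": "ORG", "ex3": "PERSON"},
}

def pairwise_agreement(a, b):
    """Fraction of shared examples where two annotators chose the same label."""
    shared = set(a) & set(b)
    if not shared:
        return None
    return sum(a[ex] == b[ex] for ex in shared) / len(shared)

# Compare every pair of annotators.
for (name_a, anns_a), (name_b, anns_b) in combinations(annotations.items(), 2):
    print(name_a, name_b, pairwise_agreement(anns_a, anns_b))
```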

Supervised analytics are derived from the review process. Once you review the annotations created, LightTag can give you quality metrics for each annotator (and model).
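
To illustrate what "derived from the review process" means, here is a sketch (not LightTag's internal computation; the field names are assumptions) showing how a per-annotator precision falls out of accept/reject verdicts:

```python
# Hypothetical reviewed annotations: each has an annotator and a review verdict.
reviewed = [
    {"annotator": "alice", "accepted": True},
    {"annotator": "alice", "accepted": False},
    {"annotator": "bob",   "accepted": True},
]

def precision_by_annotator(rows):
    """Share of each annotator's annotations that the reviewer accepted."""
    totals, accepted = {}, {}
    for row in rows:
        who = row["annotator"]
        totals[who] = totals.get(who, 0) + 1
        accepted[who] = accepted.get(who, 0) + row["accepted"]
    return {who: accepted[who] / totals[who] for who in totals}

print(precision_by_annotator(reviewed))  # {'alice': 0.5, 'bob': 1.0}
```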

Human and Model Metrics

When you upload pre-annotations from your own models, LightTag can give you metrics about them as well.

For example, you can get "Inter Model Agreement" to see how much two or more models differ from each other in their output.
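
As a rough sketch of what such a comparison can look like (the span format below is an assumption, not LightTag's data model), agreement between two models' pre-annotations can be measured as the overlap of the spans they produce:

```python
# Hypothetical pre-annotations: each model emits (example_id, start, end, tag) spans.
model_a = {("ex1", 0, 5, "PERSON"), ("ex1", 10, 13, "ORG")}
model_b = {("ex1", 0, 5, "PERSON"), ("ex1", 20, 25, "ORG")}

def span_agreement(a, b):
    """Jaccard overlap of the exact spans two models predicted."""
    if not (a or b):
        return 1.0
    return len(a & b) / len(a | b)

print(span_agreement(model_a, model_b))  # 1 identical span out of 3 distinct -> ~0.33
```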

You can also see each model's performance once you've reviewed the data, making LightTag a great place to track model performance and the effect of additional labeled data.

Filtering

Each screen in the analytics section (except productivity) is filterable. This lets you control which metrics you're looking at and limit the annotation sources to a specific dataset, schema, or job.