Once you've reviewed data, we can give you supervised metrics. Specifically, LightTag can tell you your annotators' precision and recall.
A Quick Primer On Precision and Recall
Precision and recall are measurements commonly used in geek circles to describe how accurate something is.
If precision and recall don't mean much to you, here's the tl;dr:
Precision is the percentage of the annotations I made that were correct.
Recall is the percentage of the known correct annotations that I agreed with.
So if we know that "Dog" and "Cat" are correct, and I said "Dog", "Cat", and "Hamster", my recall is 100% and my precision is 66%.
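To make the arithmetic concrete, here's a minimal sketch of how the Dog/Cat/Hamster example works out (the variable names are illustrative, not part of LightTag):

```python
# Known correct annotations and one annotator's guesses (from the example above)
gold = {"Dog", "Cat"}
predicted = {"Dog", "Cat", "Hamster"}

# Annotations the annotator made that were actually correct
true_positives = gold & predicted

precision = len(true_positives) / len(predicted)  # 2 of 3 correct, ~66%
recall = len(true_positives) / len(gold)          # 2 of 2 found, 100%

print(round(precision, 2), round(recall, 2))
```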
Wait There is More (F1)
We show another number with a funny name: the F1-Score. It sounds cool, right?
The F1-Score combines precision and recall into a single number (technically their harmonic mean, not the plain average). If precision and recall tell you about the kind of mistakes someone makes, the F1-Score tells you how much they're making mistakes overall. A high F1-Score means they're doing really well, and a low one means they could use some one-on-one time with you.
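Strictly speaking, F1 is the harmonic mean of precision and recall rather than a plain average, which punishes a big gap between the two. A quick sketch, continuing the Dog/Cat/Hamster example:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Precision ~0.67 and recall 1.0 from the example give an F1 of about 0.8,
# noticeably lower than the plain average of ~0.83.
print(round(f1_score(2 / 3, 1.0), 2))
```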
The Super Power
If you click on one of the rows, we'll show you all of the annotations a person or model made, and you can filter them by error type.