
What is ground truth?

Ground truth is the best available reference label for a dataset—what you treat as “correct” when training or evaluating a model. In machine learning, ground truth might be a human-annotated label (“cat” vs “dog”), a polygon marking a building footprint, a transcript of an audio clip, or a verified value in a structured record.

Ground truth is essential because it is what your model is judged against. Accuracy, precision, recall, and other metrics only make sense when the “correct answer” is defined and consistently applied.
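To make this concrete, here is a minimal sketch of how standard metrics are computed against a ground-truth reference. The labels and predictions below are hypothetical, and precision/recall are shown for a single positive class:

```python
# Hypothetical ground-truth labels and model predictions for a cat/dog classifier.
ground_truth = ["cat", "dog", "cat", "cat", "dog", "cat"]
predictions  = ["cat", "cat", "cat", "dog", "dog", "cat"]

def evaluate(truth, preds, positive="cat"):
    """Compute accuracy, precision, and recall against ground truth."""
    tp = sum(t == positive and p == positive for t, p in zip(truth, preds))
    fp = sum(t != positive and p == positive for t, p in zip(truth, preds))
    fn = sum(t == positive and p != positive for t, p in zip(truth, preds))
    accuracy = sum(t == p for t, p in zip(truth, preds)) / len(truth)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return accuracy, precision, recall

accuracy, precision, recall = evaluate(ground_truth, predictions)
```

Every number here depends entirely on the ground-truth column: change a reference label and accuracy, precision, and recall all shift, which is why consistent labeling matters.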

Ground truth in real projects

In practice, ground truth is rarely perfect:

  1. Ambiguity: Some cases are subjective (sentiment, intent, visual boundaries, medical interpretation).
  2. Noise: Annotators can make mistakes or interpret guidelines differently.
  3. Evolving definitions: Label taxonomies change as product requirements change.

That’s why mature teams separate training labels (good enough to learn patterns at scale) from gold-standard labels (higher-confidence labels used for audits and evaluation), also called gold sets.
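In data terms, the split often amounts to a flag on each record. This is a minimal sketch, assuming each labeled record carries a hypothetical `reviewed` field marking labels that passed a higher-confidence audit:

```python
# Hypothetical labeled records; "reviewed" marks audited, gold-standard labels.
records = [
    {"id": 1, "label": "cat", "reviewed": True},
    {"id": 2, "label": "dog", "reviewed": False},
    {"id": 3, "label": "cat", "reviewed": False},
    {"id": 4, "label": "dog", "reviewed": True},
]

# Gold set: reserved for evaluation and audits, never for training.
gold_set = [r for r in records if r["reviewed"]]

# Training labels: larger and noisier, used to learn patterns at scale.
training_set = [r for r in records if not r["reviewed"]]
```

Keeping the gold set out of training prevents the model from being evaluated on labels it has already memorized.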

How teams improve ground truth quality

  1. Clear labeling guidelines and examples
  2. Multiple annotators and adjudication for hard cases
  3. Spot checks, audits, and “gold tasks” in workflows
  4. Measuring disagreement rates to find unclear classes
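The last step, measuring disagreement, can be sketched with a simple majority-vote rate. The item IDs and labels below are hypothetical, and the threshold for "hard case" is an assumption:

```python
from collections import Counter

# Hypothetical annotations: item ID -> labels from three annotators.
annotations = {
    "img_001": ["cat", "cat", "cat"],
    "img_002": ["cat", "dog", "cat"],
    "img_003": ["dog", "cat", "cat"],
    "img_004": ["dog", "dog", "dog"],
}

def disagreement_rate(labels):
    """Fraction of annotators who differ from the majority label."""
    _majority, count = Counter(labels).most_common(1)[0]
    return 1 - count / len(labels)

rates = {item: disagreement_rate(labels) for item, labels in annotations.items()}

# Items with any disagreement are candidates for adjudication or
# clearer guidelines; a real pipeline would pick its own threshold.
hard_cases = [item for item, rate in rates.items() if rate > 0]
```

Tracking these rates per class, not just per item, also surfaces label categories whose definitions are unclear.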