Performance of Human Annotators in Object Detection and Segmentation of Remotely Sensed Data

Roni Blushtein-Livnon, Tal Svoray, Michael Dorman

Research output: Contribution to journal › Article › peer-review

Abstract

This study introduces a laboratory experiment designed to assess the influence of annotation strategies, levels of class imbalance, and prior experience on the performance of human annotators. The experiment focuses on labeling aerial imagery in ArcGIS Pro to detect and segment small-scale photovoltaics (PVs), selected as a case study for rectangular objects. The experiment is conducted on images with a pixel size of 0.15 m, involving both expert and nonexpert participants, across different setup strategies and target-background ratio datasets. Our findings indicate that annotators generally perform better in object detection (OD) than in segmentation tasks. A marked tendency to commit more Type II errors (false negatives (FNs), i.e., undetected objects) than Type I errors (false positives (FPs), i.e., falsely detected objects that do not exist) was observed across all experimental setups and conditions, suggesting a consistent bias in annotation processes. Performance was better in tasks with higher target-background ratios (i.e., more objects per unit area). Prior experience did not significantly affect performance and may, in some cases, even lead to overestimation in segmentation. These results provide evidence that annotators are relatively cautious and tend to identify objects only when confident about them, prioritizing underestimation over overestimation. Annotators' performance is also influenced by object scarcity, declining in areas with extremely imbalanced class datasets and a low target-to-background ratio. These findings may inform annotation strategies for remote sensing (RS) research, where efficient human annotators are crucial in an era of growing demand for high-quality training data to improve segmentation and detection models.
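The error types and metrics discussed in the abstract map onto standard definitions: Type I errors (FPs) lower precision, Type II errors (FNs) lower recall, and segmentation quality is scored with intersection over union (IoU). A minimal sketch of these metrics follows; the function names and box convention (xmin, ymin, xmax, ymax) are illustrative assumptions, not taken from the paper's materials:

```python
def precision(tp: int, fp: int) -> float:
    """Fraction of annotated objects that are true targets (Type I errors, FPs, lower this)."""
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(tp: int, fn: int) -> float:
    """Fraction of true targets that were annotated (Type II errors, FNs, lower this)."""
    return tp / (tp + fn) if (tp + fn) else 0.0

def iou(box_a, box_b) -> float:
    """Intersection over union of two axis-aligned boxes given as (xmin, ymin, xmax, ymax)."""
    # Width and height of the overlapping region (zero if the boxes are disjoint).
    iw = max(0.0, min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]))
    ih = max(0.0, min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]))
    inter = iw * ih
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

Under this framing, the reported annotator bias (more FNs than FPs) corresponds to recall falling below precision.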

Original language: American English
Article number: 4407116
Journal: IEEE Transactions on Geoscience and Remote Sensing
Volume: 63
State: Published - 1 Jan 2025

Keywords

  • Error types
  • expert annotators
  • human annotation
  • intersection over union (IoU)
  • object detection (OD)
  • precision
  • recall
  • remote sensing (RS)
  • segmentation

All Science Journal Classification (ASJC) codes

  • Electrical and Electronic Engineering
  • General Earth and Planetary Sciences
