TY - GEN
T1 - Anchoring and agreement in syntactic annotations
AU - Berzak, Yevgeni
AU - Huang, Yan
AU - Barbu, Andrei
AU - Korhonen, Anna
AU - Katz, Boris
N1 - Publisher Copyright: © 2016 Association for Computational Linguistics
PY - 2016
Y1 - 2016
N2 - We present a study on two key characteristics of human syntactic annotations: anchoring and agreement. Anchoring is a well-known cognitive bias in human decision making, where judgments are drawn towards preexisting values. We study the influence of anchoring on a standard approach to the creation of syntactic resources, in which syntactic annotations are obtained via human editing of tagger and parser output. Our experiments demonstrate a clear anchoring effect and reveal unwanted consequences, including overestimation of parsing performance and lower quality of annotations in comparison with human-based annotations. Using sentences from the Penn Treebank WSJ, we also report systematically obtained inter-annotator agreement estimates for English dependency parsing. Our agreement results control for parser bias, and are consequential in that they are on par with state-of-the-art parsing performance for English newswire. We discuss the impact of our findings on strategies for future annotation efforts and parser evaluations.
AB - We present a study on two key characteristics of human syntactic annotations: anchoring and agreement. Anchoring is a well-known cognitive bias in human decision making, where judgments are drawn towards preexisting values. We study the influence of anchoring on a standard approach to the creation of syntactic resources, in which syntactic annotations are obtained via human editing of tagger and parser output. Our experiments demonstrate a clear anchoring effect and reveal unwanted consequences, including overestimation of parsing performance and lower quality of annotations in comparison with human-based annotations. Using sentences from the Penn Treebank WSJ, we also report systematically obtained inter-annotator agreement estimates for English dependency parsing. Our agreement results control for parser bias, and are consequential in that they are on par with state-of-the-art parsing performance for English newswire. We discuss the impact of our findings on strategies for future annotation efforts and parser evaluations.
UR - http://www.scopus.com/inward/record.url?scp=85072834989&partnerID=8YFLogxK
U2 - 10.18653/v1/d16-1239
DO - 10.18653/v1/d16-1239
M3 - Conference contribution
T3 - EMNLP 2016 - Conference on Empirical Methods in Natural Language Processing, Proceedings
SP - 2215
EP - 2224
BT - EMNLP 2016 - Conference on Empirical Methods in Natural Language Processing, Proceedings
T2 - 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016
Y2 - 1 November 2016 through 5 November 2016
ER -