TY  - CONF
T1  - QANom: Question-Answer driven SRL for Nominalizations
T2 - 28th International Conference on Computational Linguistics, COLING 2020
AU - Klein, Ayal
AU - Mamou, Jonathan
AU - Pyatkin, Valentina
AU - Weiss, Daniela Brook
AU - He, Hangfeng
AU - Roth, Dan
AU - Zettlemoyer, Luke
AU - Dagan, Ido
N1 - Funding Information: This work was supported in part by grants from Intel Labs, Facebook, the Israel Science Foundation grant 1951/17 and the German Research Foundation through the German-Israeli Project Cooperation (DIP, grant DA 1600/1-1). Publisher Copyright: © 2020 COLING 2020 - 28th International Conference on Computational Linguistics, Proceedings of the Conference. All rights reserved.
PY - 2020/12/1
Y1 - 2020/12/1
AB - We propose a new semantic scheme for capturing predicate-argument relations for nominalizations, termed QANom. This scheme extends the QA-SRL formalism (He et al., 2015), modeling the relations between nominalizations and their arguments via natural language question-answer pairs. We construct the first QANom dataset using controlled crowdsourcing, analyze its quality and compare it to expertly annotated nominal-SRL annotations, as well as to other QA-driven annotations. In addition, we train a baseline QANom parser for identifying nominalizations and labeling their arguments with question-answer pairs. Finally, we demonstrate the extrinsic utility of our annotations for downstream tasks using both indirect supervision and zero-shot settings.
UR - http://www.scopus.com/inward/record.url?scp=85115674964&partnerID=8YFLogxK
DO  - 10.18653/v1/2020.coling-main.274
M3 - Conference contribution
T3 - COLING 2020 - 28th International Conference on Computational Linguistics, Proceedings of the Conference
SP - 3069
EP - 3083
BT - Proceedings of the 28th International Conference on Computational Linguistics
A2 - Zong, Chengqing
A2 - Bel, Nuria
A2 - Scott, Donia
PB - Association for Computational Linguistics (ACL)
Y2 - 8 December 2020 through 13 December 2020
ER -