TY - GEN
T1 - Creating a Large Benchmark for Open Information Extraction
AU - Stanovsky, Gabriel
AU - Dagan, Ido
N1 - Funding Information: We would like to thank Mausam for fruitful discussions, and the anonymous reviewers for their helpful comments. This work was supported in part by grants from the MAGNET program of the Israeli Office of the Chief Scientist (OCS), the Israel Science Foundation grant 880/12, and the German Research Foundation through the German-Israeli Project Cooperation (DIP, grant DA 1600/1-1). Publisher Copyright: © 2016 Association for Computational Linguistics
PY - 2016
Y1 - 2016
N2 - Open information extraction (Open IE) was presented as an unrestricted variant of traditional information extraction. It has been gaining substantial attention, manifested by a large number of automatic Open IE extractors and downstream applications. In spite of this broad attention, the Open IE task definition has been lacking - there are no formal guidelines and no large scale gold standard annotation. Subsequently, the various implementations of Open IE resorted to small scale post-hoc evaluations, inhibiting an objective and reproducible cross-system comparison. In this work, we develop a methodology that leverages the recent QA-SRL annotation to create a first independent and large scale Open IE annotation, and use it to automatically compare the most prominent Open IE systems.
AB - Open information extraction (Open IE) was presented as an unrestricted variant of traditional information extraction. It has been gaining substantial attention, manifested by a large number of automatic Open IE extractors and downstream applications. In spite of this broad attention, the Open IE task definition has been lacking - there are no formal guidelines and no large scale gold standard annotation. Subsequently, the various implementations of Open IE resorted to small scale post-hoc evaluations, inhibiting an objective and reproducible cross-system comparison. In this work, we develop a methodology that leverages the recent QA-SRL annotation to create a first independent and large scale Open IE annotation, and use it to automatically compare the most prominent Open IE systems.
UR - http://www.scopus.com/inward/record.url?scp=85046902690&partnerID=8YFLogxK
U2 - https://doi.org/10.18653/v1/d16-1252
DO - https://doi.org/10.18653/v1/d16-1252
M3 - Conference contribution
T3 - EMNLP 2016 - Conference on Empirical Methods in Natural Language Processing, Proceedings
SP - 2300
EP - 2305
BT - EMNLP 2016 - Conference on Empirical Methods in Natural Language Processing, Proceedings
PB - Association for Computational Linguistics (ACL)
T2 - 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016
Y2 - 1 November 2016 through 5 November 2016
ER -