On making reading comprehension more comprehensive

Matt Gardner, Jonathan Berant, Hannaneh Hajishirzi, Alon Talmor, Sewon Min

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Machine reading comprehension, the task of evaluating a machine's ability to comprehend a passage of text, has seen a surge in popularity in recent years. There are many datasets targeted at reading comprehension, and many systems that perform as well as humans on some of these datasets. Despite all of this interest, there is no work that systematically defines what reading comprehension is. In this work, we justify a question answering approach to reading comprehension and describe the various kinds of questions one might use to more fully test a system's comprehension of a passage, moving beyond questions that only probe local predicate-argument structures. The main pitfall of this approach is that questions can easily have surface cues or other biases that allow a model to shortcut the intended reasoning process. We discuss ways proposed in the current literature to mitigate these shortcuts, and we conclude with recommendations for future dataset collection efforts.

Original language: English
Title of host publication: MRQA@EMNLP 2019 - Proceedings of the 2nd Workshop on Machine Reading for Question Answering
Publisher: Association for Computational Linguistics (ACL)
Pages: 105-112
Number of pages: 8
ISBN (Electronic): 9781950737819
State: Published - 2019
Event: 2nd Workshop on Machine Reading for Question Answering, MRQA@EMNLP 2019 - Hong Kong, China
Duration: 4 Nov 2019 → …

Publication series

Name: MRQA@EMNLP 2019 - Proceedings of the 2nd Workshop on Machine Reading for Question Answering

Conference

Conference: 2nd Workshop on Machine Reading for Question Answering, MRQA@EMNLP 2019
Country/Territory: China
City: Hong Kong
Period: 4/11/19 → …

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Computer Graphics and Computer-Aided Design
