Abstract
Large-scale pretrained language models are the major driving force behind recent improvements in performance on the Winograd Schema Challenge, a widely employed test of commonsense reasoning ability. We show, however, with a new diagnostic dataset, that these models are sensitive to linguistic perturbations of the Winograd examples that minimally affect human understanding. Our results highlight interesting differences between humans and language models: language models are more sensitive to number or gender alternations and synonym replacements than humans, whereas humans are more stable and consistent in their predictions, maintain a much higher absolute performance, and perform better on non-associative instances than on associative ones.
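To make the perturbation types concrete, the sketch below shows how a gender alternation and a synonym replacement might be applied to Winograd-style sentences. This is a minimal illustration only: the `perturb` helper and both word maps are hypothetical assumptions, not the dataset-construction code from the paper.

```python
# A minimal sketch of two of the perturbation types named in the abstract,
# applied to classic Winograd-style sentences. The word maps and the
# perturb() helper are illustrative assumptions, not the paper's code.

GENDER_MAP = {"he": "she", "his": "her", "man": "woman", "son": "daughter"}
SYNONYM_MAP = {"trophy": "award", "suitcase": "bag"}

def perturb(sentence: str, mapping: dict) -> str:
    """Swap whole tokens per the mapping; all other tokens are untouched."""
    return " ".join(mapping.get(tok, tok) for tok in sentence.split())

# Synonym replacement: the coreference puzzle is unchanged for humans.
print(perturb("The trophy does not fit in the suitcase because it is too big .",
              SYNONYM_MAP))
# -> The award does not fit in the bag because it is too big .

# Gender alternation: all gendered words are swapped consistently.
print(perturb("The man could not lift his son because he was so weak .",
              GENDER_MAP))
# -> The woman could not lift her daughter because she was so weak .
```

Under this framing, a model whose predictions flip on such minimal pairs is counted as sensitive to the perturbation, while human judgments on the same pairs remain largely stable.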
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics |
| Place of Publication | Online |
| Pages | 7590-7604 |
| Number of pages | 15 |
| DOIs | |
| State | Published - 1 Jul 2020 |