On adversarial removal of hypothesis-only bias in natural language inference

Yonatan Belinkov, Adam Poliak, Stuart M. Shieber, Benjamin Van Durme, Alexander Rush

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Popular Natural Language Inference (NLI) datasets have been shown to be tainted by hypothesis-only biases. Adversarial learning may help models ignore sensitive biases and spurious correlations in data. We evaluate whether adversarial learning can be used in NLI to encourage models to learn representations free of hypothesis-only biases. Our analyses indicate that the representations learned via adversarial learning may be less biased, with only small drops in NLI accuracy.
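The adversarial setup the abstract describes is commonly realized with a gradient reversal layer: an auxiliary hypothesis-only classifier is trained to predict the label from the shared encoding, while the encoder receives that classifier's gradient with its sign flipped, discouraging the representation from carrying hypothesis-only signal. A minimal NumPy sketch of this idea (not the authors' code; all shapes, weights, and the reversal strength `lam` are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Toy batch: hypothesis features x and 3-way NLI labels y.
x = rng.normal(size=(8, 5))
y = rng.integers(0, 3, size=8)

W_enc = rng.normal(scale=0.1, size=(5, 4))  # shared encoder
W_nli = rng.normal(scale=0.1, size=(4, 3))  # main NLI classifier
W_adv = rng.normal(scale=0.1, size=(4, 3))  # hypothesis-only adversary
lam = 1.0                                   # reversal strength (hyperparameter)

# Forward pass through the shared representation.
h = x @ W_enc
p_nli = softmax(h @ W_nli)
p_adv = softmax(h @ W_adv)

# Cross-entropy gradients w.r.t. the logits.
onehot = np.eye(3)[y]
g_nli = (p_nli - onehot) / len(y)
g_adv = (p_adv - onehot) / len(y)

# The adversary itself gets the ordinary gradient (it learns to
# exploit whatever bias remains in h)...
grad_W_adv = h.T @ g_adv

# ...but the gradient it sends back into the encoder is REVERSED,
# pushing h toward being uninformative for the adversary while the
# main NLI gradient is passed through unchanged.
grad_h = g_nli @ W_nli.T - lam * (g_adv @ W_adv.T)
grad_W_enc = x.T @ grad_h
```

In a full training loop, `grad_W_enc`, `grad_W_nli`, and `grad_W_adv` would feed an optimizer step; the sign flip on the adversary's contribution is the only change relative to standard multi-task training.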

Original language: English
Title of host publication: *SEM@NAACL-HLT 2019 - 8th Joint Conference on Lexical and Computational Semantics
Pages: 256-262
Number of pages: 7
ISBN (Electronic): 9781948087933
State: Published - 2019
Externally published: Yes
Event: 8th Joint Conference on Lexical and Computational Semantics, *SEM@NAACL-HLT 2019 - Minneapolis, United States
Duration: 6 Jun 2019 – 7 Jun 2019

Publication series

Name: *SEM@NAACL-HLT 2019 - 8th Joint Conference on Lexical and Computational Semantics

Conference

Conference: 8th Joint Conference on Lexical and Computational Semantics, *SEM@NAACL-HLT 2019
Country/Territory: United States
City: Minneapolis
Period: 6/06/19 – 7/06/19

All Science Journal Classification (ASJC) codes

  • Information Systems
  • Computer Science Applications
  • Computational Theory and Mathematics
