Do supervised distributional methods really learn lexical inference relations?

Omer Levy, Steffen Remus, Chris Biemann, Ido Dagan

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Distributional representations of words have recently been used in supervised settings for recognizing lexical inference relations between word pairs, such as hypernymy and entailment. We investigate a collection of these state-of-the-art methods, and show that they do not actually learn a relation between two words. Instead, they learn an independent property of a single word in the pair: whether that word is a "prototypical hypernym".
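
For readers unfamiliar with the setup, the following is a minimal, self-contained sketch (in Python; not the authors' code or data) of the kind of experiment the abstract alludes to: a linear classifier over concatenated word vectors is trained to detect hypernymy, then probed with mismatched pairs. All "words", vectors, category structure, and the shared "generality" direction below are synthetic assumptions chosen purely for illustration.

    # Sketch: a concat-based supervised hypernymy classifier that ends up
    # keying on a property of the second word alone. Synthetic data only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    dim, n_cats, members_per_cat = 50, 20, 15

    # Each category has a centroid; member vectors cluster around it.
    # Category-label ("hypernym") vectors share an extra generality
    # direction, standing in for the distributional signature of
    # prototypical hypernyms such as "animal" or "object".
    centroids = rng.normal(size=(n_cats, dim))
    generality = rng.normal(size=dim)
    labels = centroids + generality + 0.1 * rng.normal(size=(n_cats, dim))
    members = centroids[:, None, :] + 0.3 * rng.normal(
        size=(n_cats, members_per_cat, dim))

    X, y = [], []
    for c in range(n_cats):
        for m in range(members_per_cat):
            # Positive: (member of c, label of c) -- a true hypernym pair.
            X.append(np.concatenate([members[c, m], labels[c]]))
            y.append(1)
            # Negative: (member of c, an arbitrary non-label word).
            X.append(np.concatenate([members[c, m], rng.normal(size=dim)]))
            y.append(0)

    clf = LogisticRegression(max_iter=2000).fit(X, y)

    # Probe: mismatched pairs (member of c, label of a different c').
    # No hypernymy holds, so a classifier that learned the relation
    # should reject them; one that only checks whether the second word
    # looks like a prototypical hypernym will still accept them.
    probe = [np.concatenate([members[c, 0], labels[(c + 7) % n_cats]])
             for c in range(n_cats)]
    print("mean P(positive) on mismatched pairs:",
          clf.predict_proba(probe)[:, 1].mean())

The mechanism is that a linear classifier over concatenated vectors cannot express an interaction between the two halves of its input (that would require a bilinear term), so it falls back on per-word features; here that is the generality direction in the second word, mirroring the "prototypical hypernym" behavior the paper reports.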

Original language: English
Title of host publication: NAACL HLT 2015 - 2015 Conference of the North American Chapter of the Association for Computational Linguistics
Subtitle of host publication: Human Language Technologies, Proceedings of the Conference
Publisher: Association for Computational Linguistics (ACL)
Pages: 970-976
Number of pages: 7
ISBN (Electronic): 9781941643495
DOIs
State: Published - 2015
Event: Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT 2015 - Denver, United States
Duration: 31 May 2015 - 5 Jun 2015

Publication series

Name: NAACL HLT 2015 - 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference

Conference

Conference: Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT 2015
Country/Territory: United States
City: Denver
Period: 31/05/15 - 05/06/15

All Science Journal Classification (ASJC) codes

  • Computer Science Applications
  • Language and Linguistics
  • Linguistics and Language
