Exploring Compositional Architectures and Word Vector Representations for Prepositional Phrase Attachment

Yonatan Belinkov, Tao Lei, Regina Barzilay, Amir Globerson

Research output: Contribution to journal › Article › peer-review

Abstract

Prepositional phrase (PP) attachment disambiguation is a known challenge in syntactic parsing. The lexical sparsity associated with PP attachments motivates research in word representations that can capture pertinent syntactic and semantic features of the word. One promising solution is to use word vectors induced from large amounts of raw text. However, state-of-the-art systems that employ such representations yield modest gains in PP attachment accuracy. In this paper, we show that word vector representations can yield significant PP attachment performance gains. This is achieved via a non-linear architecture that is discriminatively trained to maximize PP attachment accuracy. The architecture is initialized with word vectors trained from unlabeled data, and relearns those to maximize attachment accuracy. We obtain additional performance gains with alternative representations such as dependency-based word vectors. When tested on both English and Arabic datasets, our method outperforms both a strong SVM classifier and state-of-the-art parsers. For instance, we achieve 82.6% PP attachment accuracy on Arabic, while the Turbo and Charniak self-trained parsers obtain 76.7% and 80.8% respectively.
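To make the abstract's setup concrete, the sketch below illustrates the general idea of scoring candidate attachment heads by composing word vectors through a non-linear layer and choosing the highest-scoring head. This is a minimal toy illustration, not the paper's actual model: the vocabulary, dimensions, random weights, and the `score` function are all assumptions, and a real system would train the weights (and the word vectors) discriminatively on labeled attachments.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy embedding dimension (hypothetical)

# Hypothetical "pretrained" vectors for candidate heads and the PP words.
vocab = {w: rng.normal(size=d) for w in ["ate", "spaghetti", "with", "fork"]}

# Parameters of a one-hidden-layer scorer; in a real system these would be
# learned to maximize PP attachment accuracy.
W = rng.normal(size=(d, 3 * d)) * 0.1
w_out = rng.normal(size=d) * 0.1

def score(head, prep, obj):
    """Non-linear composition score for attaching the PP (prep, obj) to head."""
    x = np.concatenate([vocab[head], vocab[prep], vocab[obj]])
    h = np.tanh(W @ x)  # non-linear hidden layer
    return float(w_out @ h)

# Classic ambiguity: does "with fork" attach to the verb or the noun?
candidates = ["ate", "spaghetti"]
scores = {h: score(h, "with", "fork") for h in candidates}
predicted = max(scores, key=scores.get)
print(predicted)  # arbitrary here, since the weights are untrained
```

With untrained random weights the prediction is meaningless; the point is only the shape of the computation: concatenate vectors, apply a non-linearity, score, and pick the argmax over candidate heads.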
Original language: Undefined/Unknown
Pages (from-to): 561-572
Number of pages: 12
Journal: Transactions of the Association for Computational Linguistics
Volume: 2
Issue number: 0
State: Published - 2014