Training with exploration improves a greedy stack LSTM parser

Miguel Ballesteros, Yoav Goldberg, Chris Dyer, Noah A. Smith

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

We adapt the greedy stack LSTM dependency parser of Dyer et al. (2015) to support a training-with-exploration procedure using dynamic oracles (Goldberg and Nivre, 2013) instead of assuming an error-free action history. This form of training, which accounts for model predictions at training time, improves parsing accuracies. We discuss some modifications needed in order to get training with exploration to work well for a probabilistic neural network dependency parser.
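
The training-with-exploration idea can be sketched briefly. At each parser state, a dynamic oracle assigns every legal transition a cost (roughly, how many gold dependencies taking it would make unreachable); the model is updated toward the best zero-cost action, but the transition actually taken is, with some probability, the parser's own greedy prediction, possibly an erroneous one, so that training visits the kinds of states the greedy parser reaches at test time. The Python sketch below is a minimal illustration under these assumptions; the parser interface (initial_state, scores, update, apply, legal_actions) and the dynamic_oracle_costs helper are hypothetical placeholders, not the authors' actual implementation.

import random

def train_with_exploration(parser, training_data, p_explore=0.9):
    # One epoch of error-exploration training for a greedy
    # transition-based parser.  All names on `parser` and the
    # dynamic_oracle_costs helper are illustrative placeholders.
    for sentence, gold_tree in training_data:
        state = parser.initial_state(sentence)
        while not state.is_terminal():
            scores = parser.scores(state)                    # model's score per action
            costs = dynamic_oracle_costs(state, gold_tree)   # 0 = still oracle-optimal
            legal = state.legal_actions()
            # Update the model toward the highest-scoring zero-cost action.
            target = max((a for a in legal if costs[a] == 0),
                         key=lambda a: scores[a])
            parser.update(state, scores, target)             # e.g. a log-loss gradient step
            # Exploration: usually follow the model's own greedy choice,
            # even when it is wrong, so training sees erroneous histories.
            if random.random() < p_explore:
                action = max(legal, key=lambda a: scores[a])
            else:
                action = target
            state = parser.apply(state, action)

The probabilistic details the abstract alludes to, such as how the update is defined for a locally normalized neural classifier, are abstracted into parser.update here.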

Original language: English
Title of host publication: EMNLP 2016 - Conference on Empirical Methods in Natural Language Processing, Proceedings
Publisher: Association for Computational Linguistics (ACL)
Pages: 2005-2010
Number of pages: 6
ISBN (Electronic): 9781945626258
DOIs
State: Published - 2016
Event: 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016 - Austin, United States
Duration: 1 Nov 2016 - 5 Nov 2016

Publication series

Name: EMNLP 2016 - Conference on Empirical Methods in Natural Language Processing, Proceedings

Conference

Conference: 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016
Country/Territory: United States
City: Austin
Period: 1/11/16 - 5/11/16

All Science Journal Classification (ASJC) codes

  • Computer Science Applications
  • Information Systems
  • Computational Theory and Mathematics
