Split and rephrase: Better evaluation and a stronger baseline

Roee Aharoni, Yoav Goldberg

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Splitting and rephrasing a complex sentence into several shorter sentences that convey the same meaning is a challenging problem in NLP. We show that while vanilla seq2seq models can reach high scores on the proposed benchmark (Narayan et al., 2017), they suffer from memorization of the training set, which contains more than 89% of the unique simple sentences from the validation and test sets. To address this, we present a new train-development-test data split and neural models augmented with a copy-mechanism, outperforming the best reported baseline by 8.68 BLEU and fostering further progress on the task.
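The abstract's central finding rests on measuring leakage between splits: the fraction of unique simple sentences in the validation and test sets that also occur in training. Below is a minimal sketch of such an overlap check; the file names, tab-separated layout, and period-based sentence splitting are illustrative assumptions, not the benchmark's actual format.

```python
# Hypothetical sketch of a split-leakage check in the spirit of the
# abstract's analysis. File names and the TSV layout are assumptions.

def load_simple_sentences(path):
    """Collect the set of unique simple (target-side) sentences in a split."""
    sentences = set()
    with open(path, encoding="utf-8") as f:
        for line in f:
            # Assumed format: complex sentence TAB simple sentences
            parts = line.rstrip("\n").split("\t", 1)
            if len(parts) != 2:
                continue
            # Each target may contain several simple sentences; splitting
            # on periods is a crude stand-in for real sentence segmentation.
            sentences.update(s.strip() for s in parts[1].split("."))
    sentences.discard("")
    return sentences

train = load_simple_sentences("train.tsv")
dev_test = load_simple_sentences("dev.tsv") | load_simple_sentences("test.tsv")

# Fraction of unique dev/test simple sentences already seen in training;
# the paper reports this exceeds 89% for the original benchmark split.
overlap = len(dev_test & train) / len(dev_test)
print(f"{overlap:.1%} of unique dev/test simple sentences also appear in train")
```

A high value from a check like this indicates that a model can score well by memorizing training targets rather than learning to split and rephrase, which motivates the new data split.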

Original language: English
Title of host publication: ACL 2018 - 56th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Short Papers)
Publisher: Association for Computational Linguistics (ACL)
Pages: 719-724
Number of pages: 6
ISBN (Electronic): 9781948087346
State: Published - 2018
Event: 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018 - Melbourne, Australia
Duration: 15 Jul 2018 → 20 Jul 2018

Publication series

Name: ACL 2018 - 56th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Short Papers)
Volume: 2

Conference

Conference: 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018
Country/Territory: Australia
City: Melbourne
Period: 15/07/18 → 20/07/18

All Science Journal Classification (ASJC) codes

  • Software
  • Computational Theory and Mathematics
