ON THE WEAKNESSES OF REINFORCEMENT LEARNING FOR NEURAL MACHINE TRANSLATION

Leshem Choshen, Lior Fox, Zohar Aizenbud, Omri Abend

Research output: Contribution to conference › Paper › peer-review

Abstract

Reinforcement learning (RL) is frequently used to increase performance in text generation tasks, including machine translation (MT), notably through the use of Minimum Risk Training (MRT) and Generative Adversarial Networks (GAN). However, little is known about what and how these methods learn in the context of MT. We prove that one of the most common RL methods for MT does not optimize the expected reward, and show that other methods take an infeasibly long time to converge. In fact, our results suggest that RL practices in MT are likely to improve performance only where the pre-trained parameters are already close to yielding the correct translation. Our findings further suggest that observed gains may be due to effects unrelated to the training signal, concretely, changes in the shape of the distribution curve.
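
For reference, the "expected reward" the abstract refers to is the standard objective targeted when fine-tuning an MT model with RL or MRT. The LaTeX sketch below uses conventional notation (source x, reference y*, model p_theta, sentence-level reward r such as BLEU); the symbols are illustrative and are not taken from the paper itself.

% Expected-reward objective commonly targeted by RL/MRT fine-tuning in MT.
% Conventional notation, not from the paper: x is the source sentence,
% y^* the reference translation, p_\theta the translation model, and
% r(y, y^*) a sentence-level reward such as BLEU.
\[
  \mathcal{L}_{\mathrm{RL}}(\theta)
  = -\,\mathbb{E}_{y \sim p_\theta(\cdot \mid x)}\!\left[ r(y, y^{*}) \right]
  = -\sum_{y} p_\theta(y \mid x)\, r(y, y^{*}).
\]
% The sum over all candidate translations y is intractable, so methods such as
% MRT approximate the expectation with a small set of sampled candidates; the
% paper's claim is that a common variant of this practice does not in fact
% optimize the objective above.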

Original language: English
State: Published - 2020
Event: 8th International Conference on Learning Representations, ICLR 2020 - Addis Ababa, Ethiopia
Duration: 30 Apr 2020 → …

Conference

Conference: 8th International Conference on Learning Representations, ICLR 2020
Country/Territory: Ethiopia
City: Addis Ababa
Period: 30/04/20 → …

All Science Journal Classification (ASJC) codes

  • Education
  • Linguistics and Language
  • Language and Linguistics
  • Computer Science Applications
