On the practical computational power of finite precision RNNs for language recognition

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

While Recurrent Neural Networks (RNNs) are famously known to be Turing complete, this relies on infinite precision in the states and unbounded computation time. We consider the case of RNNs with finite precision whose computation time is linear in the input length. Under these limitations, we show that different RNN variants have different computational power. In particular, we show that the LSTM and the Elman-RNN with ReLU activation are strictly stronger than the RNN with a squashing activation and the GRU. This is achieved because LSTMs and ReLU-RNNs can easily implement counting behavior. We show empirically that the LSTM does indeed learn to effectively use the counting mechanism.
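To make the counting claim concrete, the following is a minimal hand-set sketch (illustrative only, not the paper's construction): a single ReLU unit whose recurrence h_t = ReLU(h_{t-1} + w . x_t) counts unmatched 'a' symbols over one-hot inputs. Because ReLU does not saturate, the hidden state can grow without bound even under finite precision, which is exactly the behavior a squashing activation (tanh, sigmoid) cannot sustain.

import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

# One hand-set ReLU unit used as a counter over one-hot inputs [a, b]:
# 'a' increments the hidden state, 'b' decrements it.
w = np.array([1.0, -1.0])

def count_state(string):
    h = 0.0
    for ch in string:
        x = np.array([ch == "a", ch == "b"], dtype=float)  # one-hot encoding
        h = relu(h + w @ x)  # state grows without bound; no saturation
    return h

print(count_state("aaabbb"))  # 0.0: every 'a' matched by a later 'b'
print(count_state("aaabb"))   # 1.0: one unmatched 'a' remains

Note that this lone unit cannot by itself reject a string such as "abb", where the count dips below zero and the ReLU clips it back to 0; a full recognizer for a^n b^n-style languages would need extra machinery (for example, a unit that flags such a dip). With a tanh or sigmoid activation in the same role, the state would saturate near its asymptote, so under finite precision large counts become indistinguishable, which is the separation the abstract describes.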

Original language: American English
Title of host publication: ACL 2018 - 56th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Short Papers)
Publisher: Association for Computational Linguistics (ACL)
Pages: 740-745
Number of pages: 6
ISBN (Electronic): 9781948087346
State: Published - 1 Jan 2018
Event: 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018 - Melbourne, Australia
Duration: 15 Jul 2018 → 20 Jul 2018

Publication series

Name: ACL 2018 - 56th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Short Papers)
Volume: 2

Conference

Conference: 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018
Country/Territory: Australia
City: Melbourne
Period: 15/07/18 → 20/07/18

All Science Journal Classification (ASJC) codes

  • Software
  • Computational Theory and Mathematics
