Local Dynamics in Trained Recurrent Neural Networks

Alexander Rivkind, Omri Barak

Research output: Contribution to journal › Article › peer-review

Abstract

Learning a task induces connectivity changes in neural circuits, thereby changing their dynamics. To elucidate task-related neural dynamics, we study trained recurrent neural networks. We develop a mean field theory for reservoir computing networks trained to have multiple fixed point attractors. Our main result is that the dynamics of the network's output in the vicinity of attractors are governed by a low-order linear ordinary differential equation. The stability of this equation can be assessed, predicting training success or failure. As a consequence, networks of rectified linear units and networks of sigmoidal units are shown to have diametrically opposed properties with regard to learning attractors. Furthermore, a characteristic time constant, which remains finite at the edge of chaos, offers an explanation for the robustness of the network's output in the presence of variability in the internal neural dynamics. Finally, the proposed theory predicts state-dependent frequency selectivity in the network's response.
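To make the setting concrete, the following is a minimal, self-contained sketch (not the authors' code) of the kind of network the abstract describes: a random reservoir with a rank-one trained feedback loop, a least-squares readout fitted to hold two fixed point attractors, and a linearization around each attractor whose leading eigenvalue predicts training success or failure. All specific choices here (tanh units, the network size N, the gain g, the ridge parameter) are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: a reservoir x' = -x + J*phi(x) + m*z with scalar
# readout z = w @ phi(x), trained so that z* = +1 and z* = -1 are fixed
# points, then linearized to test their stability.

rng = np.random.default_rng(0)
N, g = 500, 1.2                                   # reservoir size and gain (assumptions)
J = g * rng.standard_normal((N, N)) / np.sqrt(N)  # random recurrent weights
m = rng.standard_normal(N)                        # feedback vector
phi = np.tanh                                     # sigmoidal nonlinearity

def clamped_fixed_point(z_star, dt=0.1, iters=3000):
    """Relax x' = -x + J*phi(x) + m*z with the output clamped to z*.
    Assumes the constant clamped input suppresses chaos so this converges."""
    x = np.zeros(N)
    for _ in range(iters):
        x += dt * (-x + J @ phi(x) + m * z_star)
    return x

# Fit a least-squares (ridge-regularized) readout w with w @ phi(x*) = z*
# at both target attractors.
targets = np.array([1.0, -1.0])
X = np.stack([clamped_fixed_point(z) for z in targets])        # fixed points, shape (2, N)
R = phi(X)                                                     # firing rates there
w = np.linalg.solve(R.T @ R + 1e-4 * np.eye(N), R.T @ targets)

def leading_eigenvalue(x_star):
    """Max real part of the closed-loop Jacobian -I + (J + m w^T) diag(phi'(x*))."""
    D = 1.0 - np.tanh(x_star) ** 2                # phi'(x) for tanh
    M = -np.eye(N) + (J + np.outer(m, w)) * D     # broadcasting scales column j by D[j]
    return np.linalg.eigvals(M).real.max()

for z_star, x_star in zip(targets, X):
    lam = leading_eigenvalue(x_star)
    verdict = "stable: training should succeed" if lam < 0 else "unstable: training is predicted to fail"
    print(f"z* = {z_star:+.0f}: leading eigenvalue {lam:+.3f} ({verdict})")
```

Swapping in a rectified linear nonlinearity (phi = lambda x: np.maximum(x, 0), with the matching derivative in the Jacobian) is one way to probe the ReLU-versus-sigmoid contrast the abstract mentions; nothing else in the sketch needs to change.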

Original language: English
Article number: 258101
Journal: Physical Review Letters
Volume: 118
Issue number: 25
DOI: 10.1103/PhysRevLett.118.258101
State: Published - 23 Jun 2017

All Science Journal Classification (ASJC) codes

  • General Physics and Astronomy
