Opening the black box: Low-dimensional dynamics in high-dimensional recurrent neural networks

David Sussillo, Omri Barak

Research output: Contribution to journal › Article › peer-review

Abstract

Recurrent neural networks (RNNs) are useful tools for learning nonlinear relationships between time-varying inputs and outputs with complex temporal dependencies. Recently developed algorithms have been successful at training RNNs to perform a wide variety of tasks, but the resulting networks have been treated as black boxes: their mechanism of operation remains unknown. Here we explore the hypothesis that fixed points, both stable and unstable, and the linearized dynamics around them, can reveal crucial aspects of how RNNs implement their computations. Further, we explore the utility of linearization in areas of phase space that are not true fixed points but merely points of very slow movement. We present a simple optimization technique that is applied to trained RNNs to find the fixed and slow points of their dynamics. Linearization around these slow regions can be used to explore, or reverse-engineer, the behavior of the RNN. We describe the technique, illustrate it using simple examples, and finally showcase it on three high-dimensional RNN examples: a 3-bit flip-flop device, an input-dependent sine wave generator, and a two-point moving average. In all cases, the mechanisms of trained networks could be inferred from the sets of fixed and slow points and the linearized dynamics around them.
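
The sketch below illustrates the kind of optimization the abstract describes: minimizing a scalar "speed" function of the network state to locate fixed and slow points, then linearizing around the minima found. It is a minimal illustration under assumptions not stated in the abstract: a vanilla continuous-time tanh RNN with random weights standing in for a trained network, the speed function q(x) = ½‖F(x)‖², and SciPy's L-BFGS-B as the optimizer; the paper's exact architecture, speed function, and optimizer may differ.

```python
# Minimal sketch of a fixed/slow-point search for an RNN (illustrative only).
# Assumed dynamics: dx/dt = F(x) = -x + W @ tanh(x) + b, with random W, b
# standing in for a trained network's parameters.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N = 50                                            # hypothetical network size
W = rng.normal(scale=1.2 / np.sqrt(N), size=(N, N))
b = rng.normal(scale=0.1, size=N)

def F(x):
    """Velocity field of the assumed RNN: dx/dt."""
    return -x + W @ np.tanh(x) + b

def q(x):
    """Scalar speed q(x) = 0.5 * ||F(x)||^2; minima with q ~ 0 are fixed
    points, small-but-nonzero minima are candidate slow points."""
    return 0.5 * np.dot(F(x), F(x))

def grad_q(x):
    """Analytic gradient of q: J(x)^T F(x), with J the Jacobian of F."""
    J = -np.eye(N) + W * (1.0 - np.tanh(x) ** 2)  # dF_i/dx_j
    return J.T @ F(x)

# Minimize q from many initial states (random here; states sampled along
# trajectories of the trained network would be a natural alternative).
candidates = []
for _ in range(20):
    x0 = rng.normal(scale=0.5, size=N)
    res = minimize(q, x0, jac=grad_q, method="L-BFGS-B")
    candidates.append((res.fun, res.x))

# Linearize around the slowest candidate: the Jacobian's eigenvalues indicate
# whether it is stable (all real parts < 0), a saddle, or merely a slow point.
q_star, x_star = min(candidates, key=lambda c: c[0])
J_star = -np.eye(N) + W * (1.0 - np.tanh(x_star) ** 2)
eigvals = np.linalg.eigvals(J_star)
print(f"q at slowest candidate: {q_star:.2e}")
print(f"max real part of Jacobian eigenvalues: {eigvals.real.max():.3f}")
```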

Original language: English
Pages (from-to): 626-649
Number of pages: 24
Journal: Neural Computation
Volume: 25
Issue number: 3
DOIs
State: Published - 2013
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • Arts and Humanities (miscellaneous)
  • Cognitive Neuroscience
