Learning to represent continuous variables in heterogeneous neural networks

Ran Darshan, Alexander Rivkind

Research output: Contribution to journal › Article › peer-review

Abstract

Animals must monitor continuous variables such as position or head direction. Manifold attractor networks, which enable a continuum of persistent neuronal states, provide a key framework to explain this monitoring ability. Neural networks with symmetric synaptic connectivity dominate this framework but are inconsistent with the diverse synaptic connectivity and neuronal representations observed in experiments. Here, we developed a theory for manifold attractors in trained neural networks that approximate a continuum of persistent states without assuming unrealistic symmetry. We exploit the theory to predict how asymmetries in the representation and heterogeneity in the connectivity affect the formation of the manifold via training, shape the network's response to stimuli, and govern mechanisms that can destabilize the manifold. Our work suggests that the functional properties of manifold attractors in the brain can be inferred from the often-overlooked asymmetries in connectivity and in the low-dimensional representation of the encoded variable.
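
As a concrete illustration of the "continuum of persistent states" the abstract refers to, the sketch below simulates a classical symmetric ring attractor, i.e., the idealized, hand-built case that the paper's theory moves beyond. This is not code from the paper; the network size, coupling strengths (J0, J1), input level, and time constants are arbitrary demonstration choices.

    import numpy as np

    # Minimal sketch (not from the paper): a classical symmetric ring attractor.
    # N threshold-linear rate units are labeled by preferred angles; a
    # translation-invariant cosine coupling plus uniform input creates a
    # continuum of persistent "bump" states encoding a circular variable.
    # All parameter values below are arbitrary demo choices.

    N = 256
    theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)   # preferred angles
    J0, J1 = -1.0, 3.0             # uniform inhibition and cosine-tuned excitation
    W = (J0 + J1 * np.cos(theta[:, None] - theta[None, :])) / N  # symmetric weights

    I_ext = 1.0                    # spatially uniform drive
    tau, dt = 10.0, 0.1            # membrane time constant and Euler step (a.u.)
    phi = lambda x: np.maximum(x, 0.0)   # threshold-linear transfer function

    # Start from a small bump at angle pi and let the rate dynamics settle:
    # tau * dr/dt = -r + phi(W r + I_ext)
    r = phi(0.1 * np.cos(theta - np.pi))
    for _ in range(5000):
        r += (dt / tau) * (-r + phi(W @ r + I_ext))

    # Decode the persistent state with a population vector. Because the
    # connectivity is translation invariant, a bump at any angle is equally
    # stable, so the decoded angle stays near where it was initialized.
    decoded = np.angle(np.sum(r * np.exp(1j * theta))) % (2.0 * np.pi)
    print(f"decoded bump position: {decoded:.3f} rad (initialized at {np.pi:.3f} rad)")

In this idealized model the continuum of attractor states follows directly from the built-in symmetry of the weights. The paper's contribution, by contrast, is a theory for how trained networks with heterogeneous, asymmetric connectivity can realize the same behavior, and how those asymmetries shape the manifold's formation, its response to stimuli, and its stability.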

Original language: English
Article number: 110612
Journal: Cell Reports
Volume: 39
Issue number: 1
DOIs
State: Published - 5 Apr 2022
Externally published: Yes

Keywords

  • CP: Neuroscience
  • continuous attractor
  • control
  • low-dimensional dynamics
  • low-rank perturbation
  • manifold attractor
  • neural computation
  • recurrent neural networks
  • training
  • working memory

All Science Journal Classification (ASJC) codes

  • General Biochemistry, Genetics and Molecular Biology
