Recurrent neural networks as versatile tools of neuroscience research

Research output: Contribution to journal › Review article › peer-review

Abstract

Recurrent neural networks (RNNs) are a class of computational models that are often used as tools to explain neurobiological phenomena, taking into account anatomical, electrophysiological and computational constraints. RNNs can either be designed to implement a certain dynamical principle, or they can be trained by input–output examples. Recently, there has been considerable progress in utilizing trained RNNs both for computational tasks and as explanations of neural phenomena. I will review how combining trained RNNs with reverse engineering can provide an alternative framework for modeling in neuroscience, potentially serving as a powerful hypothesis generation tool. Despite the recent progress and potential benefits, there are many fundamental gaps towards a theory of these networks. I will discuss these challenges and possible methods to attack them.
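The abstract contrasts RNNs designed to implement a dynamical principle with RNNs trained from input–output examples. As an illustration of the "designed" case (my own minimal sketch, not taken from the paper), a linear RNN with identity recurrent weights implements one classic dynamical principle, evidence integration: the hidden state simply accumulates its inputs over time.

```python
import numpy as np

# Minimal sketch (illustrative, not from the paper): a linear RNN
# hand-designed to implement a single dynamical principle -- evidence
# integration. Setting the recurrent weights to the identity makes the
# hidden state a perfect integrator of its input stream.
rng = np.random.default_rng(0)

n_units = 5
W_rec = np.eye(n_units)              # recurrent weights: identity -> perfect memory
w_in = rng.standard_normal(n_units)  # random input weights (hypothetical choice)

def run_rnn(inputs):
    """Iterate h[t+1] = W_rec @ h[t] + w_in * u[t], starting from h[0] = 0."""
    h = np.zeros(n_units)
    for u in inputs:
        h = W_rec @ h + w_in * u
    return h

inputs = [0.5, -0.2, 0.3]            # scalar input stream
h_final = run_rnn(inputs)
# Because W_rec is the identity, the final state equals w_in * sum(inputs):
assert np.allclose(h_final, w_in * sum(inputs))
```

Trained RNNs, by contrast, would start from random recurrent weights and adjust them by gradient descent on input–output examples; reverse engineering then asks what dynamical principle the trained network converged on.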

Original language: English
Pages (from-to): 1-6
Number of pages: 6
Journal: Current Opinion in Neurobiology
Volume: 46
DOIs
State: Published - Oct 2017

All Science Journal Classification (ASJC) codes

  • General Neuroscience

