TY - JOUR
T1 - Local Dynamics in Trained Recurrent Neural Networks
AU - Rivkind, Alexander
AU - Barak, Omri
N1 - Publisher Copyright: © 2017 American Physical Society.
PY - 2017/6/23
Y1 - 2017/6/23
N2 - Learning a task induces connectivity changes in neural circuits, thereby changing their dynamics. To elucidate task-related neural dynamics, we study trained recurrent neural networks. We develop a mean field theory for reservoir computing networks trained to have multiple fixed point attractors. Our main result is that the dynamics of the network's output in the vicinity of attractors is governed by a low-order linear ordinary differential equation. The stability of the resulting equation can be assessed, predicting training success or failure. As a consequence, networks of rectified linear units and of sigmoidal nonlinearities are shown to have diametrically different properties when it comes to learning attractors. Furthermore, a characteristic time constant, which remains finite at the edge of chaos, offers an explanation of the network's output robustness in the presence of variability of the internal neural dynamics. Finally, the proposed theory predicts state-dependent frequency selectivity in the network response.
UR - http://www.scopus.com/inward/record.url?scp=85021320728&partnerID=8YFLogxK
U2 - 10.1103/PhysRevLett.118.258101
DO - 10.1103/PhysRevLett.118.258101
M3 - Article
SN - 0031-9007
VL - 118
JO - Physical Review Letters
JF - Physical Review Letters
IS - 25
M1 - 258101
ER -