TY - JOUR
T1 - Quality of internal representation shapes learning performance in feedback neural networks
AU - Susman, Lee
AU - Mastrogiuseppe, Francesca
AU - Brenner, Naama
AU - Barak, Omri
N1 - Publisher Copyright: © 2021 authors. Published by the American Physical Society.
PY - 2021/2/23
Y1 - 2021/2/23
AB - A fundamental feature of complex biological systems is the ability to form feedback interactions with their environment. A prominent model for studying such interactions is reservoir computing, where learning acts on low-dimensional bottlenecks. Despite the simplicity of this learning scheme, the factors contributing to or hindering the success of training in reservoir networks are, in general, not well understood. In this work, we study nonlinear feedback networks trained to generate a sinusoidal signal and analyze how learning performance is shaped by the interplay between internal network dynamics and target properties. By performing exact mathematical analysis of linearized networks, we predict that learning performance is maximized when the target is characterized by an optimal, intermediate frequency that decreases monotonically with the strength of the internal reservoir connectivity. At the optimal frequency, the reservoir representation of the target signal is high-dimensional, desynchronized, and thus maximally robust to noise. We show that our predictions successfully capture the qualitative behavior of performance in nonlinear networks. Moreover, we find that the relationship between internal representations and performance can be further exploited in trained nonlinear networks to explain behaviors that have no linear counterpart. Our results indicate that a major determinant of learning success is the quality of the internal representation of the target, which in turn is shaped by an interplay between parameters controlling the internal network and those defining the task.
UR - http://www.scopus.com/inward/record.url?scp=85107038217&partnerID=8YFLogxK
DO - 10.1103/PhysRevResearch.3.013176
M3 - Article
SN - 2643-1564
VL - 3
JO - Physical Review Research
JF - Physical Review Research
IS - 1
M1 - 013176
ER -