Collective Learning by Ensembles of Altruistic Diversifying Neural Networks

Benjamin Brazowski, Elad Schneidman

Research output: Contribution to journal › Article

Abstract

Combining the predictions of collections of neural networks often outperforms the best single network. Such ensembles are typically trained independently, and their superior 'wisdom of the crowd' originates from the differences between networks. Collective foraging and decision making in socially interacting animal groups are often improved, or even optimal, thanks to local information sharing between conspecifics. We therefore present a model for co-learning by ensembles of interacting neural networks that aim to maximize their own performance but also their functional relations to other networks. We show that ensembles of interacting networks outperform independent ones, and that optimal ensemble performance is reached when the coupling between networks increases diversity and degrades the performance of individual networks. Thus, even without a global goal for the ensemble, optimal collective behavior emerges from local interactions between networks. We show how the optimal coupling strength scales with ensemble size, and that networks in these ensembles specialize functionally and become more 'confident' in their assessments. Moreover, optimal co-learning networks differ structurally, relying on sparser activity, a wider range of synaptic weights, and higher firing rates compared to independently trained networks. Finally, we explore interactions-based co-learning as a framework for expanding and boosting ensembles.
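The abstract does not give the exact form of the interaction term, but the core idea, each network minimizing its own task loss plus a coupling term that links it to the other ensemble members, can be sketched as below. The function name `coupled_losses`, the pairwise-overlap coupling, and the `coupling_strength` parameter are illustrative assumptions, not the paper's actual model; a positive coupling strength here penalizes agreement and so pushes the ensemble toward diversity.

```python
import torch.nn.functional as F

def coupled_losses(outputs, targets, coupling_strength=0.1):
    """Per-network losses for an ensemble whose members interact during training.

    outputs: list of logits tensors, one per network, each of shape (batch, classes).
    targets: class-index tensor of shape (batch,).
    coupling_strength: weight of the interaction term (sign and form are assumptions).
    """
    probs = [F.softmax(o, dim=1) for o in outputs]
    losses = []
    for i, logits in enumerate(outputs):
        task_loss = F.cross_entropy(logits, targets)  # each network's own performance
        # Interaction term: mean overlap with the other networks' predictive
        # distributions; the others are detached so every network only updates
        # its own parameters (a local objective, no global ensemble goal).
        overlap = sum(
            (probs[i] * probs[j].detach()).sum(dim=1).mean()
            for j in range(len(outputs)) if j != i
        ) / max(len(outputs) - 1, 1)
        losses.append(task_loss + coupling_strength * overlap)
    return losses
```

In use, each network would keep its own optimizer and, at every step, call backward() on its own entry of the returned list; the paper studies how the optimal coupling strength scales with ensemble size.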
Original language: English
Journal: arXiv
State: Submitted - 20 Jun 2020
