Selectively Sharing Experiences Improves Multi-Agent Reinforcement Learning

Matthias Gerstgrasser, Tom Danino, Sarah Keren

Research output: Contribution to journal › Conference article › peer-review

Abstract

We present a novel multi-agent RL approach, Selective Multi-Agent Prioritized Experience Relay, in which agents share with other agents a limited number of transitions they observe during training. The intuition behind this is that even a small number of relevant experiences from other agents could help each agent learn. Unlike many other multi-agent RL algorithms, this approach allows for largely decentralized training, requiring only a limited communication channel between agents. We show that our approach outperforms baseline no-sharing decentralized training and state-of-the-art multi-agent RL algorithms. Further, sharing only a small number of highly relevant experiences outperforms sharing all experiences between agents, and the performance uplift from selective experience sharing is robust across a range of hyperparameters and DQN variants.
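The abstract describes agents relaying a small, prioritized subset of their observed transitions to peers during otherwise decentralized training. The sketch below illustrates one plausible reading of that mechanism in Python: each agent ranks its local transitions by a priority score (here a stand-in for the TD error that prioritized experience replay typically uses) and relays only the top k to the other agents' replay buffers. All names (`Agent`, `relay_experiences`, `td_error`) and the exact selection rule are illustrative assumptions, not the paper's implementation.

```python
import heapq
import random
from collections import deque, namedtuple

# Hypothetical transition record for a DQN-style agent.
Transition = namedtuple("Transition", ["state", "action", "reward", "next_state", "done"])

class Agent:
    def __init__(self, buffer_size=100_000):
        self.buffer = deque(maxlen=buffer_size)  # local replay buffer

    def store(self, transition):
        self.buffer.append(transition)

    def td_error(self, transition):
        # Placeholder priority: a real DQN would use
        # |r + gamma * max_a' Q(s', a') - Q(s, a)| as the transition priority.
        return abs(transition.reward)

def relay_experiences(sender, receivers, k=32):
    """Relay only the sender's k highest-priority transitions to the other agents."""
    if not sender.buffer:
        return
    top_k = heapq.nlargest(k, sender.buffer, key=sender.td_error)
    for receiver in receivers:
        for t in top_k:
            receiver.store(t)  # relayed transitions enter the peer's local buffer

# Usage: periodically, each agent relays a small batch to its peers; each agent
# then trains on its own buffer as usual, so training stays largely decentralized.
agents = [Agent() for _ in range(3)]
for a in agents:
    for _ in range(100):
        a.store(Transition(random.random(), 0, random.random(), random.random(), False))
for a in agents:
    relay_experiences(a, [b for b in agents if b is not a], k=8)
```

Note the design point this sketch tries to capture: the communication channel carries only k transitions per relay step rather than the whole buffer, matching the abstract's finding that sharing a small number of highly relevant experiences outperforms sharing everything.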

Original language: English
Journal: Advances in Neural Information Processing Systems
Volume: 36
State: Published - 2023
Event: 37th Conference on Neural Information Processing Systems, NeurIPS 2023 - New Orleans, United States
Duration: 10 Dec 2023 - 16 Dec 2023

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing
