Multi-Agent Reinforcement Learning with Multi-Step Generative Models

Orr Krupnik, Igor Mordatch, Aviv Tamar

Research output: Contribution to journal › Conference article › peer-review

Abstract

We consider model-based reinforcement learning (MBRL) in 2-agent, high-fidelity continuous control problems, an important domain for robots interacting with other agents in the same workspace. For non-trivial dynamical systems, MBRL typically suffers from accumulating errors. Several recent studies have addressed this problem by learning latent variable models for trajectory segments and optimizing over behavior in the latent space. In this work, we investigate whether this approach can be extended to 2-agent competitive and cooperative settings. The fundamental challenge is how to learn models that capture interactions between agents, yet are disentangled enough to allow each agent's behavior to be optimized separately. We propose such models based on a disentangled variational auto-encoder, and demonstrate our approach on a simulated 2-robot manipulation task, where one robot can either help or distract the other. We show that our approach achieves better sample efficiency than a strong model-free RL baseline, and can learn both cooperative and adversarial behavior from the same data.
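
Below is a minimal sketch, in PyTorch, of what a disentangled variational auto-encoder over 2-agent trajectory segments could look like: each agent gets its own encoder and latent code, while a joint decoder reconstructs both agents' trajectories so that interactions are still captured. All names (e.g. DisentangledTrajectoryVAE), layer sizes, and the ELBO weighting here are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class DisentangledTrajectoryVAE(nn.Module):
    """Sketch of a disentangled VAE over 2-agent trajectory segments.

    Each agent i has its own encoder producing a latent z_i from that
    agent's segment; a single joint decoder maps (z_1, z_2) back to both
    segments, so agent interactions are modeled while the latents stay
    separable. Hypothetical dimensions throughout.
    """

    def __init__(self, obs_dim=16, horizon=10, latent_dim=8):
        super().__init__()
        seg_dim = obs_dim * horizon  # flattened per-agent trajectory segment
        # One encoder per agent -> disentangled per-agent latents.
        self.encoders = nn.ModuleList([
            nn.Sequential(nn.Linear(seg_dim, 128), nn.ReLU(),
                          nn.Linear(128, 2 * latent_dim))  # mean, log-variance
            for _ in range(2)
        ])
        # Joint decoder consumes both latents to capture interaction.
        self.decoder = nn.Sequential(
            nn.Linear(2 * latent_dim, 128), nn.ReLU(),
            nn.Linear(128, 2 * seg_dim))

    def encode(self, segs):
        # segs: list of two tensors, each of shape (batch, seg_dim)
        return [enc(s).chunk(2, dim=-1) for enc, s in zip(self.encoders, segs)]

    @staticmethod
    def reparameterize(mu, logvar):
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

    def forward(self, segs):
        stats = self.encode(segs)                      # [(mu1, lv1), (mu2, lv2)]
        zs = [self.reparameterize(mu, lv) for mu, lv in stats]
        recon = self.decoder(torch.cat(zs, dim=-1))    # joint reconstruction
        return recon, stats

def elbo_loss(model, segs, beta=1.0):
    """Standard VAE objective: reconstruction error plus a KL term per agent."""
    recon, stats = model(segs)
    target = torch.cat(segs, dim=-1)
    rec = ((recon - target) ** 2).sum(dim=-1).mean()
    kl = sum((-0.5 * (1 + lv - mu.pow(2) - lv.exp())).sum(dim=-1).mean()
             for mu, lv in stats)
    return rec + beta * kl
```

Once such a model is trained, one could hold one agent's latent fixed and optimize the other's, for instance by gradient ascent on a learned reward predictor over decoded trajectories. That per-agent optimization is what the disentangled structure is meant to enable, and cooperative versus adversarial behavior then comes down to the choice of objective over the same learned model.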

Original language: English
Pages (from-to): 776-790
Number of pages: 15
Journal: Proceedings of Machine Learning Research
Volume: 100
State: Published - 2019
Event: 3rd Conference on Robot Learning, CoRL 2019 - Osaka, Japan
Duration: 30 Oct 2019 - 1 Nov 2019

Keywords

  • Multi-agent systems
  • generative models
  • reinforcement learning

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
