Distributional policy optimization: An alternative approach for continuous control

Chen Tessler, Guy Tennenholtz, Shie Mannor

Research output: Contribution to journal › Conference article › peer-review

Abstract

We identify a fundamental problem in policy gradient-based methods in continuous control. As policy gradient methods require access to the agent's underlying probability distribution, they limit policy representation to parametric distribution classes. We show that optimizing over such sets results in local movement in the action space and thus convergence to sub-optimal solutions. We suggest a novel distributional framework, able to represent arbitrary distribution functions over the continuous action space. Using this framework, we construct a generative scheme, trained using an off-policy actor-critic paradigm, which we call the Generative Actor Critic (GAC). Compared to policy gradient methods, GAC does not require knowledge of the underlying probability distribution, thereby overcoming these limitations. Empirical evaluation shows that our approach is comparable to, and often surpasses, current state-of-the-art baselines in continuous domains.
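
The abstract describes GAC only at a high level. As an illustration of the general idea of a noise-conditioned generative actor, which defines an action distribution implicitly rather than through a parametric density, paired with an off-policy critic, here is a minimal sketch in PyTorch. The class names, network sizes, and the plain TD-error critic loss are assumptions for illustration; they are not the paper's actual GAC training objective, which the abstract does not specify.

```python
# Illustrative sketch only: a generative (noise-conditioned) actor plus an
# off-policy critic. This is NOT the exact GAC loss from the paper.
import torch
import torch.nn as nn


class GenerativeActor(nn.Module):
    """Maps (state, noise) -> action, defining an action distribution
    implicitly, with no explicit parametric density."""
    def __init__(self, state_dim, action_dim, noise_dim=8, hidden=256):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(
            nn.Linear(state_dim + noise_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh(),  # actions in [-1, 1]
        )

    def forward(self, state):
        # Sampling fresh noise per call yields a sample from the implicit policy.
        z = torch.randn(state.shape[0], self.noise_dim, device=state.device)
        return self.net(torch.cat([state, z], dim=-1))


class Critic(nn.Module):
    """Standard Q(s, a) estimator, trained off-policy from replay data."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))


def critic_update(critic, target_critic, actor, batch, optimizer, gamma=0.99):
    """One TD(0) step on a replay batch (s, a, r, s2, done); r and done are
    column tensors of shape (batch, 1)."""
    s, a, r, s2, done = batch
    with torch.no_grad():
        # Bootstrap with an action sampled from the generative actor.
        target = r + gamma * (1.0 - done) * target_critic(s2, actor(s2))
    loss = ((critic(s, a) - target) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the actor only needs to produce samples, not densities, it is compatible with training signals derived from the critic without restricting the policy to a parametric family; how that signal is constructed is the subject of the paper itself.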

Original language: English
Journal: Advances in Neural Information Processing Systems
Volume: 32
State: Published - 2019
Event: 33rd Annual Conference on Neural Information Processing Systems, NeurIPS 2019 - Vancouver, Canada
Duration: 8 Dec 2019 – 14 Dec 2019

All Science Journal Classification (ASJC) codes

  • Information Systems
  • Signal Processing
  • Computer Networks and Communications
