TY - GEN
T1 - Distributional multivariate policy evaluation and exploration with the Bellman GAN - Supplementary material
AU - Freirich, Dror
AU - Shimkin, Tzahi
AU - Meir, Ron
AU - Tamar, Aviv
N1 - Publisher Copyright: Copyright 2019 by the author(s).
PY - 2019
Y1 - 2019
AB - The recently proposed distributional approach to reinforcement learning (DiRL) is centered on learning the distribution of the reward-to-go, often referred to as the value distribution. In this work, we show that the distributional Bellman equation, which drives DiRL methods, is equivalent to a generative adversarial network (GAN) model. In this formulation, DiRL can be seen as learning a deep generative model of the value distribution, driven by the discrepancy between the distribution of the current value, and the distribution of the sum of current reward and next value. We use this insight to propose a GAN-based approach to DiRL, which leverages the strengths of GANs in learning distributions of high-dimensional data. In particular, we show that our GAN approach can be used for DiRL with multivariate rewards, an important setting which cannot be tackled with prior methods. The multivariate setting also allows us to unify learning the distribution of values and state transitions, allowing us to devise a novel exploration method that is driven by the discrepancy in estimating both values and states.
UR - http://www.scopus.com/inward/record.url?scp=85079465533&partnerID=8YFLogxK
M3 - Conference contribution
T3 - 36th International Conference on Machine Learning, ICML 2019
SP - 3504
EP - 3508
BT - 36th International Conference on Machine Learning, ICML 2019
T2 - 36th International Conference on Machine Learning, ICML 2019
Y2 - 9 June 2019 through 15 June 2019
ER -