TY - CONF
T1 - Optimistic policy optimization with bandit feedback
AU - Efroni, Yonathan
AU - Shani, Lior
AU - Rosenberg, Aviv
AU - Mannor, Shie
N1 - Publisher Copyright: © 2020 37th International Conference on Machine Learning, ICML 2020. All rights reserved.
PY - 2020
Y1 - 2020
AB - Policy optimization methods are one of the most widely used classes of Reinforcement Learning (RL) algorithms. Yet, so far, such methods have been mostly analyzed from an optimization perspective, without addressing the problem of exploration, or by making strong assumptions on the interaction with the environment. In this paper we consider model-based RL in the tabular finite-horizon MDP setting with unknown transitions and bandit feedback. For this setting, we propose an optimistic policy optimization algorithm for which we establish $\tilde{O}(\sqrt{S^2AH^4K})$ regret for stochastic rewards. Furthermore, we prove $\tilde{O}(\sqrt{S^2AH^4}K^{2/3})$ regret for adversarial rewards. Interestingly, this result matches previous bounds derived for the bandit feedback case, yet with known transitions. To the best of our knowledge, the two results are the first sub-linear regret bounds obtained for policy optimization algorithms with unknown transitions and bandit feedback.
UR - http://www.scopus.com/inward/record.url?scp=85105274700&partnerID=8YFLogxK
M3 - Conference contribution
T3 - 37th International Conference on Machine Learning, ICML 2020
SP - 8562
EP - 8571
BT - 37th International Conference on Machine Learning, ICML 2020
A2 - Daumé III, Hal
A2 - Singh, Aarti
T2 - 37th International Conference on Machine Learning, ICML 2020
Y2 - 13 July 2020 through 18 July 2020
ER -