TY - GEN
T1 - Planning and learning with stochastic action sets
AU - Boutilier, Craig
AU - Cohen, Alon
AU - Hassidim, Avinatan
AU - Mansour, Yishay
AU - Meshi, Ofer
AU - Mladenov, Martin
AU - Schuurmans, Dale
N1 - Publisher Copyright: © 2018 International Joint Conferences on Artificial Intelligence. All rights reserved.
PY - 2018
Y1 - 2018
AB - In many practical uses of reinforcement learning (RL) the set of actions available at a given state is a random variable, with realizations governed by an exogenous stochastic process. Somewhat surprisingly, the foundations for such sequential decision processes have been unaddressed. In this work, we formalize and investigate MDPs with stochastic action sets (SAS-MDPs) to provide these foundations. We show that optimal policies and value functions in this model have a structure that admits a compact representation. From an RL perspective, we show that Q-learning with sampled action sets is sound. In model-based settings, we consider two important special cases: when individual actions are available with independent probabilities, and a sampling-based model for unknown distributions. We develop polynomial-time value and policy iteration methods for both cases, and provide a polynomial-time linear programming solution for the first case.
UR - http://www.scopus.com/inward/record.url?scp=85055710589&partnerID=8YFLogxK
DO - 10.24963/ijcai.2018/650
M3 - Conference contribution
T3 - IJCAI International Joint Conference on Artificial Intelligence
SP - 4674
EP - 4682
BT - Proceedings of the 27th International Joint Conference on Artificial Intelligence, IJCAI 2018
A2 - Lang, Jérôme
T2 - 27th International Joint Conference on Artificial Intelligence, IJCAI 2018
Y2 - 13 July 2018 through 19 July 2018
ER -