Planning in Hierarchical Reinforcement Learning: Guarantees for Using Local Policies

Research output: Contribution to journal › Conference article › peer-review

Abstract

We consider a setting of hierarchical reinforcement learning in which the reward is a sum of components. For each component we are given a policy that maximizes it, and our goal is to assemble from these individual policies a single policy that maximizes the sum of the components. We provide theoretical guarantees for assembling such policies in deterministic MDPs with collectible rewards. Our approach builds on formulating this problem as a traveling salesman problem with discounted reward. We focus on local solutions, i.e., policies that use only information from the current state; they are therefore easy to implement and require little computation. We propose three local stochastic policies and prove that, in the worst case, they guarantee better performance than any deterministic local policy; experimental results suggest that they also perform better on average.
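To make the setting concrete, the following is a minimal toy sketch of the idea of a local stochastic policy for collectible rewards on a line. It is illustrative only and not the paper's actual policies: the rule of weighting each remaining reward by gamma^distance, the discount factor value, and the one-dimensional environment are all assumptions made here for the example.

```python
import random

GAMMA = 0.9  # discount factor (assumed value, for illustration only)

def discounted_return(order, start, positions):
    """Discounted return for collecting unit rewards in the given order
    on a line, where moving one unit of distance takes one time step."""
    t, pos, total = 0, start, 0.0
    for i in order:
        t += abs(positions[i] - pos)  # travel time to the next reward
        pos = positions[i]
        total += GAMMA ** t           # reward of 1, discounted by arrival time
    return total

def local_stochastic_choice(pos, remaining, positions, rng):
    """Local stochastic rule (hypothetical, not the paper's exact policies):
    pick the next reward with probability proportional to gamma^distance,
    using only the current position -- no global tour planning."""
    weights = [GAMMA ** abs(positions[i] - pos) for i in remaining]
    r = rng.random() * sum(weights)
    for i, w in zip(remaining, weights):
        r -= w
        if r <= 0.0:
            return i
    return remaining[-1]

def rollout(start, positions, rng):
    """Assemble a full collection order by repeatedly applying the local rule."""
    pos, remaining, order = start, list(range(len(positions))), []
    while remaining:
        i = local_stochastic_choice(pos, remaining, positions, rng)
        remaining.remove(i)
        order.append(i)
        pos = positions[i]
    return order, discounted_return(order, start, positions)
```

Because the choice rule reads only the current state, it matches the paper's notion of a local policy; the randomization is what lets such a policy hedge against worst-case reward placements where any deterministic local rule commits to a bad tour.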

Original language: English
Pages (from-to): 906-934
Number of pages: 29
Journal: Proceedings of Machine Learning Research
Volume: 117
State: Published - 2020
Externally published: Yes
Event: 31st International Conference on Algorithmic Learning Theory, ALT 2020 - San Diego, United States
Duration: 8 Feb 2020 - 11 Feb 2020
https://proceedings.mlr.press/v117

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
