Abstract
In the context of reinforcement learning, we introduce the concept of the criticality of a state, which indicates the extent to which the choice of action in that particular state influences the expected return. That is, a state in which the choice of action is more likely to influence the final outcome is considered more critical than a state in which it is less likely to do so. We formulate a criticality-based varying step number algorithm (CVS), a flexible step-number algorithm that utilizes a criticality function either provided by a human or learned directly from the environment. We test it in three domains: the Atari Pong environment, the Road-Tree environment, and the Shooter environment. We demonstrate that CVS outperforms popular learning algorithms such as Deep Q-Learning and Monte Carlo.
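As a rough illustration of the idea (not the paper's exact formulation, which the abstract does not spell out), the sketch below computes an n-step return whose step number is chosen from a criticality score. The functions `criticality_fn` and `value_fn`, the linear criticality-to-step mapping, and the parameter names are all assumptions made for illustration.

```python
def cvs_return(rewards, states, value_fn, criticality_fn,
               gamma=0.99, n_min=1, n_max=10, done=False):
    """N-step return with a criticality-dependent step number (sketch).

    Expects len(states) == len(rewards) + 1; criticality_fn(s) is
    assumed to return a score in [0, 1].
    """
    # Choose the step number for the state that starts this slice.
    c = criticality_fn(states[0])
    n = int(round(n_min + c * (n_max - n_min)))  # assumed linear mapping
    n = min(n, len(rewards))  # do not look past the available transitions

    # Standard n-step return: discounted rewards plus a bootstrapped tail.
    g = sum(gamma ** k * rewards[k] for k in range(n))
    if not (done and n == len(rewards)):  # skip bootstrap at terminal states
        g += gamma ** n * value_fn(states[n])
    return g
```

Whether higher criticality should shorten or lengthen the lookahead horizon is a design choice the abstract leaves open; flipping the mapping (e.g., `n_max - c * (n_max - n_min)`) gives the opposite behavior.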
| Original language | English |
| --- | --- |
| Article number | 2150019 |
| Journal | International Journal on Artificial Intelligence Tools |
| Volume | 30 |
| Issue number | 4 |
| DOIs | |
| State | Published - Jun 2021 |
Keywords
- Human-aided reinforcement learning
- deep reinforcement learning
All Science Journal Classification (ASJC) codes
- Artificial Intelligence