TY - JOUR
T1 - Leveraging human knowledge in tabular reinforcement learning: A study of human subjects
AU - Rosenfeld, A.
AU - Cohen, M.
AU - Taylor, M.
AU - Kraus, S.
PY - 2018/9/17
Y1 - 2018/9/17
N2 - Reinforcement learning (RL) can be extremely effective in solving complex, real-world problems. However, injecting human knowledge into an RL agent may require extensive effort and expertise on the human designer's part. To date, human factors are generally not considered in the development and evaluation of possible RL approaches. In this article, we set out to investigate how different methods for injecting human knowledge are applied, in practice, by human designers of varying levels of knowledge and skill. We perform the first empirical evaluation of several methods, including a newly proposed method named State Action Similarity Solutions (SASS) which is based on the notion of similarities in the agent's state–action space. Through this human study, consisting of 51 human participants, we shed new light on the human factors that play a key role in RL. We find that the classical reward shaping technique seems to be the most natural method for most designers, both expert and non-expert, to speed up RL. However, we further find that our proposed method SASS can be effectively and efficiently combined with reward shaping, and provides a beneficial alternative to using only a single-speedup method with minimal human designer effort overhead.
UR - https://arxiv.org/abs/1805.05769
UR - https://www.cambridge.org/core/journals/knowledge-engineering-review/article/leveraging-human-knowledge-in-tabular-reinforcement-learning-a-study-of-human-subjects/C6B373298388E622CE1CF032DC2831AF
M3 - Article
SN - 0269-8889
VL - 33
SP - 1
EP - 25
JO - Knowledge Engineering Review
JF - Knowledge Engineering Review
M1 - e14
ER -