Leveraging human knowledge in tabular reinforcement learning: A study of human subjects

A. Rosenfeld, M. Cohen, M. Taylor, S. Kraus

Research output: Contribution to journal › Article › peer-review


Reinforcement learning (RL) can be extremely effective in solving complex, real-world problems. However, injecting human knowledge into an RL agent may require extensive effort and expertise on the human designer's part. To date, human factors are generally not considered in the development and evaluation of possible RL approaches. In this article, we set out to investigate how different methods for injecting human knowledge are applied, in practice, by human designers of varying levels of knowledge and skill. We perform the first empirical evaluation of several methods, including a newly proposed method named State Action Similarity Solutions (SASS), which is based on the notion of similarities in the agent's state–action space. Through this human study, consisting of 51 human participants, we shed new light on the human factors that play a key role in RL. We find that the classical reward shaping technique seems to be the most natural method for most designers, both expert and non-expert, to speed up RL. However, we further find that our proposed method SASS can be effectively and efficiently combined with reward shaping, providing a beneficial alternative to using a single speedup method alone, with minimal additional effort from the human designer.
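The two speedup ideas named in the abstract, reward shaping and SASS-style generalization over similar state–action pairs, can be illustrated with a small tabular Q-learning sketch. This is only an illustration under assumed definitions: the chain environment, the potential function, the similarity function, and the way the TD update is shared are hypothetical placeholders, not the paper's actual formulation.

```python
import random

# Toy sketch (not the paper's implementation): tabular Q-learning on a
# short chain, sped up with (a) potential-based reward shaping and
# (b) a SASS-style update that generalizes each TD error to "similar"
# state-action pairs. The potential and similarity functions below stand
# in for knowledge a hypothetical human designer would supply.

N_STATES = 5                  # states 0..4 on a chain; start at 0
ACTIONS = (-1, 1)             # step left / step right
GOAL = N_STATES - 1

def potential(s):
    """Assumed designer knowledge: states closer to the goal look better."""
    return float(s)

def similarity(s, a, s2, a2):
    """Assumed similarity: the same action taken in an adjacent state."""
    return 0.5 if a == a2 and abs(s - s2) == 1 else 0.0

def train(episodes=200, alpha=0.5, gamma=0.9, eps=0.1, use_sass=True, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != GOAL:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: Q[(s, x)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == GOAL else 0.0
            # Potential-based reward shaping: F(s, s') = gamma*phi(s') - phi(s)
            r += gamma * potential(s2) - potential(s)
            bootstrap = 0.0 if s2 == GOAL else max(Q[(s2, x)] for x in ACTIONS)
            delta = r + gamma * bootstrap - Q[(s, a)]
            Q[(s, a)] += alpha * delta
            if use_sass:
                # SASS-style generalization: share a fraction of the TD
                # update with similar state-action pairs.
                for (t, b) in Q:
                    w = similarity(s, a, t, b)
                    if w > 0.0:
                        Q[(t, b)] += alpha * w * delta
            s = s2
    return Q

Q = train()
```

After training, the greedy policy should prefer stepping right (toward the goal) in every non-goal state; the `use_sass` flag makes it easy to compare plain shaped Q-learning against the combined shaping-plus-similarity variant.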
Original language: English
Article number: e14
Pages (from-to): 1-25
Number of pages: 25
Journal: Knowledge Engineering Review
State: Published - 17 Sep 2018

