Speeding up tabular reinforcement learning using state-action similarities

Ariel Rosenfeld, Matthew E. Taylor, Sarit Kraus

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

One of the most prominent approaches for speeding up reinforcement learning is injecting human prior knowledge into the learning agent. This paper proposes a novel method to speed up temporal difference learning by using state-action similarities. These hand-coded similarities are tested in three well-studied domains of varying complexity, demonstrating our approach's benefits.
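The abstract does not spell out the update rule, so the following is only a rough illustrative sketch of the general idea, not the paper's algorithm: tabular Q-learning in which each temporal-difference update to a pair (s, a) is also propagated to pairs judged similar by a hand-coded similarity function, scaled by the similarity weight. The similarity function `sim`, the toy chain MDP, and all parameters are assumptions made for illustration.

```python
import random

# Illustrative sketch only (not the paper's exact method): Q-learning on a
# chain MDP where a TD update to (s, a) also updates similar pairs,
# weighted by a hand-coded similarity in [0, 1].
ALPHA, GAMMA = 0.1, 0.95
N_STATES = 10
ACTIONS = [1, -1]  # move right / left along the chain

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def sim(sa1, sa2):
    """Hand-coded (assumed) similarity: same action, nearby states."""
    (s1, a1), (s2, a2) = sa1, sa2
    if a1 != a2:
        return 0.0
    return max(0.0, 1.0 - abs(s1 - s2) / 3.0)

def td_update(s, a, r, s_next):
    """Standard Q-learning TD error, propagated to similar pairs."""
    target = r + GAMMA * max(Q[(s_next, b)] for b in ACTIONS)
    delta = target - Q[(s, a)]
    for sa in Q:
        w = sim((s, a), sa)
        if w > 0.0:
            Q[sa] += ALPHA * w * delta  # similarity-weighted update

# Toy training loop: reward 1 for reaching the right end of the chain.
random.seed(0)
for _ in range(200):
    s = 0
    while s < N_STATES - 1:
        if random.random() < 0.2:          # epsilon-greedy exploration
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda b: Q[(s, b)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        td_update(s, a, r, s_next)
        s = s_next
```

The intended speed-up comes from the propagation step: one observed transition informs many table entries at once, so values spread faster than with a plain Q-learning update to a single cell.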

Original language: English
Title of host publication: 16th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2017
Editors: Edmund Durfee, Michael Winikoff, Kate Larson, Sanmay Das
Pages: 1722-1724
Number of pages: 3
ISBN (Electronic): 9781510855076
State: Published - 2017
Event: 16th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2017 - Sao Paulo, Brazil
Duration: 8 May 2017 – 12 May 2017

Publication series

Name: Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS
Volume: 3

Conference

Conference: 16th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2017
Country/Territory: Brazil
City: Sao Paulo
Period: 8/05/17 – 12/05/17

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
