Harnessing Reinforcement Learning for Neural Motion Planning

Tom Jurgenson, Aviv Tamar

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Motion planning is an essential component in most of today’s robotic applications. In this work, we consider the learning setting, where a set of solved motion planning problems is used to improve the efficiency of motion planning on different, yet similar problems. This setting is important in applications with rapidly changing environments, such as e-commerce, among others. We investigate a general deep learning based approach, where a neural network is trained to map an image of the domain, the current robot state, and a goal robot state to the next robot state in the plan. We focus on the learning algorithm and compare supervised learning methods with reinforcement learning (RL) algorithms. We first establish that supervised learning approaches are inferior in their accuracy due to insufficient data on the boundary of the obstacles, an issue that RL methods mitigate by actively exploring the domain. We then propose a modification of the popular DDPG RL algorithm that is tailored to motion planning domains, by exploiting the known model in the problem and the set of solved plans in the data. We show that our algorithm, dubbed DDPG-MP, significantly improves the accuracy of the learned motion planning policy. Finally, we show that given enough training data, our method can plan significantly faster on novel domains than off-the-shelf sampling-based motion planners. Results of our experiments are shown at https://youtu.be/wHQ4Y4mBRb8.
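To make the input-to-output mapping described in the abstract concrete, the following is a minimal illustrative sketch (not code from the paper) of a policy network that maps a workspace image, the current robot state, and a goal robot state to a predicted next state. The class name MotionPlanningPolicy, the layer sizes, and the default state_dim are assumptions made purely for illustration.

import torch
import torch.nn as nn

class MotionPlanningPolicy(nn.Module):
    # Illustrative sketch: maps (domain image, current state, goal state) to a next state.
    def __init__(self, state_dim=7, image_channels=1):
        super().__init__()
        # Convolutional encoder for the obstacle image of the domain.
        self.image_encoder = nn.Sequential(
            nn.Conv2d(image_channels, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Fully connected head over image features plus current and goal robot states.
        self.head = nn.Sequential(
            nn.Linear(32 + 2 * state_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, state_dim),  # predicted next robot state (or state delta)
        )

    def forward(self, image, current_state, goal_state):
        features = self.image_encoder(image)  # (batch, 32)
        x = torch.cat([features, current_state, goal_state], dim=-1)
        return self.head(x)

In a supervised setting such a network would be regressed onto states from solved plans; in the RL setting described in the abstract it would instead serve as the actor of a DDPG-style agent.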

Original language: English
Title of host publication: Robotics
Subtitle of host publication: Science and Systems XV
Editors: Antonio Bicchi, Hadas Kress-Gazit, Seth Hutchinson
Publisher: MIT Press Journals
ISBN (Print): 9780992374754
DOIs
State: Published - 2019
Event: 15th Robotics: Science and Systems, RSS 2019 - Freiburg im Breisgau, Germany
Duration: 22 Jun 2019 - 26 Jun 2019

Publication series

Name: Robotics: Science and Systems

Conference

Conference: 15th Robotics: Science and Systems, RSS 2019
Country/Territory: Germany
City: Freiburg im Breisgau
Period: 22/06/19 - 26/06/19

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Control and Systems Engineering
  • Electrical and Electronic Engineering
