How Much Do Unstated Problem Constraints Limit Deep Robotic Reinforcement Learning?

Date
2019
Abstract

Deep Reinforcement Learning is a promising paradigm for robotic control that has been shown to be capable of learning policies for high-dimensional, continuous control of unmodeled systems. However, Robotic Reinforcement Learning currently lacks clearly defined benchmark tasks, which makes it difficult for researchers to reproduce and compare against prior work. "Reacher" tasks, which are fundamental to robotic manipulation, are commonly used as benchmarks, but the lack of a formal specification elides details that are crucial to replication. In this paper we present a novel empirical analysis which shows that the unstated spatial constraints in commonly used implementations of Reacher tasks make it dramatically easier to learn a successful control policy with Deep Deterministic Policy Gradients (DDPG), a state-of-the-art Deep RL algorithm. Our analysis suggests that less constrained Reacher tasks are significantly more difficult to learn, and hence that existing de facto benchmarks are not representative of the difficulty of general robotic manipulation.

Type
Technical report
Citation

Lewis, W. Cannon II, Moll, Mark, and Kavraki, Lydia E. "How Much Do Unstated Problem Constraints Limit Deep Robotic Reinforcement Learning?" (2019). https://doi.org/10.25611/az5z-xt37.

Rights
You are granted permission for the noncommercial reproduction, distribution, display, and performance of this technical report in any format, but this permission is only for a period of forty-five (45) days from the most recent time that you verified that this technical report is still available from the Computer Science Department of Rice University under terms that include this permission. All other rights are reserved by the author(s).