Learning Robot Objectives from Physical Human Interaction

Abstract

When humans and robots work in close proximity, physical interaction is inevitable. Traditionally, robots treat physical interaction as a disturbance and resume their original behavior once the interaction ends. In contrast, we argue that physical human interaction is informative: it tells the robot how it should be doing its task. We formalize learning from such interactions as a dynamical system in which the task objective has parameters that are part of the hidden state, and physical human interactions are observations about these parameters. We derive an online approximation of the robot's optimal policy in this system and test it in a user study. The results suggest that learning from physical interaction leads to better robot task performance with less human effort.
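
As an illustration only, the sketch below shows one way an online, feature-based objective update from a physical correction could look, in the spirit of the formulation the abstract describes. The feature definitions, the deformation model, the step size, and the cost convention cost(ξ) = θᵀΦ(ξ) are assumptions made for this example and are not taken from the paper.

```python
import numpy as np

# Hypothetical setup (not the paper's code): a trajectory is a (T, d) array of
# waypoints, and the robot's cost is assumed linear in hand-designed features,
# cost(xi) = theta @ Phi(xi).

OBSTACLE = np.array([0.5, 0.5])

def features(xi):
    """Toy feature vector: [total path length, mean proximity to an obstacle]."""
    length = np.sum(np.linalg.norm(np.diff(xi, axis=0), axis=1))
    proximity = np.mean(np.exp(-np.linalg.norm(xi - OBSTACLE, axis=1)))
    return np.array([length, proximity])

def deform(xi, t, u_h, smoothing=5):
    """Propagate a human push u_h applied at waypoint t into a locally smooth
    deformation of the following waypoints (a simple stand-in for an
    impedance-style deformation model)."""
    xi_d = xi.copy()
    window = np.hanning(2 * smoothing + 1)[smoothing:]  # decaying weights
    for k, w in enumerate(window):
        if t + k < len(xi_d):
            xi_d[t + k] += w * u_h
    return xi_d

def update_theta(theta, xi_r, xi_d, alpha=0.1):
    """Online objective update: shift the weights so the human-deformed
    trajectory becomes cheaper than the robot's original plan."""
    return theta + alpha * (features(xi_r) - features(xi_d))

# Usage: straight-line plan, a sideways push halfway through, one update.
xi_r = np.linspace([0.0, 0.0], [1.0, 1.0], 20)
theta = np.array([1.0, 0.0])          # initially ignores the obstacle feature
xi_d = deform(xi_r, t=10, u_h=np.array([0.0, -0.2]))
theta = update_theta(theta, xi_r, xi_d)
print("updated weights:", theta)
```

In this toy example the push moves the plan away from the obstacle, so the update increases the weight on the proximity feature; repeated corrections would keep refining the weights online rather than being rejected as disturbances.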

Type
Journal article
Citation

Bajcsy, Andrea, Losey, Dylan P., O’Malley, Marcia K., et al. "Learning Robot Objectives from Physical Human Interaction." Proceedings of Machine Learning Research 78, PMLR (2017): 217-226. https://hdl.handle.net/1911/102348.

Rights
Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.