Browsing by Author "Losey, Dylan P."
Now showing 1 - 8 of 8
Item
A Review of Intent Detection, Arbitration, and Communication Aspects of Shared Control for Physical Human–Robot Interaction (ASME, 2018)
Losey, Dylan P.; McDonald, Craig G.; Battaglia, Edoardo; O’Malley, Marcia K.
As robotic devices are applied to problems beyond traditional manufacturing and industrial settings, we find that interaction between robots and humans, especially physical interaction, has become a fast-developing field. Consider the application of robotics in healthcare, where we find telerobotic devices in the operating room facilitating dexterous surgical procedures, exoskeletons in the rehabilitation domain as walking aids and upper-limb movement assist devices, and even robotic limbs that are physically integrated with amputees who seek to restore their independence and mobility. In each of these scenarios, the physical coupling between human and robot, often termed physical human–robot interaction (pHRI), facilitates new human performance capabilities and creates an opportunity to explore the sharing of task execution and control between humans and robots. In this review, we provide a unifying view of human and robot sharing task execution in scenarios where collaboration and cooperation between the two entities are necessary, and where the physical coupling of human and robot is a vital aspect. We define three key themes that emerge in these shared control scenarios, namely, intent detection, arbitration, and feedback. First, we explore methods for how the coupled pHRI system can detect what the human is trying to do, and how the physical coupling itself can be leveraged to detect intent. Second, once the human intent is known, we explore techniques for sharing and modulating control of the coupled system between robot and human operator. Finally, we survey methods for informing the human operator of the state of the coupled system, or the characteristics of the environment with which the pHRI system is interacting. At the conclusion of the survey, we present two case studies that exemplify shared control in pHRI systems, and specifically highlight the approaches used for the three key themes of intent detection, arbitration, and feedback for applications of upper-limb robotic rehabilitation and haptic feedback from a robotic prosthesis for the upper limb.

Item
Effects of discretization on the K-width of series elastic actuators (IEEE, 2017)
Losey, Dylan P.; O’Malley, Marcia K.
Rigid haptic devices enable humans to physically interact with virtual environments, and the range of impedances that can be safely rendered using these rigid devices is quantified by the Z-width metric. Series elastic actuators (SEAs) similarly modulate the impedance felt by the human operator when interacting with a robotic device, and, in particular, the robot's perceived stiffness can be controlled by changing the elastic element's equilibrium position. In this paper, we explore the K-width of SEAs, while specifically focusing on how discretization inherent in the computer-control architecture affects the system's passivity. We first propose a hybrid model for a single degree-of-freedom (DoF) SEA based on prior hybrid models for rigid haptic systems. Next, we derive a closed-form bound on the K-width of SEAs that is a generalization of known constraints for both rigid haptic systems and continuous-time SEA models. This bound is first derived under a continuous-time approximation, and is then numerically supported with discrete-time analysis. Finally, experimental results validate our finding that large pure masses are the most destabilizing operator in human–SEA interactions, and demonstrate the accuracy of our theoretical K-width bound.
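A note on the K-width result above: the toy script below is not the paper's SEA model. It simulates the simpler, well-known rigid-device case that the paper's bound generalizes, namely a mass with physical damping rendering a sampled, zero-order-held virtual spring, to show how the sampling period limits the stiffness that can be rendered without instability. The parameters, the simulation setup, and the quoted 2b/T rule of thumb are illustrative assumptions, not the paper's derivation.

```python
import numpy as np

def peak_displacement(K, m=1.0, b=2.0, T=1e-3, dt=1e-5, t_end=2.0):
    """Simulate a 1-DoF device (mass m [kg], physical damping b [N*s/m])
    rendering a virtual spring of stiffness K [N/m] that the controller
    samples and zero-order holds every T seconds. Returns max |x| seen."""
    x, v = 0.01, 0.0                 # start slightly displaced, at rest
    f_hold = -K * x                  # spring force held between samples
    next_sample = T
    t, peak = 0.0, abs(x)
    while t < t_end:
        a = (f_hold - b * v) / m     # continuous-time plant dynamics
        v += a * dt
        x += v * dt
        t += dt
        if t >= next_sample:         # discrete controller updates the force
            f_hold = -K * x
            next_sample += T
        peak = max(peak, abs(x))
    return peak

# A classic passivity-style limit for this setup is roughly K < 2*b/T = 4000 N/m:
# stiffnesses well below it ring down, stiffnesses well above it grow.
for K in [1000.0, 2000.0, 8000.0, 16000.0]:
    print(f"K = {K:7.0f} N/m -> peak |x| = {peak_displacement(K):.4f} m")
```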
Item
Enabling Robots to Infer How End-Users Teach and Learn Through Human-Robot Interaction (IEEE, 2019)
Losey, Dylan P.; O’Malley, Marcia K.
During human-robot interaction, we want the robot to understand us, and we want to intuitively understand the robot. In order to communicate with and understand the robot, we can leverage interactions, where the human and robot observe each other's behavior. However, it is not always clear how the human and robot should interpret these actions: a given interaction might mean several different things. Within today's state of the art, the robot assigns a single interaction strategy to the human, and learns from or teaches the human according to this fixed strategy. Instead, we here recognize that different users interact in different ways, and so one size does not fit all. Therefore, we argue that the robot should maintain a distribution over the possible human interaction strategies, and then infer how each individual end-user interacts during the task. We formally define learning and teaching when the robot is uncertain about the human's interaction strategy, and derive solutions to both problems using Bayesian inference. In examples and a benchmark simulation, we show that our personalized approach outperforms standard methods that maintain a fixed interaction strategy.
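The belief-maintenance idea in the item above can be sketched in a few lines. The strategy set, the scalar corrections, and the Gaussian observation model below are made-up stand-ins rather than the paper's formulation; the sketch only illustrates the Bayesian update over interaction-strategy hypotheses.

```python
import numpy as np

# Hypothetical interaction strategies, each summarized here by how noisily the
# end-user's corrections track what they actually intend (std. dev., arbitrary units).
strategies = {"precise teacher": 0.05, "noisy teacher": 0.3, "passive user": 1.0}
belief = {name: 1.0 / len(strategies) for name in strategies}   # uniform prior

def likelihood(observed, intended, noise_std):
    """Toy Gaussian observation model: plausibility of the observed correction
    if the human intended `intended` under a strategy with this noise level."""
    err = observed - intended
    return np.exp(-0.5 * (err / noise_std) ** 2) / (noise_std * np.sqrt(2 * np.pi))

def bayes_update(belief, observed, intended):
    """Re-weight each strategy hypothesis by how well it explains the interaction."""
    posterior = {name: belief[name] * likelihood(observed, intended, std)
                 for name, std in strategies.items()}
    total = sum(posterior.values())
    return {name: p / total for name, p in posterior.items()}

# usage sketch: three small corrections that closely match the inferred intent
for obs in [0.02, -0.04, 0.03]:
    belief = bayes_update(belief, observed=obs, intended=0.0)
print(belief)   # probability mass concentrates on the "precise teacher" hypothesis
```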
Item
Improving short-term retention after robotic training by leveraging fixed-gain controllers (Sage, 2019)
Losey, Dylan P.; Blumenschein, Laura H.; Clark, Janelle P.; O’Malley, Marcia K.
Introduction: When developing control strategies for robotic rehabilitation, it is important that end-users who train with those strategies retain what they learn. Within the current state of the art, however, it remains unclear what types of robotic controllers are best suited for promoting retention. In this work, we experimentally compare short-term retention in able-bodied end-users after training with two common types of robotic control strategies: fixed- and variable-gain controllers. Methods: Our approach is based on recent motor learning research, where reward signals are employed to reinforce the learning process. We extend this approach to include robotic controllers, so that participants are trained with a robotic control strategy and auditory reward-based reinforcement on tasks of different difficulty. We then explore retention after the robotic feedback is removed. Results: Overall, our results indicate that fixed-gain control strategies stabilize able-bodied users' motor adaptation better than either a no-controller baseline or a variable-gain strategy. When breaking these results down by task difficulty, we find that assistive and resistive fixed-gain controllers lead to better short-term retention on less challenging tasks but have opposite effects on the learning and forgetting rates. Conclusions: This suggests that we can improve short-term retention after robotic training with consistent controllers that match the task difficulty.

Item
Learning Robot Objectives from Physical Human Interaction (PMLR, 2017)
Bajcsy, Andrea; Losey, Dylan P.; O’Malley, Marcia K.; Dragan, Anca D.
When humans and robots work in close proximity, physical interaction is inevitable. Traditionally, robots treat physical interaction as a disturbance, and resume their original behavior after the interaction ends. In contrast, we argue that physical human interaction is informative: it conveys useful information about how the robot should be doing its task. We formalize learning from such interactions as a dynamical system in which the task objective has parameters that are part of the hidden state, and physical human interactions are observations about these parameters. We derive an online approximation of the robot's optimal policy in this system, and test it in a user study. The results suggest that learning from physical interaction leads to better robot task performance with less human effort.

Item
Maintaining subject engagement during robotic rehabilitation with a minimal assist-as-needed (mAAN) controller (IEEE, 2017)
Pehlivan, Ali Utku; Losey, Dylan P.; Rose, Chad G.; O’Malley, Marcia K.
One challenge of robotic rehabilitation interventions is devising ways to encourage and maintain high levels of subject involvement over long-duration therapy sessions. Assist-as-needed controllers have been proposed which modulate robot intervention in movements based on measurements of subject involvement. This paper presents a minimal assist-as-needed (mAAN) controller, which modulates allowable error bounds and robot intervention based on sensorless force measurement accomplished through a nonlinear disturbance observer. While similar algorithms have been validated using healthy subjects, this paper presents a validation of the proposed mAAN control algorithm's ability to encourage user involvement with an impaired individual. User involvement is inferred from muscle activation, measured via surface electromyography (EMG). Experimental validation shows increased EMG muscle activation when using the proposed mAAN algorithm compared to non-adaptive algorithms.

Item
Responding to Physical Human-Robot Interaction: Theory and Approximations (2018-11-27)
Losey, Dylan P.; O’Malley, Marcia K.
This thesis explores how robots should respond to physical human interactions. From surgical devices to assistive arms, robots are becoming an important aspect of our everyday lives. Unlike earlier robots, which were developed for carefully regulated factory settings, today's robots must work alongside human end-users, and even facilitate physical interactions between the robot and the human. Within the current state of the art, the human's intentionally applied forces are treated as unwanted disturbances that the robot should avoid, reject, or ignore: once the human stops interacting, these robots simply return to their original behavior. By contrast, we recognize that physical interactions are really an implicit form of communication: the human is applying forces and torques to correct the robot's behavior, and to teach the robot how it should complete its task. Within this work, we demonstrate that optimally responding to physical human interactions results in robots that learn from these corrections and change their underlying behavior. We first formalize physical human-robot interaction as a partially observable dynamical system, where the human's applied forces and torques are observations about the objective function that the robot should be optimizing, and, more specifically, about the human's preferences for how the robot should behave. Solving this system defines the right way for a robot to respond to physical corrections. We derive three approximate solutions for real-time implementation on robotic hardware: these approximations assume increasing amounts of structure, and consider cases where the robot is given (a) an arbitrary initial trajectory, (b) a parameterized initial trajectory, or (c) the task-related features. We next extend our approximations to account for noisy and imperfect end-users, who may accidentally correct the robot more or less than they intended. We enable robots to reason over which aspects of the human's interaction were intentional, and which of the human's preferences are still unclear. Our overall approach to physical human-robot interaction provides a theoretical basis for robots that both realize why the human is interacting and personalize their behavior in response to that end-user. The feasibility of our theoretical contributions is demonstrated through simulations and user studies.
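The feature-based approximation described in the thesis above (and in the "Learning Robot Objectives from Physical Human Interaction" item earlier) can be illustrated with a minimal online update: the human's push changes the robot's trajectory, and the objective weights shift in proportion to the resulting change in task features. The features, step size beta, and update form below are illustrative assumptions rather than the exact derivation.

```python
import numpy as np

def task_features(traj):
    """Toy features of a trajectory (n_waypoints x 3): mean end-effector height
    and total path length. A real system would use task-relevant features."""
    height = traj[:, 2].mean()
    length = np.linalg.norm(np.diff(traj, axis=0), axis=1).sum()
    return np.array([height, length])

def update_weights(theta, traj, traj_corrected, beta=0.5):
    """Shift the objective weights along the feature change that the human's
    physical correction induced in the robot's trajectory."""
    return theta + beta * (task_features(traj_corrected) - task_features(traj))

# usage sketch: the robot plans a straight, high path; the human pushes it down
traj = np.linspace([0.0, 0.0, 0.5], [1.0, 0.0, 0.5], 20)   # 20 waypoints in 3-D
traj_corrected = traj.copy()
traj_corrected[8:12, 2] -= 0.10        # the push lowers a few middle waypoints
theta = np.zeros(2)                     # weights on [height, path length]
theta = update_weights(theta, traj, traj_corrected)
print(theta)                            # the height weight shifts with the push
```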
Item
Trajectory Deformations From Physical Human–Robot Interaction (IEEE, 2018)
Losey, Dylan P.; O’Malley, Marcia K.
Robots are finding new applications where physical interaction with a human is necessary, such as manufacturing, healthcare, and social tasks. Accordingly, the field of physical human-robot interaction (pHRI) has leveraged impedance control approaches, which support compliant interactions between human and robot. However, a limitation of traditional impedance control is that, despite provisions for the human to modify the robot's current trajectory, the human cannot affect the robot's future desired trajectory through pHRI. In this paper, we present an algorithm for physically interactive trajectory deformations which, when combined with impedance control, allows the human to modulate both the actual and desired trajectories of the robot. Unlike related works, our method explicitly deforms the future desired trajectory based on forces applied during pHRI, but does not require constant human guidance. We present our approach and verify that this method is compatible with traditional impedance control. Next, we use constrained optimization to derive the deformation shape. Finally, we describe an algorithm for real-time implementation, and perform simulations to test the arbitration parameters. Experimental results demonstrate a reduction in the human's effort and an improvement in movement quality when compared to pHRI with impedance control alone.
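As a rough illustration of deriving a deformation shape by constrained optimization, the sketch below deforms a 1-D segment of waypoint offsets: the contact waypoint is displaced in proportion to the push, the far end is pinned to the original trajectory, and the shape in between minimizes a squared second-difference smoothness cost. The scaling mu, segment length, smoothness metric, and constraints are assumptions for illustration and do not reproduce the paper's exact formulation.

```python
import numpy as np

def deform_segment(n, push, mu=0.05):
    """Deform the next n waypoint offsets of a 1-D desired trajectory after a
    human push: the contact waypoint moves by mu*push, the last two waypoints
    stay on the original trajectory, and the shape in between minimizes a
    squared second-difference (acceleration-like) smoothness cost."""
    # Smoothness metric over the n waypoint offsets.
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]
    R = D2.T @ D2
    # Equality constraints: prescribed offset at the contact waypoint,
    # zero offset at the two final waypoints.
    C = np.zeros((3, n))
    C[0, 0], C[1, -1], C[2, -2] = 1.0, 1.0, 1.0
    b = np.array([mu * push, 0.0, 0.0])
    # Solve the equality-constrained quadratic program via its KKT system.
    KKT = np.block([[R, C.T], [C, np.zeros((3, 3))]])
    rhs = np.concatenate([np.zeros(n), b])
    return np.linalg.solve(KKT, rhs)[:n]

# usage sketch: a 5 N push deforms the next 12 waypoints, then dies out
offsets = deform_segment(n=12, push=5.0)
print(np.round(offsets, 3))    # begins at mu*push and smoothly returns to zero
```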