As robots are expected to perform in increasingly complex domains, the ability to automatically adapt and learn has become more critical. Interaction with people, whether with a single user or with many different individuals, is an especially challenging learning problem because it involves a "moving target": people's mood, health state, task expertise, and other factors not necessarily accessible to the robot change at various time-scales and can influence behavior in unexpected ways.
Human-robot interaction requires a unique perspective on the learning process. Traditional machine learning techniques typically require extensive data, but people rarely have the patience to provide ample training examples, nor do they always act consistently. People's behavior also changes over time, on multiple timescales, and people require a natural interface. Learning and adaptation in HRI must therefore balance the power and robustness of algorithms against the constraints of having people in the loop, adapting quickly enough for human tolerance and with limited data. Our work on learning is driven by several motivations:
More details regarding the experimental test-beds for our imitation work are available here.
This work has been supported by DARPA Grant 5-39509-A under the Mobile Autonomous Robot Software (MARS-2020) program, National Science Foundation Grant No. 9896322, and DARPA Grant No. DABT63-99-1-0015.
Support is also provided by Intelligent Automation, Inc. and the Office of the Secretary of Defense grant W81XWH-09-C-0134.