Robots have become increasingly capable of performing a variety of tasks in dynamic real-world environments, including those involving humans. Beyond competently performing the tasks required of them, service robots should also coordinate their actions with those of the people around them in order to minimize conflicting actions and provide effective assistance without relying on a strict command-and-control structure. Humans coordinate their actions across a variety of task settings through structured social interaction aimed at aligning representations and providing intentional feedback. For a robot to coordinate its actions naturally using similar modalities, it must be able to interpret and produce relevant communicative behavior as the task progresses. Within collaborative settings, defined as those in which two or more agents work together to achieve shared goals, this work aims to develop a method for constructing and generalizing models of communication dynamics during joint human-robot task performance, and to verify performance across different task contexts, populations, and role hierarchies.
This work is motivated by three main goals:
The specific context of collaboration allows the simplifying assumption that participants are working together to accomplish a shared set of goals. This enables the robot to use its own encoding of the task to evaluate the actions of others via perspective taking. To coordinate its actions effectively with those of a human collaborator, the robot must be able to accurately estimate the person's planned actions from context or from explicit communication; analogously, it must be able to convey its own planned actions clearly to people. The ability to attribute mental state to others and use it to plan and predict behavior is called Theory of Mind (ToM), and it has been used extensively to support various capabilities in autonomous robots. We propose a ToM-inspired model in which the robot maintains estimates of its own state, the states of its collaborators, and those collaborators' estimates of the robot's state. These states contain task-relevant information, including a world model and a partial task allocation, i.e., assignments of agents to sub-tasks. By attributing mental state to others in this way, we will construct a state space of possible task trajectories, each corresponding to a multi-agent task allocation. To accomplish this, the robot must first actively model its collaborators, possess a manipulable representation of the task, and be able to evaluate the environment from other perspectives. These capabilities will be combined with a reinforcement learning approach to enable the robot to detect whether its plans are aligned with those of others, to issue coordinating communication, and potentially to identify assistive opportunities over the course of a collaborative task.
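The nested state representation described above can be sketched in code. The following is a minimal, hypothetical illustration, not the actual system: all class, field, and method names are assumptions introduced here. It shows the robot holding its own state, estimates of each collaborator's state, and each collaborator's estimated model of the robot, and using the last of these to detect sub-tasks where plans appear misaligned and coordinating communication may be warranted.

```python
from dataclasses import dataclass

@dataclass
class AgentState:
    """Task-relevant mental state attributed to one agent (illustrative)."""
    world_model: dict            # facts the agent is believed to hold about the world
    task_allocation: dict        # partial allocation: sub-task name -> assigned agent

@dataclass
class ToMModel:
    """ToM-inspired nesting: the robot's state, its estimates of others,
    and others' estimated models of the robot."""
    self_state: AgentState            # the robot's own state
    collaborator_states: dict         # person -> robot's estimate of that person's state
    estimated_robot_states: dict      # person -> that person's presumed model of the robot

    def misaligned_subtasks(self, person: str) -> set:
        """Sub-tasks where the person's presumed view of the robot's plan
        disagrees with the robot's actual plan -- candidate triggers for
        issuing coordinating communication."""
        own = self.self_state.task_allocation
        theirs = self.estimated_robot_states[person].task_allocation
        return {t for t in own if theirs.get(t) != own[t]}

# Toy usage: the robot plans to fetch, but believes Alice thinks she will fetch.
robot = AgentState({}, {"fetch": "robot", "sort": "alice"})
alice_view_of_robot = AgentState({}, {"fetch": "alice", "sort": "alice"})
tom = ToMModel(robot, {"alice": AgentState({}, {})},
               {"alice": alice_view_of_robot})
conflicts = tom.misaligned_subtasks("alice")  # {"fetch"}
```

In a full system, updating these nested estimates from observed actions and explicit communication, and choosing when to communicate, would be handled by the learned policy; this sketch only fixes the shape of the state space that policy would operate over.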
The approach will be validated across multiple tasks and different users drawn from various populations, as we anticipate differing preferences in communication style. Validation will be conducted using a projector-based augmented reality system, which allows for controlled, repeatable task scenarios and monitoring.
This work is supported by National Science Foundation (NSF) grants CNS-0709296, IIS-0803565, and IIS-0713697, and ONR MURI grant N00014-09-1-1031.
Open-source development of this project uses the Robot Operating System (ROS) developed by Willow Garage.