Our goal is to extend a robot's model of interaction with humans so that it can induce changes in a human's behavior and also express its intentions in a way that humans can easily understand. The challenge we address is enabling this interaction without an explicitly shared vocabulary between the robot and the human.
One important advantage of a body-language technique is that it is not restricted to robots with a humanoid body or face: the approach does not require structural similarity between the interacting agents to achieve successful interaction. Even if there is no exact mapping between a mobile robot's physical characteristics and those of a human user, the robot can still convey a message to the human. We rely on implicit interaction, achieved by designing a subset of body movements and behaviors for the robot that humans already know and understand through common sense.
The strategy we apply is as follows: a robot that fails to execute some action searches for a human, attempts to induce the human to come along to the place where the problem occurred, and then demonstrates its intentions in hopes of obtaining help. The process of getting help is initiated when the robot abandons a failed task, wanders until it finds a human, and then follows that human continuously until he or she stops (if not stopped or seated already). At that point, the robot backs up and approaches again (much like a dog) in hopes of enticing the human to follow it. From there, the robot turns around, leads the human to the point where its task execution failed, and then retries the same actions in front of its human helper.
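The help-seeking sequence above can be viewed as a simple finite-state machine driven by perceptual events. The sketch below is a minimal illustration of that idea, not the system's actual implementation; the state names, event strings, and transition table are all hypothetical.

```python
from enum import Enum, auto

class State(Enum):
    EXECUTE_TASK = auto()   # normal task execution
    WANDER = auto()         # task abandoned; searching for a human
    FOLLOW_HUMAN = auto()   # human found; following until he or she stops
    ENTICE = auto()         # backing up and re-approaching, dog-like
    LEAD = auto()           # human is following; leading to the failure site
    RETRY_TASK = auto()     # re-attempting the failed actions for the helper

# Hypothetical transition table mapping (state, event) -> next state.
TRANSITIONS = {
    (State.EXECUTE_TASK, "task_failed"): State.WANDER,
    (State.WANDER, "human_found"): State.FOLLOW_HUMAN,
    (State.FOLLOW_HUMAN, "human_stopped"): State.ENTICE,
    (State.ENTICE, "human_following"): State.LEAD,
    (State.LEAD, "failure_site_reached"): State.RETRY_TASK,
}

def next_state(state: State, event: str) -> State:
    """Advance the state machine; unrecognized events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

# Walk the full sequence described in the text.
events = ["task_failed", "human_found", "human_stopped",
          "human_following", "failure_site_reached"]
state = State.EXECUTE_TASK
for event in events:
    state = next_state(state, event)
print(state)  # State.RETRY_TASK
```

Keeping the behavior as an explicit event-driven table makes the dog-like interaction loop easy to inspect and extend, e.g. with a transition back to WANDER if the human walks away during ENTICE.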
This work is supported by DARPA Grant DABT63-99-1-0015 under the Mobile Autonomous Robot Software (MARS) program, the National Science Foundation Grant No. 9896322, and by the ONR Defense University Research Instrumentation Program Grant.