Social Primitives for a Situated and Embodied Interactive Agent


Overview

In the past decade, significant effort has been devoted to developing robots that are interactive and expressive; however, these robots vary widely in size, shape, and functionality, so the physical characteristics of each system dictate its expressive capabilities. The characteristics of the human user also influence the way the robot interacts; more specifically, the sensing and processing (i.e., interpretation) abilities of the user determine the channels (modalities) and dynamics of communication that the robot will use to transmit information. We consider the capabilities of both the robot and the user in producing multi-modal expressive behaviors that accommodate the specific needs and preferences of the user in an interaction.
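As a rough illustration of this idea, the sketch below treats the usable communication channels in a dyad as the overlap between what the robot can express and what the user can sense and interpret. It is written in Python with hypothetical names (Modality, select_modalities) and is not the project's actual software.

# Illustrative sketch only: modality selection as the intersection of the
# robot's output channels and the user's sensing/interpretation abilities.
from enum import Enum, auto

class Modality(Enum):
    SPEECH = auto()
    GAZE = auto()
    GESTURE = auto()
    PROXEMICS = auto()
    FACIAL_EXPRESSION = auto()

def select_modalities(robot_outputs: set, user_senses: set) -> set:
    """Return the channels usable in this dyad: expressible by the robot
    and perceivable (interpretable) by the user."""
    return robot_outputs & user_senses

# Example: a robot without an articulated face interacting with a user who
# has reduced hearing; speech drops out, gesture and proxemics remain.
robot = {Modality.SPEECH, Modality.GESTURE, Modality.PROXEMICS}
user = {Modality.GAZE, Modality.GESTURE, Modality.PROXEMICS}
print(select_modalities(robot, user))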

Approach

Through analysis of dyadic and group-level social interaction, we aim to identify what we call “social primitives” -- atomic or micro behaviors that serve as building blocks for an interaction. Our lab already has a rich body of work on extracting human motor primitives (i.e., of an individual), and we have also studied the movement patterns of large groups of people; studies of dyadic and triadic interactions are currently under way. This research seeks to unify this corpus of data to model a spectrum of social behavior, from which specific features and primitives that guide a particular interaction can be derived.
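The sketch below shows, under our own simplifying assumptions, one way such primitives might be represented and composed: each primitive is an atomic, parameterized micro-behavior, and a higher-level interaction behavior is an ordered composition of primitives. The names (SocialPrimitive, compose) are hypothetical and not drawn from the lab's codebase.

# Illustrative sketch only: social primitives as composable micro-behaviors.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SocialPrimitive:
    name: str                  # e.g., "mutual_gaze", "back_channel_nod"
    duration_s: float          # nominal duration of the micro-behavior
    execute: Callable[[], None]

def compose(primitives: List[SocialPrimitive]) -> Callable[[], None]:
    """A composite behavior is the ordered execution of its primitives."""
    def run():
        for p in primitives:
            p.execute()
    return run

greet = compose([
    SocialPrimitive("orient_to_user", 1.0, lambda: print("orienting")),
    SocialPrimitive("mutual_gaze", 0.8, lambda: print("establishing gaze")),
    SocialPrimitive("verbal_greeting", 1.5, lambda: print("saying hello")),
])
greet()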

We apply a behavior-based approach to this social context, in which the actions of the robot are dictated by the dynamics of its interaction with the user (see related work on basis behaviors). This premise distinguishes our work from similar efforts in human-robot interaction, which typically rely on complex modeling and planning techniques to govern an interaction. The robot is both physically embodied and situated in a social scenario, serving as an active, present entity that can guide an expressive dialogue and react to changes in user modality (in both sensing and acting).
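As a minimal sketch of such a behavior-based loop (illustrative only; InteractionState, proxemic_behavior, and attention_behavior are hypothetical names), each control cycle maps the sensed interaction state directly to reactive actions, with no deliberative planner in the loop.

# Illustrative sketch only: simple condition-action behaviors that react to
# the current interaction state each control cycle.
from dataclasses import dataclass

@dataclass
class InteractionState:
    user_distance_m: float
    user_gazing_at_robot: bool
    user_speaking: bool

def proxemic_behavior(state: InteractionState) -> str:
    # Maintain a comfortable conversational distance.
    if state.user_distance_m > 1.5:
        return "approach"
    if state.user_distance_m < 0.8:
        return "back_up"
    return "hold_position"

def attention_behavior(state: InteractionState) -> str:
    # React to the user's gaze and speech rather than follow a scripted plan.
    if state.user_speaking:
        return "face_user_and_nod"
    if state.user_gazing_at_robot:
        return "establish_mutual_gaze"
    return "idle_glance"

# One control cycle: sensed state in, reactive actions out.
state = InteractionState(user_distance_m=2.0, user_gazing_at_robot=True,
                         user_speaking=False)
print(proxemic_behavior(state), attention_behavior(state))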

This work will initially be validated with typically developing user groups, and then extended to special-needs populations, such as post-stroke rehabilitation patients and children with autism, in an effort to improve or optimize human task performance.

Contacts

Ross Mead