
Imitation Using Perceptual-Motor Primitives

Project Description

The goal of this project is to implement a version of perceptual-motor primitives within the framework of our imitation model.  Our imitation model consists of five main subcomponents: Tracking, Attention, Learning, Classification, and Actuation.  These components are divided into three layers: Perception, Encoding, and Action.  The following figure illustrates the structure of the model:

The first layer, Perception, consists of two components, Tracking and Attention, that serve to acquire and prepare motion information for processing into primitives at the Encoding layer.  The Tracking component is responsible for extracting the motion of features over time from the perceptual inputs.  The Attention component is responsible for selecting relevant information from the perceived motion stream in order to simplify the Classification and Learning components.  The Encoding layer of the model encompasses Classification and Learning, which classify observed motions into appropriate primitives.  The Learning component serves to determine and refine the set of primitives.  The Classification component then uses the current set of primitives for movement encoding.  Observed movement is encoded into a list of segments that indicate time intervals and parameters for actuating each primitive.  The final layer, Action, consists of a single component, Actuation, which performs the imitation by actuating the list of segments provided by the Classification component.
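The layered data flow described above can be sketched as a simple pipeline. This is only an illustrative sketch; the function names, data shapes, and the trivial one-segment classifier are invented for this example and are not the project's actual implementation.

```python
# Sketch of the three-layer imitation pipeline:
# Perception (track, attend) -> Encoding (classify) -> Action (actuate).
# All names and data formats here are hypothetical.

def track(frames):
    """Perception: extract feature motion over time from perceptual input."""
    return [f["endpoints"] for f in frames]

def attend(motion):
    """Perception: select the relevant part of the motion stream
    (here, simply pass through the endpoint positions)."""
    return motion

def classify(motion, primitives):
    """Encoding: encode observed motion as a list of segments, each giving
    a primitive, a time interval, and actuation parameters.
    (Placeholder: encode everything as one segment of the first primitive.)"""
    return [(primitives[0], (0, len(motion)), {})]

def actuate(segments):
    """Action: perform the imitation by actuating each encoded segment."""
    return ["executed %s over %s" % (prim, interval)
            for prim, interval, params in segments]

primitives = ["line", "circle", "arc"]
frames = [{"endpoints": (0.0, 0.0)}, {"endpoints": (1.0, 0.0)}]
segments = classify(attend(track(frames)), primitives)
print(actuate(segments))  # -> ['executed line over (0, 2)']
```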

Based on this imitation model, we implemented a proof-of-concept system to imitate the behavior of a human performer.  The tracking data used for this implementation were obtained using a 2.5D upper-body tracking system developed by Stefan Weber.  The attention mechanism in this system simply focused on the locations of the endpoints (i.e., the hands).  In place of a learned set of primitives, a human subject performed a sequence of motions, including line, circle, and arc trajectories of the endpoints, to yield a set of movements that serves as the set of perceptual-motor primitives.  Using these primitives, we implemented a vector-quantization-based classification mechanism.  With postprocessing of the classification results, the classifier provides a desired via-point trajectory for each arm endpoint.  These trajectories are then actuated using impedance control on our 20-DOF humanoid simulation, Adonis.
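The core of a vector-quantization classifier of this kind is a codebook of prototype feature vectors, one per primitive, with each observed motion segment assigned to the nearest prototype. The sketch below illustrates only that idea: the codebook entries, the feature vectors, and the primitive names are made-up stand-ins, not the features or prototypes used in the actual system.

```python
import math

# Hypothetical codebook: each primitive is represented by a prototype
# feature vector (e.g., a fixed-length summary of an endpoint trajectory).
# The vectors below are illustrative, not measured data.
CODEBOOK = {
    "line":   [1.0, 0.0, 0.0, 0.0],
    "circle": [0.0, 1.0, 0.0, 1.0],
    "arc":    [0.0, 1.0, 0.0, 0.0],
}

def classify_segment(feature_vec):
    """Vector quantization: label an observed segment with the primitive
    whose prototype is nearest in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(CODEBOOK, key=lambda name: dist(CODEBOOK[name], feature_vec))

print(classify_segment([0.9, 0.1, 0.0, 0.1]))  # -> line
```

In the full system, postprocessing would then merge the per-segment labels into a via-point trajectory for each arm endpoint before actuation.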

  • Maja J Mataric, Odest C. Jenkins, Ajo Fod, and Victor Zordan, "Control and Imitation in Humanoids", AAAI Fall Symposium on Simulating Human Agents, North Falmouth, MA, Nov 3-5, 2000.
 For a complete list of our imitation-related publications, please look here.

  For more information about this project, contact Chad Jenkins or Maja Mataric.


This work is supported by DARPA Grant DABT63-99-1-0015 under the Mobile Autonomous Robot Software (MARS) program, a National Science Foundation Career Grant (No. 9896322) to M. Mataric, a USC All-University Pre-Doctoral Fellowship to C. Jenkins, and a Fulbright Fellowship to S. Weber. The humanoid simulator was obtained from Jessica Hodgins.

Home | Email: agents (at) | Last Update 08/05/2007