Learning of Perceptuo-Motor Primitives
The goal of this project is to study learning of perceptuo-motor
primitives, a key part of our
imitation model. We use human movement data as the basis for
extracting the primitives, and then use those primitives for
reconstructing various observed movements on a humanoid robot. The
following figure is a visualization of the proposed model.
The perceptual component of the system provides "raw"
motion data. The data are gathered through one of three sources:
a synthetic motion generator that I have developed, a vision system
developed by Stefan Weber for extracting stick figure motion, or
motion capture data from
psychophysical experiments. The learning takes place in two
components of the model: the motion preprocessor and the primitive set
determination modules. The former performs time scaling,
computation of feature centers, introduction of invariances to
translation, scale, and rotation, and segmentation in time. The
last is the most challenging of the preprocessing tasks, and we are
exploring different approaches to address it.
In the context of this work, the primitive set is
the basis set of behaviors that is sufficient for generating a
large repertoire of movements, through composition operators of
sequencing and superposition. We call these generative
primitives to distinguish them from evolutionary primitives,
such as spinal central pattern generators.
The primary focus
of this project is primitive set determination, i.e.,
the process of automatically extracting a set of primitives from the
movement data. To address this problem, we are focusing on the
issue of appropriately representing the input motion so as to
facilitate primitive extraction. The encoding and decoding of
this representation are handled by the Motion Preprocessor and the
Motor Controller, respectively.
The Motion Preprocessing module is responsible for converting "raw"
perceptual data (given knowledge about kinematic substructures) into
the intermediate form. This component performs two major
operations: segmentation and normalization. Motion segmentation
is a difficult problem, and will not be explicitly addressed within
the scope of this project. Instead, the incoming motion stream
will be treated as an atomic motion. The normalization procedure
is performed in order to apply useful invariances to the motion
data. But which invariances are most useful? So far, representing
the motion of an end-point with invariance to absolute position
(translation) and scale appears to provide good normalization
results. Rotational invariance can be added by specifying more
information about a significant kinematic substructure.
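The translation and scale normalization just described can be sketched as follows. This is a minimal illustration in Python with NumPy; the function name and the choice of maximum distance from the mean as the scale factor are our illustrative assumptions, not a fixed part of the model.

```python
import numpy as np

def normalize_endpoint(traj):
    """Translation- and scale-normalize an end-point trajectory.

    traj: (T, 3) array of end-point positions over time.
    Subtracting the mean removes absolute position; dividing by the
    maximum distance from the mean removes scale.
    """
    traj = np.asarray(traj, dtype=float)
    centered = traj - traj.mean(axis=0)        # translation invariance
    radius = np.linalg.norm(centered, axis=1).max()
    if radius > 0:
        centered /= radius                     # scale invariance
    return centered
```

After this step, two trajectories that differ only by a shift or a uniform rescaling map to the same normalized curve.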
Our contention is that a kinematic substructure in the intermediate form
should be specified by a base, a feature center (feature centroid),
and an end-point. Considering a human arm, the base and
end-point could be represented by the shoulder and middle fingertip
locations, respectively. The feature center would be the mean
location of all of the features of the arm. The feature
locations could consist of the shoulder, elbow, wrist, and middle
fingertip. The intuition for using this representation is that
the main purpose of the motion for the arm substructure is to generate the
end-point trajectory, but information about the intermediate joints
cannot be discarded. The feature center provides a meaningful
estimation of the configuration of the intermediate features and where
a behavior is happening with respect to a larger coordinate
frame. Rotational invariance is now added to the motion data by
constructing a new coordinate frame at the substructure base with the
vector from the base to the feature center serving as the system's
z-axis and transforming the end-point information into this
system. The motion data are now the end-point in the rotationally
normalized system (what the behavior is) and the feature center in
substructure coordinates (where the behavior is being performed).
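The frame construction just described can be sketched as follows, for a single time sample. This is an illustrative Python/NumPy sketch: the method for choosing the x- and y-axes (via an arbitrary helper vector) is our assumption, since only the z-axis is determined by the base-to-feature-center vector.

```python
import numpy as np

def to_substructure_frame(base, feature_center, endpoint):
    """Express an end-point position in a frame whose z-axis points
    from the substructure base toward the feature center."""
    z = np.asarray(feature_center, dtype=float) - np.asarray(base, dtype=float)
    z /= np.linalg.norm(z)
    # Pick any vector not parallel to z to complete an orthonormal basis.
    helper = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(helper, z)) > 0.9:
        helper = np.array([0.0, 1.0, 0.0])
    x = np.cross(helper, z)
    x /= np.linalg.norm(x)
    y = np.cross(z, x)
    R = np.vstack([x, y, z])                   # rows are the new axes
    return R @ (np.asarray(endpoint, dtype=float) - np.asarray(base, dtype=float))
```

Applying this per time sample yields the rotationally normalized end-point trajectory; note that invariance about the z-axis itself depends on choosing the helper vector consistently.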
The remaining normalization issue to be addressed in this component
is that of motion duration. In order to handle movements with
varying numbers of samples, we have made some initial attempts using
wavelet decompositions, primarily with the simple Haar wavelet.
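One way such a Haar-based duration normalization might look is sketched below: pad the signal to a power-of-two length, decompose, and keep a fixed number of the coarsest coefficients so that movements of different durations yield descriptions of the same length. The specific choices here (edge padding, keeping the coarsest coefficients first) are illustrative assumptions.

```python
import numpy as np

def haar_coefficients(signal, n_keep=16):
    """Map a 1-D motion signal of arbitrary length to a fixed-length
    Haar wavelet description (coarsest coefficients first)."""
    x = np.asarray(signal, dtype=float)
    n = 1 << (len(x) - 1).bit_length()         # next power of two
    x = np.pad(x, (0, n - len(x)), mode="edge")
    coeffs = []
    while len(x) > 1:
        avg = (x[0::2] + x[1::2]) / np.sqrt(2)   # running averages
        diff = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail coefficients
        coeffs = list(diff) + coeffs             # coarser details go first
        x = avg
    coeffs = list(x) + coeffs                    # overall average first
    return np.array(coeffs[:n_keep])
```

Each joint or end-point coordinate would be transformed independently, and the truncated coefficient vectors then serve as the fixed-length input to clustering.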
Primitive Set Determination
This component is responsible for clustering various incoming motion
segments into meaningful classes of behaviors. Two main
operations take place: clustering and stereotyping. From the
preprocessing procedure, various motion segments should cluster into
meaningful classes of behaviors. The motion data are clustered
based primarily on the end-point information, with the feature center
being clustered in a supplementary space. The clustering
procedure may be as simple as one of the heuristic methods, but all of
these methods require a known number of classes or some other complex
parameter. Once the clusters are formed, they must be
stereotyped (generalized) in order to generate appropriate
parameterizations. From these stereotypical descriptions, the
basis set of primitives is formed.
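As a concrete illustration of the simplest case mentioned above, here is a minimal k-means sketch in Python/NumPy: segments are clustered on their fixed-length feature vectors, and each cluster mean serves as the stereotyped (generalized) description. This is not the lab's actual implementation, and it exhibits exactly the limitation noted: the number of classes k must be chosen in advance.

```python
import numpy as np

def kmeans_stereotypes(features, k, iters=50, seed=0):
    """Cluster motion feature vectors with k-means and return the
    cluster means as stereotyped primitives.

    features: (N, D) array, one row per preprocessed motion segment.
    k: number of behavior classes (must be known in advance).
    """
    features = np.asarray(features, dtype=float)
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        # Assign each segment to its nearest stereotype.
        d = np.linalg.norm(features[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        # Re-estimate each stereotype as the mean of its members.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return centers, labels
```

The end-point features would dominate this distance, with the feature-center information clustered in a supplementary space as described above.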
We are currently exploring different approaches to forming motor
primitives. One is based on substructure motion; it uses
substructure information to compute desired trajectories for each
joint, which could then be actuated by an appropriate form of PID
control.
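A textbook PID loop of the kind referred to here can be sketched as follows; the gains and the simple joint model in the usage note are illustrative assumptions, not values from the actual humanoid.

```python
class PID:
    """Minimal per-joint PID controller tracking a desired trajectory."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, desired, actual):
        """Return a command driving `actual` toward `desired`."""
        error = desired - actual
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)
```

At each control tick, the desired joint position comes from the trajectory computed from the substructure information, and `step` produces the actuation command.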
For a video of the imitation system in action, using a set of
primitives hand-selected from movement data, look here.
For a method of
automatically extracting primitives from joint angle data, look here:
- Odest Chadwicke Jenkins and Maja J Matarić, "Deriving Action and Behavior Primitives from Human Motion Data", Proceedings, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS-2002), pages 2551-2556, Lausanne, Switzerland, 2002. [PS]
- Odest Chadwicke Jenkins, Maja J Matarić, and Stefan Weber, "Primitive-Based Movement Classification for Humanoid Imitation", Proceedings, First IEEE-RAS International Conference on Humanoid Robotics (Humanoids-2000), MIT, Cambridge, MA, Sep 7-8, 2000.
- Ajo Fod, Maja J Matarić, and Odest Chadwicke Jenkins, "Automated Derivation of Primitives for Movement Classification", Proceedings, First IEEE-RAS International Conference on Humanoid Robotics (Humanoids-2000), MIT, Cambridge, MA, Sep 7-8, 2000.
- Stefan Weber, Chad Jenkins, and Maja J Matarić, "Imitation Using Perceptual and Motor Primitives", Proceedings, Autonomous Agents 2000, Barcelona, Spain, June 3-7, 2000, pages 136-137.
- Maja J Matarić, Odest C. Jenkins, Ajo Fod, and Victor Zordan, "Control and Imitation in Humanoids", AAAI Fall Symposium on Simulating Human Agents, North Falmouth, MA, Nov 3-5, 2000.
This work is supported by DARPA Grant DABT63-99-1-0015 under
the Mobile Autonomous Robot Software (MARS) program, and in part by
the National Science Foundation under Grant No. 9896322.
For a complete list of our imitation-related publications,
please look here.
For more information about this project: contact Chad
Jenkins at firstname.lastname@example.org
or Maja Mataric at email@example.com.