Control of humanoid robots and agents is one of the exciting areas of robotics, and one that the Interaction Lab focused on in the past. It is no longer an active research focus for the lab; instead, we study embodied humanoid control in the context of our other active Research Areas, including Socially Assistive Robotics and Activity Modeling. This page summarizes our past work on humanoid control.
Humanoid robots are intrinsically dexterous, employing redundant degrees-of-freedom (DOF) to accomplish various tasks in multiple ways. This flexibility comes at the price of control complexity: a large number of DOFs presents a major challenge for efficient control and learning. We focused on the theory of motor primitives, inspired by neuroscience evidence, which posits an underlying basis for articulated movement. The presence of such a set of basis primitives serves to constrain humanoid movement, thereby simplifying control. Through sequencing and superposition, the primitives provide a vocabulary for a large and flexible movement repertoire.
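The sequencing-and-superposition idea can be sketched in a few lines. This is a minimal illustration, not the lab's actual controller: the `reach` and `wave` primitives, their joint dimensions, and the blending weights are all hypothetical placeholders for learned or hand-designed primitives.

```python
import numpy as np

# Hypothetical joint-space primitives: each maps a normalized time t in [0, 1]
# to a vector of joint-angle offsets. The specific shapes are illustrative only.
def reach(t):
    return np.array([0.8 * np.sin(np.pi * t), 0.2 * t])

def wave(t):
    return np.array([0.0, 0.5 * np.sin(2 * np.pi * t)])

def superpose(primitives, weights, t):
    """Weighted superposition: blend several primitives at one time instant."""
    return sum(w * p(t) for w, p in zip(weights, primitives))

def sequence(primitives, t):
    """Sequencing: play primitives back-to-back over t in [0, 1]."""
    n = len(primitives)
    idx = min(int(t * n), n - 1)   # which primitive is currently active
    local_t = t * n - idx          # rescale global time into that primitive
    return primitives[idx](local_t)

# A trajectory built by blending the two primitives, and one by sequencing them.
blend = [superpose([reach, wave], [0.7, 0.3], t) for t in np.linspace(0, 1, 50)]
seq = [sequence([reach, wave], t) for t in np.linspace(0, 1, 50)]
```

Even with only two primitives, the weight vector and the ordering give a combinatorial space of distinct trajectories, which is the sense in which a small basis yields a large repertoire.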
Our work explored methods for automatically determining the vocabulary of motor primitives using nonlinear spatio-temporal dimensionality reduction. This line of research tested the hypothesis that human movement lies on a manifold of lower dimension than that determined by our degrees-of-freedom.
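The low-dimensional-manifold hypothesis can be illustrated with a toy dimensionality estimate. The lab's actual work used nonlinear spatio-temporal dimensionality reduction; the sketch below uses plain linear PCA on synthetic data as a simpler stand-in, and the data dimensions (20 joints driven by 3 latent activations) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for motion-capture data: 20 joint angles that are all
# driven by only 3 latent "primitive" activations, plus small sensor noise.
latent = rng.standard_normal((500, 3))      # 500 frames, 3 latent dimensions
mixing = rng.standard_normal((3, 20))       # latent -> joint-angle map
joints = latent @ mixing + 0.01 * rng.standard_normal((500, 20))

# Linear dimensionality estimate via PCA: count the principal components
# needed to explain 99% of the variance in the joint-angle data.
centered = joints - joints.mean(axis=0)
svals = np.linalg.svd(centered, compute_uv=False)
variance = svals**2 / np.sum(svals**2)
intrinsic_dim = int(np.searchsorted(np.cumsum(variance), 0.99) + 1)
print(intrinsic_dim)  # recovers the 3 latent dimensions despite 20 DOFs
```

The same question, asked of real human movement data with nonlinear methods, is what motivated this line of research: if the estimated intrinsic dimension is far below the number of joints, a compact primitive basis exists.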
Another area of the lab's past research focused on using a manually determined set of motor primitives to generate and recognize motion of simulated humanoid robots. This research treated the motor primitive as a parameterized model in order to generate, classify, and predict movements. Completed research drew from behavior-based control methods to control humanoids in a dynamic environment; additionally, we used Bayesian classification to perform real-time motion classification. Later research focused on using the primitive as a predictive model.
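Treating a primitive as a parameterized model makes Bayesian motion classification straightforward in principle. The sketch below is a minimal illustration of that idea, not the lab's implementation: the two motion classes, their Gaussian parameter models, and the observed parameter vector are all assumed for the example.

```python
import numpy as np

# Hypothetical setup: each observed movement is summarized by a small
# parameter vector (e.g., primitive amplitude and duration). Class-conditional
# Gaussians stand in for the parameterized primitive models.
classes = {
    "reach": (np.array([1.0, 0.5]), 0.1),   # (mean parameters, std)
    "wave":  (np.array([0.2, 1.5]), 0.1),
}

def log_likelihood(x, mean, std):
    """Log of an isotropic Gaussian density (up to a shared constant)."""
    return -0.5 * np.sum((x - mean) ** 2) / std**2

def classify(x, priors=None):
    """Bayes rule: pick the class maximizing prior * likelihood."""
    priors = priors or {c: 1.0 / len(classes) for c in classes}
    scores = {c: np.log(priors[c]) + log_likelihood(x, m, s)
              for c, (m, s) in classes.items()}
    return max(scores, key=scores.get)

observed = np.array([0.95, 0.55])   # parameters extracted from a new motion
print(classify(observed))           # -> reach
```

Because each incoming motion only needs its likelihood scored against a small set of primitive models, this style of classifier can run in real time, which is what made it suitable for online motion recognition.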
Another major area of study in the domain of articulated control is learning by imitation, which we continue to pursue (see User Adaptation and Learning through HRI). We continue to use neuroscience evidence for mirror neurons as inspiration for sensory-motor integration. Toward this end, we are using movement primitives, described above, as a foundation not only for control but also for movement perception, classification, and understanding. The imitation project is described in more detail here.
This research was supported by DARPA Grant 5-39509-A under the Mobile Autonomous Robot Software: Heterogeneous Small-Team Behaviors for Mobile Robot in Outdoor Environments (MARS-2020) program, by the National Science Foundation under CAREER Grant No. 9896322, and by DARPA MARS Grant No. DABT63-99-1-0015.