Maja J Matarić
Facilitating Robot Learning
The ability to improve behavior through learning is the hallmark of
intelligence, and thus the ultimate challenge of AI and robotics. We
propose that one of the stumbling blocks in scaling up learning (to
real-world problems and domains) has come from the emphasis on
mathematically pure rather than practical approaches. Striving for
optimality has overshadowed the pursuit of efficiency, and the attempt
to avoid experimental bias has crippled the ability to focus on
principled methods for building in domain knowledge. In what we call
the "great expectation", most have come to expect that a learning
algorithm must achieve optimal performance with little or no built-in
knowledge. To make this possible, the algorithm is allowed to learn
from prohibitively many examples and/or for a prohibitively long time.
This trade-off between a priori bias and learning time is
impractical for a great majority of practical applications. Furthermore,
it is counter to the way biological systems learn.
We argue that taking inspiration from biology, focusing on real-world
domains and tasks, and experimentally validating all algorithms in
such domains, will result in more scalable approaches. In our own
work, we pursue an experimental approach in two highly uncertain,
dynamic, and high-dimensional domains: multi-robot learning, and
learning by imitation. Both force us to deal with perceptual and
action uncertainty, non-stationarity, and real-time constraints. As a
result, our approaches to learning, inspired by learning in biology,
strive for efficient solutions that can cope with the challenges of
such domains.
The key principles we take as inspiration from biological learning are:
- biological systems do not start from a tabula rasa, but
instead utilize a great deal of innate structure and knowledge
- biological systems are capable of fast generalization and do not
rely on large numbers of trials (with the exception of motor control,
where fine-tuning, especially in early development, involves a great
deal of repetition)
- biological learning proceeds gradually, in stages, each of which
provides a level of increased competence
- biological learning occurs in a complex ecological niche,
involving interaction with rich and highly structured environments
- biological learning involves a multitude of tasks and goals in parallel
- biological learning occurs in a social context
- biological learning achieves efficiency, not optimality
Our own work in multi-robot learning has had to deal with the
challenges of uncertainty and non-stationarity, and has focused on:
- the use of built-in structure
- the use of structured reinforcement, in the form of shaping
- the use of communication to alleviate
partial observability and credit assignment problems
- the use of interaction with other agents, in the forms of:
1) stigmergy (observing the effects of the actions of others)
2) observation & imitation
These ideas are briefly discussed in an NSF workshop position paper.
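The shaping idea above can be sketched concretely. The following minimal illustration is hypothetical (the event names and reward values are invented for this sketch, not those of our actual systems): a sparse task reward for completing a foraging delivery is augmented with intermediate rewards for events that indicate progress, so the learner gets feedback well before the final goal.

```python
# Hypothetical sketch of shaped reinforcement for a foraging robot:
# a sparse task reward is augmented with intermediate "progress" rewards.

def shaped_reward(event):
    """Map a named event to a scalar reward (illustrative values only)."""
    rewards = {
        "delivered_puck": 10.0,   # sparse task reward: the goal itself
        "grasped_puck": 2.0,      # shaping: progress toward the goal
        "dropped_puck": -1.0,     # shaping: progress lost
        "interfered": -2.0,       # shaping: got in another robot's way
    }
    return rewards.get(event, 0.0)  # all other events are neutral

# A trial's total reward is simply the sum of its event rewards.
trial = ["grasped_puck", "interfered", "delivered_puck"]
total = sum(shaped_reward(e) for e in trial)
```

Without the shaping terms, only the single "delivered_puck" event would ever produce feedback, which is exactly the prohibitively slow, trial-hungry regime criticized above.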
We have worked on learning behavior selection, as well as on the more
general problem of learning models. Our work has demonstrated single- and
multi-robot learning in a variety of problems, including:
- a group of four robots concurrently learning to forage
- a group of four robots learning social rules for yielding
- two robots learning to cooperatively push a box
- one robot learning a collection task in a dynamically changing
environment with other robots and obstacles
- two learning robots in a dynamic environment with
other robots and obstacles; robot specialization emerged
- a robot learning models of its interactions with the environment
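The behavior-selection learning mentioned above can be sketched as a small value table over (condition, behavior) pairs, updated from the shaped reward. This is a minimal illustration under assumed names: the conditions, behaviors, and parameters below are hypothetical, not our actual controllers.

```python
# Minimal sketch of learning behavior selection: a tabular value estimate
# over (condition, behavior) pairs. Condition and behavior names are
# hypothetical placeholders for a robot's perceptual states and controllers.
import random

CONDITIONS = ["have-puck", "no-puck"]
BEHAVIORS = ["search", "home", "avoid"]

# value[(condition, behavior)] estimates how good a behavior is in a condition
value = {(c, b): 0.0 for c in CONDITIONS for b in BEHAVIORS}

def select_behavior(condition, epsilon=0.1):
    """Pick the highest-valued behavior, exploring with probability epsilon."""
    if random.random() < epsilon:
        return random.choice(BEHAVIORS)
    return max(BEHAVIORS, key=lambda b: value[(condition, b)])

def update(condition, behavior, reward, alpha=0.2):
    """Move the estimate toward the observed reward (a running average)."""
    value[(condition, behavior)] += alpha * (reward - value[(condition, behavior)])

# e.g. reward the "home" behavior when the robot is carrying a puck
update("have-puck", "home", 1.0)
```

The point of the sketch is the small state-behavior space: built-in behaviors keep the table tiny, which is what makes learning feasible in real time on physical robots.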
The following is a list of selected papers describing our results
in robot learning:
Dani Goldberg and Maja J Matarić, "Learning
Multiple Models for Reward Maximization", Proceedings, The
Seventeenth International Conference on Machine Learning
(ICML-2000), Stanford University, June 29-July 2, 2000.
Dani Goldberg and Maja J Matarić, "Reward Maximization in a
Non-Stationary Mobile Robot Environment", Proceedings,
Autonomous Agents 2000, Barcelona, Spain, June 3-7, 2000.
Aude Billard and Maja J Matarić, "A biologically inspired robotic
model for learning by imitation", Proceedings, Autonomous
Agents 2000, Barcelona, Spain, June 3-7, 2000.
Francois Michaud and Maja J Matarić,
"Representation of behavioral history for learning in nonstationary
conditions", Robotics and Autonomous Systems, 29(2), Nov 1999.
Maja J Matarić,
"Using Communication to Reduce Locality in Distributed Multi-Agent
Learning", Journal of Experimental and Theoretical Artificial
Intelligence, special issue on Learning in DAI Systems, Gerhard
Weiss, ed., 10(3), Jul-Sep, 1998, 357-369.
Francois Michaud and Maja J Matarić, "Learning from History for
Behavior-Based Mobile Robots in Non-stationary Conditions", joint
special issue on Learning in Autonomous Robots, Machine
Learning, 31(1-3), 141-167, and Autonomous Robots, 5(3-4),
Jul/Aug 1998, 335-354.
Maja J Matarić, "Learning
Social Behavior", Robotics and Autonomous Systems, 20.
Maja J Matarić, "Reinforcement
Learning in the Multi-Robot Domain", Autonomous Robots,
4(1), Mar 1997, 73-83.
Maja J Matarić, "Reward
Functions for Accelerated Learning" in Machine Learning:
Proceedings of the Eleventh International Conference, William
W. Cohen and Haym Hirsh, eds., Morgan Kaufmann Publishers, San
Francisco, CA, 1994, 181-189.
Dani Goldberg and Maja J Matarić, "Coordinating Mobile Robot Group
Behavior Using a Model of Interaction Dynamics", Proceedings,
Autonomous Agents '99, Seattle, WA, May 1-3, 1999. (This paper
received the ACM Student Paper award.)
Maja J Matarić, "Learning
in Multi-Robot Systems", in Adaptation and Learning in
Multi-Agent Systems, Gerhard Weiss and Sandip Sen, eds., Lecture
Notes In Artificial Intelligence (LNAI), 1042, Springer-Verlag, 1996.
Maja J Matarić and Rodney A. Brooks, "Learning a
Distributed Map Representation Based on Navigation Behaviors", in
Cambrian Intelligence, MIT Press, 1999, 37-58.
Rodney A. Brooks and Maja J Matarić, "Real Robots, Real Learning
Problems", in Robot Learning, Jonathan H. Connell and Sridhar
Mahadevan, eds., Kluwer Academic Press, 1993, 193-213.
For details about these efforts, please see my projects
page and my publications page.