The Interaction Lab's original research, from its inception (see the lab profile for a history of the Interaction Lab), was in multi-robot coordination and learning. Almost 15 years later, the lab's research emphasis has shifted, and this is no longer a major area of activity. This page summarizes our past work.
Our research in this area focused on principled mechanisms for adaptive coordination in multi-robot systems, including how learning techniques can be applied to improve system performance and plasticity. We did not emphasize specific task domains; rather, we studied fundamental aspects of autonomous coordination, such as physical interference, group dynamics, and task allocation. Our goal was to develop general models of coordination that are both descriptive and prescriptive, allowing us to analyze and synthesize complex task-oriented multi-robot systems. However, we did not operate in the abstract: we always grounded our work through experimental validation on physical robots and/or realistic sensor-based simulations. In this way we aimed to develop methods that were principled, general, and amenable to analysis, yet did not make unsupportable assumptions about the underlying real-world robotics domain.
The work in the Interaction Lab spanned a variety of complementary perspectives on the problem of multi-robot coordination. As the list of projects below illustrates, we took inspiration from many fields, including Biology, Ethology, Operations Research, Economics, and Physics. We believe that Robotics, as a fundamentally interdisciplinary field, has much to gain from leveraging relevant work in other areas. Regardless of the particular source of inspiration, common themes run through our work, in this topic area as in all the others: distributed control; online learning and adaptation; scalable coordination; and the derivation of global organization from local rules.
This work was supported by the DARPA LANdroids grant for "ARTeMUS: Agile Robot Teams for Mobile Networking in Urban Settings," in collaboration with Intelligent Automation, Inc., Carnegie Mellon University, and the University of Texas at Austin. This research was also supported by the NSF IIS grant for "Automated Synthesis of Distributed Systems"; the DoE grant DE-FG03-01ER45905 for "Multi-Robot Learning in Tightly-Coupled, Inherently Dynamic Domains"; DARPA grant 5-39509-A for "Heterogeneous Small-Team Behaviors for Mobile Robots in Outdoor Environments" under the Mobile Autonomous Robot Software (MARS-2020) program; DARPA MARS grant DABT63-99-1-0015; DARPA TASK grant F30602-00-2-0573 under the Taskable Agent Software Kit program; the Office of Naval Research; the Jet Propulsion Laboratory; and Sandia National Labs.