Retired Robots
This page describes some of our older robots and completed projects. Click here for our active robot page.

The hopper is a one-legged hopping robot whose purpose is to study the balance and energetics of dynamically stable systems. Several control systems have been developed for it, including a neural network for speed control and a model-based height controller. The robot is also capable of a scooting mode of locomotion that uses only the actuators used for hopping.
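One way a model-based height controller of this kind can be organized is hop-to-hop: treat each hop as a discrete event and adjust the stance-phase thrust from the apex-height error of the previous hop. The sketch below is illustrative only; the energy-loss model, gains, and function names are assumptions, not the hopper's actual controller.

```python
# Hedged sketch of a hop-to-hop height controller for a 1-D hopper.
# Assumptions (not from the original system): a fixed fraction LOSS of
# hopping energy survives each hop, and stance thrust injects energy.

G = 9.81     # gravity, m/s^2
LOSS = 0.8   # assumed fraction of hopping energy retained per hop

def apex_after_hop(prev_apex, thrust):
    """Energy-balance model: next apex height from previous apex plus thrust.

    With unit mass, apex potential energy is g*h, so thrust energy divided
    by g converts directly into added apex height.
    """
    return LOSS * prev_apex + thrust / G

def run_height_controller(target, hops=60, kp=5.0):
    """Adjust thrust each hop from the apex-height error of the last hop."""
    apex, thrust = 0.2, 0.0
    for _ in range(hops):
        thrust += kp * (target - apex)   # integral-style correction
        thrust = max(thrust, 0.0)        # thrust can only add energy
        apex = apex_after_hop(apex, thrust)
    return apex
```

Under this model the controller settles at the thrust that exactly replaces the energy lost per hop, so the apex converges to the commanded height.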


Bandit-I is a prototype humanoid robot. To make the robot cost-effective, we built it from servo motors and rapid-prototyping (RP) material. It has two 6-DOF arms, a head that can pan and tilt, a face with a movable mouth and eyebrows, and stereo cameras for eyes. Our goal for this robot is a platform that can interact with people through expressive gestures and facial expressions.


Clara is a modular mobile robot platform used in a suite of assistive robotics projects we are pursuing under the general theme of Hands-Off Physical Therapy. The robot has been used in experiments with post-cardiac-surgery patients, where it provided guidance and motivation for tiring and potentially painful breathing exercises. Clara, named after Clara Barton, the founder of the American Red Cross, is mounted on an ActivMedia Pioneer 2DX base and equipped with a SICK LMS200 scanning laser range-finder, a Sony pan-tilt camera, and a Shure 503BG microphone. Visual displays are shown on a laptop screen positioned near the top of the robot; most of the robot's frame and wires are covered by a custom canvas outfit. Presently, Clara is being used to evaluate the importance of robot embodiment in assistive human-robot interaction domains.


Robomote is a mobile platform for sensor networks. It was designed to be cheap and compatible with the Berkeley mote. It has an Atmel 8535 processor for control, IR sensors for obstacle avoidance, and a magnetometer for bearing. It comes with a TinyOS API so that the Robomote can be controlled from the mote via TinyOS.
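A controller built on those sensors reduces to a small reactive loop: back away or turn when the IR sensors fire, otherwise steer toward a goal bearing read against the magnetometer. The sketch below is hypothetical (the function, gains, and sensor encoding are not the actual Robomote/TinyOS API) and is written in Python rather than the mote's embedded code.

```python
# Hedged sketch of a Robomote-style reactive drive loop. All names and
# parameters here are illustrative assumptions, not the real API.

def drive_command(ir_left, ir_right, bearing_error, base=0.5, k=0.01):
    """Differential-drive command: obstacle avoidance first, then bearing.

    ir_left/ir_right: booleans from the IR obstacle sensors.
    bearing_error: goal bearing minus magnetometer heading, in degrees
                   (positive = goal is clockwise of current heading).
    Returns (left_wheel, right_wheel) speeds in [-1, 1].
    """
    if ir_left and ir_right:      # blocked ahead: back up
        return (-base, -base)
    if ir_left:                   # obstacle on the left: spin right
        return (base, -base)
    if ir_right:                  # obstacle on the right: spin left
        return (-base, base)
    turn = max(-base, min(base, k * bearing_error))
    return (base + turn, base - turn)   # proportional steering to bearing
```

Giving the IR checks strict priority over the bearing term is what keeps the robot from servoing straight into an obstacle that lies on the goal bearing.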

Sensor Network Testbed

The testbed is an ideal platform for experiments in mobile sensor networks. It comprises 12 static Mica1 motes and one or more mobile motes (Robomotes). The vision system provides ground-truth location awareness, giving us sub-centimeter location information.


Dubbed "The Socially Mobile", Don Quixote, Don Corleone, Donna E. Mobile, and The Donald were manufactured by IS Robotics and generously given to the Interaction Lab by Prof. Rodney A. Brooks of the MIT AI Lab. Each R2e robot is a holonomic base actuated by two drive motors and equipped with a two-fingered gripper. The sensors include piezoelectric bump sensors on the inside panels and in the gripper, five infrared (IR) sensors around the body and one on each finger, a color sensor in the gripper, a radio transmitter/receiver for communication and data gathering with a base station, and an ultrasound triangulation system for positioning. The robots are programmed in the Behavior Language, based on the Subsumption Architecture. The Socially Mobile have participated in various experiments on group behavior and multi-robot learning, and have provided data for developing analysis and modeling methods. (Matarić, Goldberg)
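In subsumption-style control, behaviors are arranged in fixed priority layers, and a higher layer that fires suppresses everything below it. The sketch below shows that arbitration pattern in Python rather than the Behavior Language; the specific behaviors and sensor keys are invented for illustration, not taken from the R2e code.

```python
# Hedged sketch of subsumption-style arbitration. Behavior names and sensor
# keys are hypothetical; the real robots run the Behavior Language.

def avoid(sensors):
    """Highest layer: safety. Fires on any bump contact."""
    if sensors.get("bump"):
        return "back-up"

def grasp(sensors):
    """Middle layer: task. Fires when the gripper IR sees an object."""
    if sensors.get("object_in_gripper_ir"):
        return "close-gripper"

def wander(sensors):
    """Lowest layer: default activity. Always fires."""
    return "drive-forward"

LAYERS = [avoid, grasp, wander]   # highest priority first

def arbitrate(sensors):
    """Return the action of the highest-priority behavior that fires."""
    for behavior in LAYERS:
        action = behavior(sensors)
        if action is not None:
            return action
```

The key property is that no layer needs to know about the others: adding a new behavior means inserting it at the right priority, not rewriting existing ones.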

The lab is equipped with 4 small, networked K-Team Khepera robots. Each robot contains a dedicated 68331 processor, is 3 inches in diameter, and is equipped with a gripper turret. The height is modular, depending on which sensory systems are used. The sensory capabilities include position encoders in the wheels and 8 infrared emitter/detector pairs for obstacle avoidance. A radio turret enables communication over UHF frequencies at 4800 Kbits/sec.

These robots have been used for various projects. In one, named SKITs (Sub-Kilogram Intelligent Telerobots), they were used to model small space robots accessing untapped resources of asteroids. Hardware and software simulation studies were performed to determine the relative merits of using colonies of sub-kilogram robots for this endeavor. In another, the robots were used to study flexible, adaptive coordination of robot formations. In a third, a pair of Kheperas was used to cooperatively move an object.
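The follower side of formation-keeping can be as simple as a proportional pull toward a fixed slot relative to the leader. The sketch below illustrates that idea only; the offset, gain, and 2-D point representation are assumptions, not the coordination scheme actually used on the Kheperas.

```python
# Hedged sketch of proportional formation-keeping. Parameters are
# illustrative, not from the Khepera experiments.

def formation_step(follower, leader, offset=(-1.0, 0.5), k=0.5):
    """One control step toward the desired slot relative to the leader.

    follower/leader: (x, y) positions. offset: the follower's assigned
    slot in the leader's frame (here axis-aligned for simplicity).
    Returns the follower's next position after moving fraction k of the
    remaining error.
    """
    goal = (leader[0] + offset[0], leader[1] + offset[1])
    return (follower[0] + k * (goal[0] - follower[0]),
            follower[1] + k * (goal[1] - follower[1]))
```

Iterating this step halves the slot error each cycle (with k = 0.5), so a stationary leader's follower converges geometrically to its slot.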


MENO is a 12-DOF walking robot built and programmed by Gaurav Sukhatme, Scott Cozy, and Scott Brizius. The detailed design and control methodology are the subject of a paper that appeared recently. Each leg is a rotational-rotational-prismatic structure actuated by three servos. The robot has several simple sensors: each foot has a contact sensor to detect ground contact, a switch that detects hyper-retraction of the prismatic joint, and a switch ring that detects contact with obstacles when the leg is swung. The robot uses an inclinometer to maintain balance and walks in a variety of statically stable gaits. A frontal sonar is used to detect obstacles. All processing is done onboard on a Motorola 68332 processor programmed in C. The robot was used as a testbed for mobility benchmarking experiments on a project funded by the JPL Mars explorer program, and will be used in the future for experiments in cerebellar control. (MARS Team)
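A statically stable gait on a four-legged machine (12 DOF at three servos per leg implies four legs) swings one leg at a time, so at least three feet form a support polygon throughout. The sequencer below is a sketch of that idea; the leg names and swing order are assumptions, not MENO's actual gait tables.

```python
# Hedged sketch of a wave-gait sequencer for a four-legged walker.
# Leg names and swing order are illustrative assumptions.

SWING_ORDER = ["front_left", "rear_right", "front_right", "rear_left"]

def gait_cycle(steps):
    """Yield (swing_leg, stance_legs) for each step of the wave gait.

    Exactly one leg swings per step; the other three remain in stance,
    which is what keeps the gait statically stable.
    """
    legs = set(SWING_ORDER)
    for i in range(steps):
        swing = SWING_ORDER[i % 4]
        yield swing, sorted(legs - {swing})
```

On the real robot, each yielded step would drive the three servos of the swing leg through lift, swing, and placement while the inclinometer checks that the body stays level on the three stance feet.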

The Nerd Herd is a gang of 20 R1 robots manufactured by ISX/IS Robotics. They are Ackerman-steered bases about 13" long, with "forks" that can be used to pick up and stack objects. They have IR and contact sensors on the ends of the forks, bump sensors on the sides and back, and a radio-sonar positioning and communication system. They are controlled by a network of 68HC11 processors programmed in the Behavior Language. These robots were the first to demonstrate large-scale group behavior (foraging, following, flocking, etc., with up to 13 robots); now they are in need of some body-building, and are largely used as mobile/dynamic obstacles and targets in other multi-robot control and coordination experiments. (Matarić, Werger)
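Group behaviors like foraging decompose into a small per-robot state machine: search until an object is seen, grab it, carry it home, drop it, repeat. The sketch below shows that structure in Python; the state names, sensor keys, and actions are invented for illustration, not the Behavior Language originals.

```python
# Hedged sketch of a per-robot foraging state machine. All names are
# hypothetical; the real robots run Behavior Language programs.

def forage_step(state, sensors):
    """Return (next_state, action) for one control step."""
    if state == "search":
        if sensors.get("object_seen"):
            return "grab", "approach-object"
        return "search", "wander"
    if state == "grab":
        if sensors.get("object_in_gripper"):
            return "home", "head-to-base"
        return "grab", "close-gripper"
    if state == "home":
        if sensors.get("at_base"):
            return "search", "drop-object"
        return "home", "head-to-base"
    raise ValueError(state)
```

Because each robot runs this loop independently on local sensing, the group-level foraging pattern emerges without any central coordinator, which is what made the Nerd Herd demonstrations scale to a dozen robots.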
Robot Soccer

Robot soccer competition provides an excellent opportunity for robotics research. In particular, robot players in a soccer game must perform real-time visual recognition, navigate in a dynamic field, track moving objects, collaborate with teammates, and hit the ball in the correct direction. All of these tasks demand robots that are autonomous (sensing, thinking, and acting as independent creatures), efficient (functioning under time and resource constraints), cooperative (collaborating with each other to accomplish tasks beyond an individual's capabilities), and intelligent (reasoning, planning actions, and perhaps learning from experience). Furthermore, all of these capabilities must be integrated into a single, complete system. Building such integrated robots requires different approaches from those employed in the separate research disciplines. This project explored the problems and solutions involved in building such soccer robots. Our robots share the same general architecture and basic hardware, but each integrates the abilities to play a different role (goal-keeper, defender, or forward) and to use different strategies in its behavior. Our philosophy in building these robots is to use the least possible sophistication to make them as robust as possible. In the 1997 RoboCup competition, these robots formed a team called Dreamteam, which played well and won the world championship in the middle-sized robot league. (Shen)
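Sharing one architecture while playing different roles can be realized by letting the role select among behaviors inside an otherwise identical control loop. The selector below is purely illustrative; the role names match the text, but the behaviors, thresholds, and ball features are assumptions, not the Dreamteam code.

```python
# Hedged sketch of role-dependent behavior selection over a shared
# architecture. Behaviors and thresholds are illustrative assumptions.

def select_behavior(role, ball_dist, ball_in_own_half):
    """Pick a behavior name from the robot's role and simple ball features.

    ball_dist: distance to the ball in meters (hypothetical units).
    ball_in_own_half: whether the ball is in the robot's defensive half.
    """
    if role == "goal-keeper":
        return "block" if ball_in_own_half else "hold-position"
    if role == "defender":
        return "clear-ball" if ball_in_own_half else "patrol-midfield"
    if role == "forward":
        return "shoot" if ball_dist < 0.5 else "chase-ball"
    raise ValueError(role)
```

Keeping the role logic this shallow reflects the stated philosophy of least possible sophistication: every robot runs the same perception and motion code, and only this thin selection layer differs.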

The YODA project consists of a group of young researchers who share a passion for autonomous systems that can bootstrap their knowledge of real environments by exploration, experimentation, learning, and discovery. Our goal is to create a mobile agent that can autonomously learn from its environment based on its own actions, percepts, and missions. (Shen)

An adapted radio-controlled car called Marvin has been developed to study reinforcement learning for navigation in a laboratory environment. (Hoff)
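The text does not specify which reinforcement learning method Marvin used, so the sketch below shows the textbook tabular Q-learning algorithm on a toy navigation task (a one-dimensional corridor with a reward at the far end), purely to illustrate the class of technique; the task, states, and parameters are all assumptions.

```python
# Hedged sketch of tabular Q-learning on a toy 1-D corridor, shown only to
# illustrate the technique class; not Marvin's actual task or algorithm.

import random

def q_learn(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Learn Q-values for a corridor with reward 1 at the last state."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]   # actions: 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda x: q[s][x])
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # one-step Q-learning update
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q
```

After training, every non-goal state should value "right" above "left", i.e. the greedy policy drives straight down the corridor to the reward.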

Rodney is a six-legged walking machine. This platform has been used to explore the use of Genetic Algorithms for the development of neural oscillator circuits that can produce walking movements. A primary contribution of this work is the idea of applying staged learning as a way of attacking the high dimensionality of the search space.
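The genetic-algorithm machinery itself is standard: selection, crossover, and mutation over a population of encoded candidates. The sketch below shows that loop on the toy "one-max" problem (maximize the number of 1 bits); the encoding of Rodney's oscillator parameters, the fitness function, and the staged-learning schedule are not specified here, so none of those appear.

```python
# Hedged sketch of a plain genetic algorithm on the toy one-max problem.
# The real work evolved neural oscillator parameters; this only shows the
# generic GA loop with assumed parameters.

import random

def evolve(bits=20, pop_size=30, generations=60, mut=0.02, seed=1):
    """Evolve bitstrings toward all ones; return the best final fitness."""
    rng = random.Random(seed)
    pop = [[rng.randrange(2) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sum, reverse=True)       # fitness = number of 1 bits
        parents = pop[: pop_size // 2]        # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, bits)      # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (rng.random() < mut) for g in child]  # bit-flip mutation
            children.append(child)
        pop = children
    return max(sum(ind) for ind in pop)
```

Staged learning would wrap this loop: evolve a simple sub-behavior first, then reuse its solutions as the starting population for a harder fitness function, which is how the high-dimensional search is kept tractable.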
Belgrade Hand

Multifingered robot hands have been developed in an attempt to mimic human hand functionality. One such hand, the Belgrade-USC Hand, was developed at the University of Southern California and the University of Novi-Sad at Belgrade. It is a five-fingered robot hand, but while it has five digits, only four degrees of freedom are required: a rocker-arm mechanism mechanically couples two pairs of fingers. The hand is controlled using a simple PD strategy from a 332 microcontroller. (McHenry)
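A PD position loop for one of the hand's four degrees of freedom has the familiar form torque = Kp*error - Kd*velocity. The simulation below illustrates this on a unit-inertia joint; the gains, time step, and plant model are illustrative assumptions, not the hand's real parameters.

```python
# Hedged sketch of a PD position loop on a unit-inertia joint. Gains and
# plant are illustrative; the real hand runs on a 332 microcontroller.

def pd_settle(target, kp=40.0, kd=12.0, dt=0.001, steps=5000):
    """Simulate a PD-controlled joint and return its final position."""
    pos, vel = 0.0, 0.0
    for _ in range(steps):
        torque = kp * (target - pos) - kd * vel   # PD law
        vel += torque * dt                        # unit inertia: accel = torque
        pos += vel * dt                           # Euler integration
    return pos
```

With these gains the loop is near critically damped (damping ratio about 0.95), so the joint settles at the commanded position without sustained oscillation.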
Puma 560

Until recently, we had several Puma 560s, which were produced by Unimation. This picture is actually of a robot at UIUC (but they all look the same!). Our Puma setup included a SunVideo frame grabber and a Lord force sensor, and we typically programmed the arms in RCCL. We used the Pumas for research in force control, motion planning, and even a web interface to a museum statue. (McHenry)