Lab tours are a big part of the operation of the Interaction Lab. We are one of the only labs in this department able to show interesting demos of ongoing work on a frequent basis. However, demos are hard to keep ready at all times, and operators must be present to run them. In addition, robot hardware is in demand by both labs and not always available. A permanent demo robot is therefore needed. If it is oriented toward a tour environment, it is assured to be interesting to visitors. A greeter robot will address the need for a permanent demo robot while also relieving much of the burden that giving tours places on the lab's graduate students.
The aim of the tour guide project is to develop a system able to offer a tour to visitors at any moment. A robot developed in the Interaction Lab will be in charge of welcoming visitors, giving them a tour, and explaining the research work the students are carrying out. This way, a tour will always be ready to offer, and the robot provides a good platform for keeping the visits and tours standardized and up to date.
The Interaction Lab is a relatively constant environment: walls do not tend to move, and cabinets stay in roughly the same place. However, new obstacles can appear or disappear as robots, chairs, or even the robot pen are moved around the room. A tour guide robot is thus a way to get a robot out of the pen, so the environment has some unstructured elements, but not too far, so the environment retains a largely static structure. Also, since the robot is geared toward greeting guests and touring the lab, constraints such as badges for lab members, identifying posters on a wall, or name tags on a desk do not make the environment seem contrived.
One of the tour guide robot's main features is navigation. It should be able to move around the lab, showing visitors the most interesting projects and the people working on them. To do this it uses a blob recognition system. A set of symbols is installed on the lab ceiling, and a camera on the robot pointed at the ceiling captures the symbols above it. These symbols serve two purposes. The first is to give the robot instructions about what to do at any given moment: one symbol tells the robot to keep going, and the other tells it to stop and give a speech or a presentation. The stop symbols coincide with the places in the lab where projects are being developed or researchers are working.
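A minimal sketch of this symbol recognition step might look as follows. The actual symbol shapes and the detection pipeline are not described in detail in this report, so the shapes assumed here (a filled square for "stop", a circle for "go") and the fill-ratio test are illustrative assumptions, not the lab's real vision code.

```python
import numpy as np

# Hypothetical symbol classes; the real ceiling symbols are not
# specified here, so we assume a circle means GO and a square means STOP.
GO, STOP = "go", "stop"

def find_blobs(frame, threshold=128):
    """Return (row, col) coordinates of bright pixels in a grayscale frame."""
    mask = frame > threshold
    return np.argwhere(mask)

def classify_blob(pixels):
    """Crude shape test: a filled square covers all of its bounding box,
    while a filled circle covers only about pi/4 of it."""
    if len(pixels) == 0:
        return None
    rows, cols = pixels[:, 0], pixels[:, 1]
    height = rows.max() - rows.min() + 1
    width = cols.max() - cols.min() + 1
    fill = len(pixels) / float(height * width)
    return STOP if fill > 0.95 else GO
```

A real system would segment multiple blobs per frame first (e.g. with connected components); this sketch assumes one symbol per image for brevity.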
The second purpose of the ceiling symbols is to guide navigation. The symbols are aligned, and the robot is programmed to follow that line, much like the rails a train moves on. The robot detects the aligned symbols with its camera, and the line should pass through the center of the image. If the line has any slope, the trajectory is corrected so the robot stays aligned under the symbol line. To achieve this, images are taken periodically and the symbols detected in them are analyzed; depending on these data, the trajectory is corrected or a decision, such as stopping to give a speech, is made.
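The slope-based correction described above could be sketched as a proportional controller: fit a line through the detected symbol centers and turn against its slope. The `gain` constant is a hypothetical tuning parameter, not a value from the original system.

```python
import numpy as np

def steering_correction(symbol_centers, gain=0.5):
    """Given (row, col) image coordinates of the detected ceiling-symbol
    centers, fit a straight line through them and return a turn-rate
    correction that steers the robot back under the symbol line.
    `gain` is an assumed proportional constant."""
    pts = np.asarray(symbol_centers, dtype=float)
    if len(pts) < 2:
        return 0.0  # not enough symbols to estimate the line
    # Fit col = m * row + b; m is the slope of the symbol line relative
    # to the image's vertical axis (m == 0 means perfectly aligned).
    m, _ = np.polyfit(pts[:, 0], pts[:, 1], 1)
    return -gain * m  # turn against the slope to re-align
```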
The Interaction Lab is a relatively structured environment, but new obstacles can appear and disappear, such as chairs or other robots moving about, and a visitor can step into the robot's way at any moment. To keep the robot from hitting these obstacles or visitors, a collision avoidance system has also been developed. Using the belt of sonar sensors installed on the robot, it takes range measurements periodically to make sure there are no obstacles in its trajectory. If the robot detects anything in front of it, it reduces speed, moving slower the closer the obstacle is. If the obstacle remains in the robot's path, the robot stops when it is close enough and waits for the area to clear. In this way it handles changes in the environment and avoids collisions with objects and even visitors.
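The slow-down-then-stop behavior can be expressed as a simple speed ramp over the nearest front sonar reading. The distance thresholds and cruise speed below are illustrative values, not the lab's actual tuning.

```python
def safe_speed(front_range_m, cruise=0.4, stop_dist=0.5, slow_dist=1.5):
    """Scale forward speed by the nearest sonar range ahead (meters).
    Assumed behavior: full cruise speed beyond `slow_dist`, a linear
    slow-down between `slow_dist` and `stop_dist`, and a full stop
    (wait for the path to clear) at or below `stop_dist`."""
    if front_range_m <= stop_dist:
        return 0.0  # stop and wait for the area to clear
    if front_range_m >= slow_dist:
        return cruise
    # Linear ramp: slower the closer the obstacle gets
    return cruise * (front_range_m - stop_dist) / (slow_dist - stop_dist)
```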
Whenever the tour guide reaches a stop point, meaning an interesting place in the lab where a student is doing research or another project is being developed, it stops and gives a speech about it. This makes the interaction with visitors natural and human-like: the guide moves around the lab talking about what is going on there, and the visitors follow and listen. To achieve this, a TTS (text-to-speech) engine has been installed and integrated with the robot software, so it can synthesize any text. This makes it easy to maintain the robot's speeches as new projects and students become part of the lab.
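The stop-point behavior amounts to looking up the speech for the current location and handing it to the TTS engine. In this sketch the `speak` callable stands in for the real TTS engine's interface, and the mapping from stop points to texts is a placeholder, not the lab's actual scripts.

```python
def give_speech(stop_id, speeches, speak):
    """Look up the speech text for this stop point and send it to the
    TTS engine via the `speak` callable. Returns True if a speech was
    given, False if the stop point has no speech configured."""
    text = speeches.get(stop_id)
    if text is None:
        return False  # unknown stop point: skip and continue the tour
    speak(text)
    return True
```

Keeping the speeches in a plain mapping (loaded from the configuration file) is what makes them easy to update as the lab changes.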
On top of the Pioneer mobile base, we mounted Sparky, a robot developed in collaboration with Disney Imagineering. Sparky is a very expressive robot, able to gesture with its arms, hands, legs, eyes, lips, and so on, which makes it well suited for this task. It can point while speaking about a project or researcher and can gesture with its facial degrees of freedom.
The combination of the robot's gestures with speech makes the interaction very flexible and natural.
One of the last features added to the tour guide project is the ability to connect remotely to a PC and display slides on a big screen while the robot gives its speech. The slides complement the robot's explanations of the projects, making the interaction more interesting and dynamic. The connection is wireless, and the robot controls which slides are shown on the screen and when, synchronizing them with its speech.
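The report does not specify the wire format the robot uses to drive the slide PC, so the JSON message encoding below is an assumption; it only illustrates the idea of the robot sending "show slide N" commands over the wireless link as its speech progresses.

```python
import json

def slide_command(action, slide_number=None):
    """Encode a slide-control message for the remote slide PC.
    `action` might be 'show', 'next', or 'blank'; the message schema
    here is invented for illustration."""
    msg = {"action": action}
    if slide_number is not None:
        msg["slide"] = slide_number
    return json.dumps(msg).encode("utf-8")

def parse_slide_command(data):
    """Decode a slide-control message on the receiving PC."""
    return json.loads(data.decode("utf-8"))
```

The encoded bytes would then be sent over a plain TCP or UDP socket to the PC driving the screen.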
All these settings, such as connection information and speech texts, are stored in a configuration file. When the robot starts, it reads this file so it knows what to do and when. This is an easy way to keep the information about the lab and its students up to date as things change.
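The report does not show the configuration file's format, so the INI-style layout below is a hypothetical sketch of how the connection info and per-stop speeches might be organized; the section and key names are invented for illustration.

```python
import configparser

# Hypothetical configuration layout: one [connection] section for the
# slide PC, and one section per stop point holding its speech and slide.
EXAMPLE = """
[connection]
slide_pc_host = 192.168.1.20
slide_pc_port = 9000

[speech.desk_1]
text = This desk belongs to a student working on robot navigation.
slide = 3
"""

def load_config(text):
    """Parse the configuration text the robot reads at startup."""
    cfg = configparser.ConfigParser()
    cfg.read_string(text)
    return cfg
```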
Internally, the robot works as a state machine. It has a concrete set of states, such as keep moving, give a speech, and connect to another PC; depending on the current state and the information coming from the sensors, the robot makes a decision and transitions to the next state.
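The control loop described above can be sketched as a small transition function. The state names and transition rules here are illustrative: the report only lists the states, so the exact priorities (e.g. collision avoidance overriding everything else) are assumptions consistent with the behavior described earlier.

```python
# Illustrative state names; the report lists states like "keep moving",
# "give a speech", and "connect with another PC".
MOVING, SPEAKING, WAITING = "moving", "speaking", "waiting"

def next_state(state, symbol, obstacle_close):
    """Pick the next state from the current one plus sensor input:
    `symbol` is the last ceiling symbol seen ('go' or 'stop') and
    `obstacle_close` comes from the sonar belt."""
    if obstacle_close:
        return WAITING      # collision avoidance overrides everything
    if state == WAITING:
        return MOVING       # path cleared: resume the tour
    if symbol == "stop":
        return SPEAKING     # reached a point of interest
    return MOVING           # default: keep following the symbol line
```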