UPM-SPQR Rescue Virtual Robots Team Description Paper
Department of Computer and Systems Science, Sapienza Università di Roma, Rome, Italy
Intelligent Control Group, Universidad Politécnica de Madrid, Madrid, Spain
1 Introduction
SPQR is the group of the Department of Computer and Systems Science at Sapienza University of Rome, Italy, which has been involved in RoboCup competitions since 1998 in several leagues (Middle-size 1998-2002, Four-legged since 2000, Real Rescue Robots 2003-2006, Virtual Rescue since 2006, and @Home in 2006). In 2007 the SPQR team took third place in the RoboCup Rescue Virtual Robots League in Atlanta (USA). All research activities are carried out at the SIED Laboratory1, which stands for "Intelligent Systems for Emergencies and Civil Defense".
The UPM team is composed of people belonging to the Intelligent Control Group2. The Intelligent Control Group is a member of the Spanish Committee of Automation (CEA). Its research fields in mobile robotics include service robots, focusing on feature-based SLAM, autonomous navigation, and human-like behaviors. This is the first year the Intelligent Control Group participates in RoboCup, contributing its research to the software already developed by the SPQR team.
The team is composed of Prof. Daniele Nardi and Prof. Fernando Matía as advisors, Daniele Calisi as team leader, and Paloma de la Puente, Diego Rodríguez-Losada, and Alberto Valero.
1 http://sied.dis.uniroma1.it
2 http://www.intelligentcontrol.es/
In this paper we describe the technical characteristics and capabilities of the rescue robot system prepared by the UPM-SPQR Rescue Virtual Robots Team for the RoboCup Rescue 2009 competition in Graz, Austria. The next two sections describe the system characteristics, focusing on the new HRI system we have developed, and the software architecture, based on our OpenRDK development framework (http://openrdk.sourceforge.net/). The following sections deal with the implemented exploration and mapping techniques, the sensor equipment used in USARSim, and finally some applications in real contexts.
We will participate with a heterogeneous robotic team (Figure 1), which is composed of the following (a configuration sketch is given after the list):
– three P2AT ground robots, each equipped either with a fixed SICK laser range finder and a Hokuyo laser range finder on a tilting platform (SIED), or with a SICK mounted on a PowerCube pan-tilt device (UPM group);
– one Unmanned Aerial Vehicle (UAV) with an INS sensor, a GPS sensor, and the USARSim victim sensor. In the competition, due to the league's restrictions on battery life, we will use sonar for obstacle avoidance.
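As an illustration only, the team composition above can be captured in a small configuration structure such as the following Python sketch; the robot names and field layout are our own illustrative choices, not USARSim identifiers or OpenRDK configuration syntax.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Sensor:
    kind: str             # "laser", "ins", "gps", "victim", "sonar"
    model: str = ""       # e.g. "SICK", "Hokuyo"
    mount: str = "fixed"  # "fixed", "tilting", "pan-tilt"

@dataclass
class Robot:
    name: str
    platform: str         # "P2AT" ground robot or "UAV"
    sensors: List[Sensor] = field(default_factory=list)

# SIED configuration: fixed SICK plus a Hokuyo on a tilting platform.
sied_sensors = [Sensor("laser", "SICK"), Sensor("laser", "Hokuyo", "tilting")]
# UPM configuration: SICK on a PowerCube pan-tilt device.
upm_sensors = [Sensor("laser", "SICK", "pan-tilt")]

team = [
    Robot("ugv1", "P2AT", sied_sensors),
    Robot("ugv2", "P2AT", sied_sensors),
    Robot("ugv3", "P2AT", upm_sensors),
    Robot("uav1", "UAV", [Sensor("ins"), Sensor("gps"),
                          Sensor("victim"), Sensor("sonar")]),
]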
This architecture allows both Behavioral and Supervisory control. In Behavioral Control, the operator defines the operations of the robot by sending commands. Conversely, in Supervisory Control the robot works in full autonomy; if one of the layers fails, a failure message is sent to the operator, who can then take over that specific layer.
[Figure: layered control architecture. The operator exchanges commands and failure/status messages with the Exploration, Path planning, Navigation, and Motion layers.]
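The following minimal Python sketch illustrates the failure-and-override scheme; it is not our OpenRDK implementation, and all class and method names are purely illustrative.

from enum import Enum, auto

class Layer(Enum):
    EXPLORATION = auto()
    PATH_PLANNING = auto()
    NAVIGATION = auto()
    MOTION = auto()

class LayeredController:
    """Supervisory control: every layer runs autonomously until it fails,
    at which point the operator is notified and may take over only that
    layer, while the remaining layers keep running autonomously."""

    def __init__(self, notify_operator):
        self.notify_operator = notify_operator
        self.overridden = set()   # layers currently driven by the operator

    def report_failure(self, layer, reason):
        # A layer that cannot make progress sends a failure/status message.
        self.notify_operator(layer.name + " failed: " + reason)

    def operator_override(self, layer):
        # Behavioral control of a single layer: commands now come from the GUI.
        self.overridden.add(layer)

    def release(self, layer):
        self.overridden.discard(layer)

    def is_autonomous(self, layer):
        return layer not in self.overridden

# Example: path planning gets stuck; the operator takes over that layer only.
ctrl = LayeredController(notify_operator=print)
ctrl.report_failure(Layer.PATH_PLANNING, "no feasible path to target")
ctrl.operator_override(Layer.PATH_PLANNING)
assert ctrl.is_autonomous(Layer.NAVIGATION)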
3 Human-Robot Interface
In our research we have concentrated our efforts on developing a new HRI system able to manage the multi-robot, multi-user paradigm, and therefore to improve both the speed and quality of the operator's problem-solving performance and to improve efficiency by reducing the need for supervision. Our desktop interface is organized into the displays described below.
In the Complex Display (Figure 3(a)) there are two main panels:
Navigation Panel. The navigation panel consists of three displays: a Local View of the Map, a Global View of the Map giving a bird's eye view of the zone, and a pseudo-3D View giving a first-person view. The robot is located within the map by a rectangle symbol containing a solid triangle that indicates its heading. The 3D Viewer gives an egocentric perspective of the scenario by simply elevating the obstacles into 3D images.
Autonomy Levels Panel. It allows the operator to switch among four control modes: tele-operation, safe tele-operation, shared control, and autonomy. In the safe tele-operation mode the system limits the speed to prevent the robot from colliding with obstacles. In the shared control mode the operator sets a target point by clicking directly on the map, and the robot tries to reach it autonomously.
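As an example of how safe tele-operation can limit the speed, the Python sketch below scales the operator's speed command by the distance to the closest laser reading; the thresholds and the linear policy are illustrative assumptions, not our actual parameters.

from enum import Enum, auto

class ControlMode(Enum):
    TELEOP = auto()
    SAFE_TELEOP = auto()
    SHARED = auto()       # operator clicks a target point on the map
    AUTONOMY = auto()

def safe_speed(cmd_speed, laser_ranges, stop_dist=0.3, slow_dist=1.0):
    """Scale the commanded speed by the nearest obstacle distance: full speed
    beyond slow_dist, zero inside stop_dist, and linear in between."""
    d = min(laser_ranges)
    if d <= stop_dist:
        return 0.0
    if d >= slow_dist:
        return cmd_speed
    return cmd_speed * (d - stop_dist) / (slow_dist - stop_dist)

# The operator pushes 0.6 m/s; the closest reading is 0.65 m, so the
# command is reduced to roughly 0.3 m/s.
print(safe_speed(0.6, [2.1, 0.65, 1.8]))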
The Team View Display (Figure 3(b)) includes a comprehensive view of all the robots, providing an aerial point of view. This view allows the operator to supervise the team operations and send commands to each individual robot, and can be zoomed in and out.
3.3 Pseudo 3D Display
The pseudo 3D display is equivalent to the one shown in the Complex Display. It is especially useful when there are large errors in the computed map. In that case the video feedback becomes the main source of information, while the maps are of little use. This view shows the video retrieved from the camera together with the obstacles read by the laser range finder.
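One way to implement such a view is to project each laser endpoint into the camera image and draw a virtual "wall" of fixed height over the video. The Python sketch below does this with a pinhole camera model; the intrinsic parameters, the wall height, and the assumption that laser and camera share the same origin and heading are simplifications for illustration.

import math

def laser_to_image_columns(ranges, angle_min, angle_inc,
                           fx=525.0, cx=320.0, fy=525.0, cy=240.0,
                           wall_height=1.0, cam_height=0.5):
    """For each laser beam, return the (u, v_top, v_bottom) pixel span of a
    virtual vertical wall placed at the measured endpoint."""
    columns = []
    for i, r in enumerate(ranges):
        a = angle_min + i * angle_inc
        x, y = r * math.cos(a), r * math.sin(a)   # forward, left (metres)
        if x <= 0.1:                               # behind or too close to project
            continue
        u = cx - fx * (y / x)                      # image column of the endpoint
        v_bottom = cy + fy * (cam_height / x)      # where the wall meets the ground
        v_top = cy - fy * ((wall_height - cam_height) / x)
        columns.append((u, v_top, v_bottom))
    return columns

# Example: a single beam measuring 2 m straight ahead.
print(laser_to_image_columns([2.0], angle_min=0.0, angle_inc=0.0))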
6 Innovations
This year two main innovations will be implemented: 3D Mapping and Commu-
nications Management.
6.1 3D Mapping
The 3D mapping strategy that we plan to use for the competition is a feature-based maximum probability algorithm. The segmentation process is carried out employing a combination of computer vision techniques that offer remarkable advantages [4]. Our idea is to create a 2D projection of the 3D map so that holes, stairs, and any other obstacles, detected on the ground or above it, that may interfere with robot navigation can be avoided. Semantic information about these objects may also be added to the final competition report. To collect the 3D data, the UPM group has a P3AT robotic platform equipped with a SICK LMS200 laser range scanner mounted on top of a pan-tilt unit (Figure 4(a)). Our virtual robot will instead use a horizontally mounted SICK laser to perform 2D SLAM and an additional nodding Hokuyo laser on top of a pan-tilt wrist to obtain the 3D data. Figure 4(b) shows an example of a segmented 3D point cloud.
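The Python sketch below illustrates the 2D projection idea on a point cloud: a grid cell is marked as an obstacle when it contains points in the robot's height band (too high to drive over, too low to pass under) or points clearly below the ground (holes). The thresholds and grid parameters are illustrative assumptions; our actual feature-based algorithm [4] is not reproduced here.

import numpy as np

def project_cloud_to_grid(points, resolution=0.1, size=20.0,
                          robot_height=0.6, step_height=0.15, hole_depth=-0.1):
    """Project a 3D point cloud (N x 3 array, metres, z = 0 at ground level)
    onto a 2D occupancy grid for navigation (0 = free, 1 = obstacle)."""
    cells = int(size / resolution)
    grid = np.zeros((cells, cells), dtype=np.uint8)
    half = size / 2.0
    for x, y, z in points:
        i, j = int((x + half) / resolution), int((y + half) / resolution)
        if not (0 <= i < cells and 0 <= j < cells):
            continue
        blocks_robot = step_height < z < robot_height   # step, wall or overhang
        is_hole = z < hole_depth                         # ground falls away
        if blocks_robot or is_hole:
            grid[i, j] = 1
    return grid

# Example: a step at 0.4 m and a hole at -0.3 m both become obstacles,
# while a 5 cm bump remains traversable.
cloud = np.array([[1.0, 0.0, 0.4], [2.0, 1.0, -0.3], [3.0, 0.0, 0.05]])
print(project_cloud_to_grid(cloud).sum())   # 2 obstacle cells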
8 Conclusion
Among the future tasks we are considering, we are focusing on integrating communications management into the operator GUI. We are also working on a full 3D localization and mapping system, so that irregular terrain can be mapped more precisely. As for the UAVs, we have been analysing scenarios where the UGVs carry the full equipment for victim recognition, while the aerial vehicle carries only a partial one. Based on last year's experience, the interface has been improved, providing a better integration of the operator with the robot software. One main objective for future work is to improve our map merging subsystem by using a partially distributed algorithm rather than a centralized one.
References
1. D. Calisi, A. Censi, L. Iocchi, and D. Nardi. OpenRDK: a modular framework for robotic software development. In Proc. of the Int. Conf. on Intelligent Robots and Systems (IROS), pages 1872–1877, September 2008.
2. D. Calisi, A. Farinelli, L. Iocchi, and D. Nardi. Autonomous navigation and exploration in a rescue environment. In Proc. of the IEEE International Workshop on Safety, Security and Rescue Robotics (SSRR), pages 54–59, Kobe, Japan, June 2005. ISBN: 0-7803-8946-8.
3. D. Calisi, A. Farinelli, L. Iocchi, and D. Nardi. Multi-objective exploration and search for autonomous rescue robots. Journal of Field Robotics, Special Issue on Quantitative Performance Evaluation of Robotic and Intelligent Systems, 24:763–777, August–September 2007.
4. P. de la Puente, D. Rodríguez-Losada, R. López, and F. Matía. Extraction of geometrical features in 3D environments for service robotic applications. In Hybrid Artificial Intelligence Systems, Third International Workshop, HAIS 2008, Burgos, Spain, pages 441–450, 2008.