
Line Based Robot Localization Using a Rotary Sonar

Danilo Navarro
Universidad de Oriente
Departamento de Ingeniería Eléctrica
Barcelona - Venezuela
dnavarro@cantv.net
Ginés Benet & Milagros Martínez
Universidad Politécnica de Valencia
Departamento de Informática, Sistemas y Computadores
Valencia - España
{gbenet, mimar}@disca.upv.es

Abstract

Ultrasound sensors, despite their low cost, are less common than other sensors used for robot localization. This is because of their main drawback: the lack of reliable bearing information. We introduce a rotary ultrasonic sensor able to extract features in a 360° scan and classify them as walls or corners. For robot localization purposes, these features are matched against a set of line features stored in an a priori known map. The localization process is carried out through an Extended Kalman Filter that fuses the data provided by a dead-reckoning system with the measurement data provided by the rotary sonar. This paper also presents the techniques we use to classify the detected features. The experimental tests, carried out in a long corridor, show the usefulness of the rotary sensor for feature extraction and for the robot localization process. The results show that the localization error is in the subcentimeter range.

1 Introduction
Two different kinds of localization exist: relative and absolute. The first one, known as dead-reckoning, is carried out using measurements provided by a sensor that measures the dynamics of the robot's internal variables. On its own this technique is impractical, since systematic and non-systematic measurement errors grow without bound over time [4]. Thus, in order to stay localized, the robot has to reference itself externally using exteroceptive sensors such as ultrasound sensors, laser range scanners, or cameras.

A current trend in robot localization is based on feature maps that include lines and corners as landmarks. There are many line-based approaches to localization, but they usually need precise feature extraction methods and sensors providing dense raw data [5, 10]. Nowadays, line segments extracted from laser range data are clearly the most widely employed feature in localization processes [9, 1]. On the other hand, localization based on line features extracted from sonar data is less common, because the lack of reliable bearing information makes for difficult data association, accurate feature localization, and classification [6, 11]. Previous works that use sonar for robot localization and target classification required advanced sensors that include several transducers, or special robot motions [7].

In this paper, the data provided by an odometric sensor and a rotary sonar are combined through an Extended Kalman Filter (EKF) to estimate the robot position. We propose the use of a cheap rotary sonar able to extract features in a single 360° scan and classify them as a wall or a corner [2]. This sensor and the technique used to extract features with it are introduced in Section 2.2. In Section 2.3 we develop the stochastic measurement model for feature extraction, which is required in the filtering process. The remaining methodological sections describe the EKF localization algorithm. The experimental tests performed and the results are discussed in Sections 3 and 4, respectively.

2 Methods

2.1 Line-based environment representation

Line-based maps are well suited for indoor applications, or structured outdoor applications, where straight-edged objects form many of the environmental features. An infinite line is a simple feature modelled as z = [α, ρ]^T, where α and ρ represent the heading and the magnitude of the vector that extends from the origin to the line and is perpendicular to it (figure 1). Thus α defines the orientation of the line z, while ρ defines its normal distance to the world coordinate frame origin.

The parametric model for infinite lines is
    x cos α + y sin α − ρ = 0    (1)

Figure 1. Examples of infinite line feature definition

The model z = [α, ρ]^T is called the Hessian line model, and in it ρ is always positive. However, there is an alternative parameterization, z = [α + π, −ρ]^T, that results in the same parametric equation except that ρ reverses its sign. This is because a line may be detected from either of its two sides (figure 1).
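To make the sign ambiguity concrete, the following is a minimal Python helper (our illustration, not code from the original paper) that collapses the alternative parameterization back onto the Hessian form:

```python
import numpy as np

def normalize_line(alpha, rho):
    """Collapse the two equivalent parameterizations [alpha, rho] and
    [alpha + pi, -rho] onto the Hessian form with rho >= 0 and
    alpha wrapped into [-pi, pi)."""
    if rho < 0.0:
        alpha, rho = alpha + np.pi, -rho
    alpha = (alpha + np.pi) % (2.0 * np.pi) - np.pi  # wrap to [-pi, pi)
    return alpha, rho
```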
Usually, the whole set of features describing the environment is stored in a map representation related to the world coordinate frame; but in localization processes, because of feature matching, the features need to be transformed to the robot frame. In [1] and [9] there is an extended development of this topic, so we only reproduce the key relations for the infinite line frame transformation.

Given a line z^W = [α^W, ρ^W]^T in the world coordinate frame {W}, a robot at pose x^W = [x, y, θ]^T with coordinate system {R}, and a sensor with attached coordinate system {S}, then if we let the robot and sensor frames coincide, i.e. {S} = {R}, the transformation equations z^R = [α^R, ρ^R]^T = h(z^W, x^W) are

    z^R = [ α^W − θ ,  ρ^W − x cos α^W − y sin α^W ]^T    (2)

and the two Jacobians for this transformation are given by

    J_{z^W} = ∂h(z^W, x^W) / ∂z^W    (3)

    J_{x^W} = ∂h(z^W, x^W) / ∂x^W    (4)

If C^W_z is the symmetric covariance matrix of the line parameters, then the expression for the line covariance in the robot frame {R}, using first-order error propagation, is

    C^R_z = J_{z^W} C^W_z J_{z^W}^T + J_{x^W} P J_{x^W}^T    (5)

where P is the robot localization error covariance matrix.
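As an illustration of equations (2)–(5), the following minimal NumPy sketch performs the frame transformation and the first-order covariance propagation. The closed-form Jacobians are our reconstruction, obtained by differentiating h analytically; they are not code from the paper:

```python
import numpy as np

def line_world_to_robot(z_w, x_w, Cz_w, P):
    """Transform an infinite line z^W = [alpha, rho] into the robot
    frame and propagate its covariance, following eqs. (2)-(5).

    z_w  : line parameters [alpha_w, rho_w] in the world frame {W}
    x_w  : robot pose [x, y, theta] in {W}
    Cz_w : 2x2 covariance of the line parameters
    P    : 3x3 robot pose covariance
    """
    alpha_w, rho_w = z_w
    x, y, theta = x_w

    # Eq. (2): line parameters expressed in the robot frame {R}
    z_r = np.array([alpha_w - theta,
                    rho_w - x * np.cos(alpha_w) - y * np.sin(alpha_w)])

    # Eq. (3): Jacobian of h with respect to the line parameters z^W
    Jz = np.array([[1.0, 0.0],
                   [x * np.sin(alpha_w) - y * np.cos(alpha_w), 1.0]])

    # Eq. (4): Jacobian of h with respect to the robot pose x^W
    Jx = np.array([[0.0, 0.0, -1.0],
                   [-np.cos(alpha_w), -np.sin(alpha_w), 0.0]])

    # Eq. (5): first-order error propagation
    Cz_r = Jz @ Cz_w @ Jz.T + Jx @ P @ Jx.T
    return z_r, Cz_r
```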
2.2 Feature extraction

We use a rotary ultrasonic sensor able to distinguish between walls and corners based on the time of flight and the amplitude of the ultrasonic echo. The sensor has a Tx/Rx transducer array that rotates, driven by a 1.8-degree-step stepper motor. At each angular position, the emitter sends a train of 16 ultrasonic pulses. Soon afterwards, the receiver recovers the echo amplitude information by means of a coherent demodulation process. Next, the time of flight and relative magnitude of each echo are extracted.

The strategy to extract features from sonar data is based on the amplitude of the echoes received from the environment [2]. The echo amplitude model is

    A = A_0 C_r^N (e^{−2α_1 r} / 2r) e^{−(4θ/θ_0)²}    (6)

where A is the echo peak amplitude obtained at the ultrasonic receiver, A_0 is a constant for the transducer, α_1 is the air attenuation coefficient, r is the distance between the transducer pair and the reflecting surface, and C_r is the feature reflection coefficient, which ranges between 0 and 1. Also, N is a parameter that depends on the reflector's shape. It can take two values: 1 for the wall case and 2 for the corner case, standing for the number of echo reflections before the signal reaches the receiver.

On the other hand, we know that a maximum-amplitude echo arrives when the ultrasonic beam strikes a reflector surface perpendicularly. Thus, with θ = 0, we can solve for N as

    N = (ln(2Ar/A_0) + 2α_1 r) / ln C_r    (7)

A result of N = 1 means the target feature is a wall, whereas N = 2 means the target feature is a corner. Experimental data has shown an algorithm effectiveness between 80% and 90% for wall extraction, and about 80% for corner extraction.
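For illustration, a minimal sketch of the wall/corner decision implied by equation (7) follows. The constants A0, ALPHA1, and CR, as well as the nearest-value decision rule, are assumptions for the example, not calibration values from the paper:

```python
import numpy as np

# Transducer-dependent constants: the values below are illustrative
# placeholders, not the calibration of the actual sensor.
A0 = 2000.0      # transducer constant A_0 (same units as echo amplitude)
ALPHA1 = 0.038   # air attenuation coefficient alpha_1 (Np/m), assumed
CR = 0.6         # reflection coefficient C_r of the surface, assumed

def classify_reflector(A, r):
    """Solve eq. (7) for N from the peak amplitude A measured at range r
    with the beam perpendicular to the reflector (theta = 0), and label
    the feature as a wall (N near 1) or a corner (N near 2)."""
    N = (np.log(2.0 * A * r / A0) + 2.0 * ALPHA1 * r) / np.log(CR)
    label = "wall" if abs(N - 1.0) < abs(N - 2.0) else "corner"
    return label, N
```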
2.3 Sensor error modelling

Our sonar provides geometric and qualitative information about the environment around it. In the corner case the sensor provides a feature described as z = [x, y, id]^T, while in the wall case it provides a feature description of the form z = [ϕ, r, id]^T. The variables x and y stand for the Cartesian coordinates relative to the sensor frame, whereas ϕ and r stand for the azimuth angle and radial distance in a polar coordinate system relative to the sensor frame. The id variable contains an identifier that classifies the feature as a corner or as a wall.
In this paper we treat only the line feature, so we will develop the error covariance matrix model only for this kind of feature. Let Ξ be a measurement set such that

    Ξ = [{ϕ_1, ϕ_2, …, ϕ_n}, {r_1, r_2, …, r_n}]^T    (8)

If we assume Ξ is a random variable with a Gaussian distribution, then we can represent it by its mean and variance according to

    Ξ ∼ N([ϕ̂, r̂]^T, R)    (9)

where ϕ̂ and r̂ are the expected values for the heading and magnitude of the vector that extends from the sensor frame origin to the detected line, and R is the error covariance matrix given by

    R = | σ_ϕ²   σ_ϕr |
        | σ_ϕr   σ_r² |    (10)

The values for σ_ϕ² and σ_r² have been obtained off-line by maximum likelihood estimation over the experimental data collected during a robot exploration. As we expected, the variables are uncorrelated, so we can model the sensor error covariance as

    R = | 3°(π/180)   0      |
        | 0           0.03 m |    (11)
2.4 Line-based EKF localization

We assume there is a perfectly known map Z^W = {z_j} containing the set of features that describes an indoor environment. Thus, given a set of possible features, the Kalman filter is used to fuse the distance estimated from each feature to a matching object in the map. Given an initial robot state x_k and its error covariance matrix P_k, the localization process can be described with the following steps:

1. State prediction: The state x̂(k+1|k) and its covariance P̂(k+1|k) are determined from the odometric robot motion model [7].

2. Observation: At the current pose, the robot queries its sensors and a set of features Z^R = {z_i(k+1)} is extracted.

3. Measurement prediction: Based on the predicted position on the map, the robot generates a measurement prediction Ẑ^R = {ẑ_j(k+1)}, which identifies the features that the robot expects to find and the positions of those features relative to the sensor frame.

4. Feature matching: Here the robot identifies the best pairing between the features Z^R actually extracted during observation and the expected features Ẑ^R from the measurement prediction. This process is named data association. It is the problem of finding out which observed feature z_i(k+1) belongs to which predicted feature ẑ_j(k+1). A widely used method is the validation gate (see the sketch after this list).

5. State correction: The Kalman filter computes the best estimate of the robot's position x̂(k+1|k+1) based on the position prediction x̂(k+1|k) and on the vector of innovations γ(k+1) = z_i(k+1) − ẑ_j(k+1). The straightforward formulation can be found in [8].
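The observation-to-correction cycle (steps 3–5) can be summarized in a short sketch. The following is our illustration, not the authors' implementation: h and its Jacobians come from equations (2)–(5), while the chi-square gate value and the nearest-neighbour selection rule are assumptions:

```python
import numpy as np

CHI2_GATE = 5.99  # 95% gate for a 2-dof innovation; threshold is our choice

def ekf_line_update(x_pred, P_pred, z_obs, R, map_lines, map_covs):
    """One match/correct cycle of the line-based EKF for a single
    observed line z_obs = [phi, r] against a map of world-frame lines."""
    x, y, theta = x_pred
    best = None
    for z_w, Cz_w in zip(map_lines, map_covs):
        alpha_w, rho_w = z_w
        # Step 3: measurement prediction from eq. (2)
        z_hat = np.array([alpha_w - theta,
                          rho_w - x * np.cos(alpha_w) - y * np.sin(alpha_w)])
        # Jacobians from eqs. (3) and (4)
        Jz = np.array([[1.0, 0.0],
                       [x * np.sin(alpha_w) - y * np.cos(alpha_w), 1.0]])
        H = np.array([[0.0, 0.0, -1.0],
                      [-np.cos(alpha_w), -np.sin(alpha_w), 0.0]])
        # Innovation and its covariance (eq. (5) plus sensor noise R)
        gamma = z_obs - z_hat
        gamma[0] = (gamma[0] + np.pi) % (2 * np.pi) - np.pi  # wrap bearing
        S = H @ P_pred @ H.T + Jz @ Cz_w @ Jz.T + R
        # Step 4: validation gate on the Mahalanobis distance
        d2 = float(gamma @ np.linalg.solve(S, gamma))
        if d2 < CHI2_GATE and (best is None or d2 < best[0]):
            best = (d2, gamma, H, S)
    if best is None:
        return x_pred, P_pred          # no feature passed the gate
    _, gamma, H, S = best
    # Step 5: Kalman correction
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ gamma
    P_new = (np.eye(3) - K @ H) @ P_pred
    return x_new, P_new
```

The gate rejects observations whose innovation is statistically implausible under the predicted covariance, which is what keeps an occasional spurious echo from corrupting the pose estimate.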
3 Experiments

The experiments were conducted using the experimental platform YAIR (Yet Another Intelligent Robot). YAIR is a multi-sensor prototype of a differential-drive autonomous robot that was developed in our laboratory as a test bed for the experimental study of reactive systems, sensor fusion, and distributed computing. It has sufficient sensors to handle partially structured or unknown environments, and can move indoors with two independent DC motors driving the robot. YAIR's multi-sensor architecture comprises a rotating sonar, an infrared-based distance sensor, an electronic compass, an odometric system based on optical encoders rotating synchronously with the drive wheels, as well as an odometry system based on a pair of unloaded independent encoder wheels made as sharp-edged as possible to reduce wheelbase uncertainty [3]. In order to demonstrate the usefulness of the ultrasonic rotary sensor in the robot localization process, we made the robot move through a long corridor, as depicted in figure 2. To complete a run of about 150 meters, the robot went twice through 51 steps in a stop-and-go mode. The actual robot localization was hand-measured at each step, and the robot recorded the estimated state for later error comparisons.

4 Results and discussion

Figure 2 shows the run that was performed without the line-based localization algorithm. For position estimation purposes, the robot used the odometric system only. As expected, during navigation the uncertainty grew without bound, especially in the direction transverse to the robot's movement. On the other hand, figure 3 shows the same test path as figure 2 but with the line-based localization algorithm in action. This time the localization system was able to enhance precision and bound the uncertainty to an uncritical extent. Moreover, figure 3 shows that the estimated path closely follows the actual robot path, which indicates that the robot remained localized at all times. In figure 3, detected lines are depicted in thick black. We can say that the corridor is a well-conditioned scenario, in the sense that it contains many line structures that allow for robot position updates and overall localization.

The comparison of the robot position accuracy at the end point using EKF line-based localization and odometric-only localization is shown in table 1. As we can see, the first one significantly reduces the error, which was in the subcentimeter range.
Figure 2. Corridor layout and experimental path

Figure 3. Position correction using line-based EKF

          Odometric-only    Line-based EKF
    ε_x   0.631 m           0.035 m
    ε_y   2.606 m           0.053 m
    ε_θ   14.06°            0.85°

Table 1. Comparison of the robot position accuracy at the end point

5 Conclusions and future work

In general, it was shown that line-based localization allows for very precise localization and a high degree of practicality. However, dead-reckoning localization must be kept as accurate as possible, because when the odometric error exceeds expectations the data association can fail. Furthermore, the world-to-robot transformation depends on the robot state, so inaccurate and uncertain position estimates lead to false matches and incorrect pose updates. In order to implement a simultaneous localization and map building (SLAM) process, in future works we will focus on developing techniques for accurate map construction from sonar data.

References

[1] K. O. Arras. Feature-Based Robot Navigation in Known and Unknown Environments. PhD thesis, École Polytechnique Fédérale de Lausanne, Switzerland, 2003.
[2] G. Benet, M. Martínez, F. Blanes, P. Pérez, and J. Simó. Differentiating walls from corners using the amplitude of ultrasonic echoes. Robotics and Autonomous Systems, 50:13–25, 2005.
[3] F. Blanes. Percepción y representación del entorno en robótica móvil. PhD thesis, Universidad Politécnica de Valencia, Valencia, España, 2000.
[4] J. Borenstein and L. Feng. Measurement and correction of systematic odometry errors in mobile robots. IEEE Transactions on Robotics and Automation, 12:869–880, 1996.
[5] G. Borges and M. Aldon. A split-and-merge segmentation algorithm for line extraction in 2D range images. In 15th International Conference on Pattern Recognition, Barcelona, Spain, 2000.
[6] L. Kleeman. Advanced sonar and odometry error modeling for simultaneous localisation and map building. In IEEE/RSJ International Conference on Intelligent Robots and Systems, volume 1, pages 699–704, Las Vegas, Oct 2003.
[7] J. Leonard and H. Durrant-Whyte. Mobile robot localization by tracking geometric beacons. IEEE Transactions on Robotics and Automation, 7(3):376–382, June 1991.
[8] P. S. Maybeck. Stochastic Models, Estimation, and Control, volume 141. Academic Press, 1979.
[9] S. Pfister. Algorithms for Mobile Robot Localization and Mapping, Incorporating Detailed Noise Modeling and Multi-Scale Feature Extraction. PhD thesis, California Institute of Technology, Pasadena, California, 2006.
[10] S. Roumeliotis and G. Bekey. SEGMENTS: A layered, dual Kalman filter algorithm for indoor feature extraction. In IEEE International Conference on Robotics and Automation, pages 454–461, Takamatsu, Japan, 2000.
[11] O. Wijk and H. Christensen. Triangulation-based fusion of sonar data with application in robot pose tracking. IEEE Transactions on Robotics and Automation, 16(6):740–752, Dec 2000.
