Danilo Navarro
Universidad de Oriente
Departamento de Ingeniería Eléctrica
Barcelona - Venezuela
dnavarro@cantv.net
Gines Benet & Milagros Martínez
Universidad Politecnica de Valencia
Departamento de Informática, Sistemas y Computadores
Valencia - España
gbenet,mimar@disca.upv.es
x cos α + y sin α − ρ = 0   (1)

In the model z = [α, ρ]^T, which is called the Hessian line model, ρ is always positive. However, there is an alternative z = [α + π, −ρ]^T that results in the same parametric equation except that ρ reverses its sign. This is because a line may be detected from either of its sides (figure 1).

Usually, the whole set of features describing the environment is stored in a map representation related to the world coordinate frame, but in localization processes, because of feature matching, the features need to be transformed to the robot frame. In [1] and [9] there is an extended development of this topic, so we only reproduce the key relations for the infinite-line frame transformation.

Given a line zW = [αW, ρW]^T in the world coordinate frame {W}, a robot at pose xW = [x, y, θ]^T with coordinate system {R}, and a sensor with attached coordinate system {S}, then if we let the robot and sensor frames coincide, i.e. {S} = {R}, the transformation equations zR = [αR, ρR]^T = h(zW, xW) are

zR = [αW − θ, ρW − x cos αW − y sin αW]^T   (2)

and the two Jacobians for this transformation are given by

JzW = ∂h(zW, xW)/∂zW   (3)

JxW = ∂h(zW, xW)/∂xW   (4)

If CzW is the symmetric covariance matrix of the line parameters, then the expression for the line covariance in the robot frame is

CzR = JzW CzW JzW^T + JxW P JxW^T   (5)

where P is the robot localization error covariance matrix.

The feature classification relies on the following model of the received echo amplitude:

A = (A0 Cr^N / 2r) e^(−2α1 r) e^(−(4θ/θ0)²)   (6)

where A is the echo peak amplitude obtained in the ultrasonic receiver, A0 is a constant for the transducer, α1 is the air attenuation coefficient, r is the distance between the transducer pair and the reflecting surface, and Cr is the feature reflection coefficient, which ranges between 0 and 1. Also, N is a parameter that depends on the reflector's shape. It can take two values: 1 for the wall case and 2 for the corner case. This stands for the number of echo reflections before reaching the receiver.

On the other hand, we know that a maximum-amplitude echo arrives when the ultrasonic beam strikes a reflector surface perpendicularly. Thus, with θ = 0, we can solve for N as

N = [ln(2Ar/A0) + 2α1 r] / ln Cr   (7)

A result of N = 1 means the target feature is a wall; N = 2 means the target feature is a corner. Experimental data has shown an algorithm effectiveness between 80% and 90% for wall extraction, and about 80% for corner extraction.

2.3 Sensor error modelling

Our sonar provides geometric and qualitative information about the environment around it. Thus, in the corner case the sensor provides a feature described as z = [x, y, id]^T, while in the wall case the sensor provides a feature description like z = [ϕ, r, id]^T. The x and y variables stand for the Cartesian coordinates related to the sensor frame, whereas the variables ϕ and r stand for the azimuth angle and radial distance in a polar coordinate system related to the sensor frame. The id variable contains an identifier that classifies the feature as a corner or as a wall.
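As an illustrative sketch (not the authors' implementation), equations (2)-(5) can be coded in a few lines of NumPy. The function names are ours, and the closed-form Jacobians below result from differentiating equation (2) by hand.

```python
import numpy as np

def line_to_robot_frame(z_w, x_w):
    """Transform a line z_w = [alpha_w, rho_w] from the world frame
    to the robot frame, for a robot pose x_w = [x, y, theta] (eq. 2)."""
    alpha_w, rho_w = z_w
    x, y, theta = x_w
    return np.array([alpha_w - theta,
                     rho_w - x * np.cos(alpha_w) - y * np.sin(alpha_w)])

def jacobians(z_w, x_w):
    """Jacobians of the transformation w.r.t. the line parameters (eq. 3)
    and w.r.t. the robot pose (eq. 4), obtained by differentiating eq. 2."""
    alpha_w, _ = z_w
    x, y, _ = x_w
    J_z = np.array([[1.0, 0.0],
                    [x * np.sin(alpha_w) - y * np.cos(alpha_w), 1.0]])
    J_x = np.array([[0.0, 0.0, -1.0],
                    [-np.cos(alpha_w), -np.sin(alpha_w), 0.0]])
    return J_z, J_x

def line_covariance_in_robot_frame(z_w, x_w, C_z, P):
    """Propagate the line covariance C_z and the robot pose covariance P
    into the robot frame (eq. 5)."""
    J_z, J_x = jacobians(z_w, x_w)
    return J_z @ C_z @ J_z.T + J_x @ P @ J_x.T
```

Note that the result of equation (5) stays symmetric by construction, as a covariance matrix must.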
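The wall/corner discrimination of equations (6)-(7) can likewise be sketched. This is a hypothetical illustration: the function names are ours, and the rounding rule that maps the recovered N to a label is our assumption (note that ln Cr is negative for Cr < 1, which the division handles naturally).

```python
import math

def reflections_exponent(A, r, A0, alpha1, Cr):
    """Solve the sonar amplitude model of eq. (6) for N at normal
    incidence (theta = 0), as in eq. (7)."""
    return (math.log(2.0 * A * r / A0) + 2.0 * alpha1 * r) / math.log(Cr)

def classify(A, r, A0, alpha1, Cr):
    """Label the reflector: N near 1 -> wall, N near 2 -> corner.
    Rounding as the decision rule is an assumption of this sketch."""
    N = reflections_exponent(A, r, A0, alpha1, Cr)
    return 'wall' if round(N) <= 1 else 'corner'
```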
In this paper we treat only the line feature, so we will develop the error covariance matrix model only for this kind of feature. Let Ξ be a set of measurements such as

Ξ = [{ϕ1, ϕ2, . . . , ϕn}, {r1, r2, . . . , rn}]^T   (8)

If we assume Ξ is a random variable with a Gaussian distribution, then we can represent it with its mean and variance according to

Ξ = N([ϕ̂, r̂]^T, R)   (9)

where ϕ̂ and r̂ are the expected values for the heading and magnitude of the vector that extends from the sensor frame origin to the detected line, and R is the error covariance matrix given as

R = [σϕ²  σϕr; σϕr  σr²]   (10)

The values for σϕ² and σr² have been obtained off-line by maximum likelihood estimation over the experimental data collected during a robot exploration. As we expected, the variables are uncorrelated, so we can model the sensor error covariance as

R = [3(π/180) rad  0; 0  0.03 m]   (11)

2.4 Line Based EKF localization

We assume there is a perfectly known map ZW = {zj} with the set of features describing an indoor environment. Thus, given a set of possible features, the Kalman filter is used to fuse the distance estimated from each feature to a matching object in the map. Given an initial robot state xk and its error covariance matrix Pk, the localization process can be described with the following steps:

1. State prediction: The state x̂(k + 1/k) and its covariance P̂(k + 1/k) are determined from the odometric robot motion model [7].

2. Observation: At the current pose, the robot queries its sensors and a set of features ZR = {zi(k + 1)} is extracted.

3. Measurement prediction: Based on the predicted position on the map, the robot generates a measurement prediction ẐR = {ẑj(k + 1)}, which identifies the features that the robot expects to find and the positions of those features related to the sensor frame.

4. Feature matching: Here the robot identifies the best pairing between the features ZR actually extracted during observation and the expected features ẐR from the measurement prediction. This process is named data association: the problem of finding out which observed feature zi(k + 1) belongs to which predicted feature ẑj(k + 1). A widely used method is validation gates.

5. State correction: The Kalman filter computes the best estimate of the robot's position x̂(k + 1/k + 1) based on the position prediction x̂(k + 1/k) and on the vector of innovations γ(k + 1) = zi(k + 1) − ẑj(k + 1). The straightforward formulation can be found in [8].

3 Experiments

The experiments were conducted using the experimental platform YAIR (Yet Another Intelligent Robot). YAIR is a multi-sensor prototype of a differential-drive autonomous robot that was developed in our laboratory as a test bed for the experimental study of reactive systems, sensor fusion, and distributed computing. It has sufficient sensors to handle partially structured or unknown environments, and can move indoors with two independent DC motors driving the robot. YAIR's multi-sensor architecture comprises a rotating sonar, an infrared-based distance sensor, an electronic compass, an odometric system based on optical encoders rotating synchronously with the drive wheels, and an odometry system based on a pair of unloaded independent encoder wheels made as sharp-edged as possible to reduce wheelbase uncertainty [3]. In order to demonstrate the usefulness of the ultrasonic rotary sensor in the robot localization process, we set the robot to move through a long corridor, as depicted in figure 2. To complete a run that is about 150 meters long, the robot went twice through 51 steps in a stop-and-go mode. The actual robot position was hand-measured at each step, and the robot recorded its estimated state for later error comparisons.

4 Results and discussion

Figure 2 shows the run that was performed without the line-based localization algorithm. For position estimation purposes, the robot used the odometric system only. As expected, during navigation the uncertainty grew without bound, especially in the direction transverse to the robot's movement. On the other hand, figure 3 shows the same test path as in figure 2 but with the line-based localization algorithm in action. This time the localization system was able to enhance precision and bound the uncertainty to an uncritical extent. Moreover, figure 3 shows that the estimated path closely follows the actual robot path, which proves that the robot remains localized at all times. In figure 3, detected lines are depicted in thick black. We can say that the corridor is a well-conditioned scenario in the sense that there are many line structures which allow for robot position updates and overall localization.

The comparison of the robot position accuracy at the end point using EKF line-based localization and odometric-only localization is shown in table 1. As we can see, the first one significantly reduces the error, which falls in the subcentimeter range.
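The five localization steps of section 2.4 can be condensed into one predict-match-correct routine. The sketch below handles a single matched feature per step; predict_state, h, and H_of are hypothetical placeholders standing in for the motion and measurement models of [7] and [8], and the chi-square gate threshold is our choice, not the paper's.

```python
import numpy as np

def ekf_localization_step(x_hat, P, u, Q, z_obs, R, predict_state, h, H_of):
    """One stop-and-go localization cycle following steps 1-5 above."""
    # 1. State prediction from the odometric motion model
    x_pred, P_pred = predict_state(x_hat, P, u, Q)
    # 2-3. Observation z_obs vs. measurement prediction z_hat
    z_hat = h(x_pred)
    H = H_of(x_pred)
    # 4. Feature matching by a validation gate on the innovation
    gamma = z_obs - z_hat                   # innovation vector
    S = H @ P_pred @ H.T + R                # innovation covariance
    d2 = gamma @ np.linalg.solve(S, gamma)  # squared Mahalanobis distance
    if d2 > 9.21:                           # chi-square gate, 2 dof, 99% (assumed)
        return x_pred, P_pred               # no match: keep the prediction
    # 5. State correction with the Kalman gain
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ gamma
    P_new = (np.eye(len(x_hat)) - K @ H) @ P_pred
    return x_new, P_new
```

In the real system the gate of step 4 is evaluated against every predicted feature ẑj, and the best-scoring pair is kept; the single-feature version above only illustrates the flow of one cycle.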
Figure 2. Corridor layout and experimental path