Implementation of Tracking and Capturing a Moving Object using a Mobile Robot

Sang-joo Kim, Jin-woo Park, and Jang-Myung Lee

Abstract: A new scheme for a mobile robot to track and capture a moving object using camera
images is proposed. The moving object is assumed to be a point-object and is projected onto an
image plane to form a geometrical constraint equation that provides the position data of the
object based on the kinematics of the active camera. Uncertainties in position estimation caused
by the point-object assumption are compensated for using the Kalman filter. To generate the
shortest time path to capture the moving object, the linear and angular velocities are estimated
and utilized. In this paper, experimental results of the mobile robot tracking and capturing a target object are presented.

Keywords: Mobile robot, Kalman filter, tracking & capturing, active camera.

1. INTRODUCTION

Mobile robots have many fields of application because
of their high workability [1-6]. They are especially
necessary for tasks that are difficult and dangerous for
humans to perform [22]. Many researchers have taken an
interest in mobile robots, and most have focused on
successful navigation [18-21], that is, on reaching a
fixed target point safely [7,8,10,12].
However, if a mobile robot is working under water or
in space, the target object may move freely [11,14,22].
Therefore, the ability of a mobile robot to handle
moving targets is necessary. If an active camera
system is applied to navigation and the tracking of
moving objects, there will be many advantages
[18,20]. An active camera system capable of panning
and tilting should be able to automatically calibrate
itself and keep track of an object of interest for a
longer time interval without movement of the mobile
robot [1]. There are several approaches [13,15-17]
that can be used to overcome the uncertainties of
measuring the locations of the mobile robot or other
objects.
In this paper, the position of an object, which is
assumed to be flat, small, and lying on the floor, is
estimated using the kinematics of an active camera and
images of the object. The object's linear and angular
velocities are then estimated so that the mobile robot
can predict the future trajectory of the object and
plan the shortest-time path on which to track and
capture it. As a simple example, in a pick-and-place
operation with a manipulator, precise motion estimation
of an object on a conveyor belt is a critical factor
for stable grasping. A well-structured environment,
such as a moving jig that carries the object on the
conveyor belt and stops while the manipulator grasps
it, obviates the motion-estimation requirement.
However, a well-structured environment limits the
flexibility of a production line, requires skillful
jig designers, and incurs high maintenance expense;
eventually it will disappear from automated
production lines.
To overcome these problems, that is, to grasp a moving
object stably without stopping its motion, trajectory
prediction of the moving object on the conveyor belt
is necessary. The manipulator control system needs the
most accurate estimates of position, velocity, and
acceleration at every instant to capture the moving
object safely without collision and to lift it
stably without slippage. When the motion trajectory is
continuous and not highly random, it can be modeled
analytically to predict near-future values based on
previously measured data.
A state estimator based on a Kalman filter was designed
to overcome the uncertainties in the image data caused
by the point-object assumption and by physical noise.
Based on the estimated velocities of the object, the
attitude of the active camera was controlled to keep
the image of the object at the center of the image
frame.
In Section 2, we discuss how to establish a model of an
active camera. Section 3 deals with the problem of
trajectory estimation of a moving object, and Section 4
deals with the motion planning involved in capturing a
moving object. In Section 5, the advantages of our
proposed method are illustrated through simulation and
experimental results. Section 6 presents conclusions
drawn from this work.

__________
Manuscript received February 6, 2003; revised January 5,
2005; accepted June 17, 2005. Recommended by Editorial
Board member In So Kweon under the direction of Editor
Keum-Shik Hong.
Sang-joo Kim and Jang-Myung Lee are with the School of
Electrical Engineering, Pusan National University, San 30
Jangjeon-Dong, Kumjung-ku, Pusan 609-375, Korea (e-mails:
ksj_elec@hanmail.net, jmlee@pusan.ac.kr).
Jin-woo Park is with the Institute of Information
Technology Assessment (IITA), 52 Eoeun-dong, Yuseong-gu,
Daejeon 305-806, Korea (e-mail: jinu@iita.re.kr).
International Journal of Control, Automation, and Systems, vol. 3, no. 3, pp. 444-452, September 2005.

2. ACTIVE CAMERA SYSTEM

In this section, some equations regarding image
processing are derived considering the kinematics of
the actuators of the active camera.

2.1. Kinematics of the actuators of the active camera
system
The active camera system has the ability to pan and
tilt, as shown in Fig. 1. The position and posture of the
camera are defined with respect to the base frame.
According to the Denavit-Hartenberg convention, a
homogeneous matrix can be obtained after establishing
the coordinate system and its parameters, as shown in
Table 1 and (1).

Fig. 1. 2-D.O.F. camera platform (left) and its real image (right).

Table 1. DH link parameters.

Link   $\theta_i$          $d_i$   $a_i$   $\alpha_i$
1      $\theta_1$          $l_1$   0       $-90°$
2      $90°-\theta_2$      0       $l_2$   $0°$
3      $90°$               0       0       $90°$
4      $0$                 $l_3$   0       $0°$

$$ {}^{0}H_{4} = {}^{0}H_{1}\,{}^{1}H_{2}\,{}^{2}H_{3}\,{}^{3}H_{4} \qquad (1) $$

$$ {}^{0}H_{4} = \begin{bmatrix} \cos\theta\cos\phi & -\cos\theta\sin\phi & \sin\theta & l_{2}\sin\theta + l_{3}\cos\theta\cos\phi \\ \sin\phi & \cos\phi & 0 & l_{3}\sin\phi \\ -\sin\theta\cos\phi & \sin\theta\sin\phi & \cos\theta & l_{1} + l_{2}\cos\theta - l_{3}\sin\theta\cos\phi \\ 0 & 0 & 0 & 1 \end{bmatrix}, $$

where $\theta$ and $\phi$ are the tilting and panning angles of the camera defined below.

In Fig. 1, $C_c(x_{ccd}, y_{ccd}, z_{ccd})$ represents a position
vector from the center of the mobile robot to the
center of the camera lens. Each component of the
vector can be represented with respect to the tilting
angle, $\theta$, and the panning angle, $\phi$, of the CCD
camera, as follows:

$$ x_{ccd} = l_2\sin\theta + l_3\cos\theta\cos\phi, \qquad (2) $$
$$ y_{ccd} = l_3\sin\phi, \qquad (3) $$
$$ z_{ccd} = l_1 + l_2\cos\theta - l_3\sin\theta\cos\phi. \qquad (4) $$

Also, the attitude vector of the homogeneous matrix
represents the Roll ($\phi_R$), Pitch ($\phi_P$), and Yaw ($\phi_Y$) angles
in terms of the tilting and panning angles of the camera as
follows:

$$ \phi_R = \tan^{-1}\!\left(\frac{\sin\theta\sin\phi}{\sqrt{\cos^2\theta\sin^2\phi + \cos^2\phi}}\right), \qquad (5) $$
$$ \phi_P = \tan^{-1}\!\left(\frac{\sin\theta\cos\phi}{\sqrt{\cos^2\theta\cos^2\phi + \sin^2\phi}}\right), \qquad (6) $$
$$ \phi_Y = \phi. \qquad (7) $$
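For concreteness, the following is a minimal Python sketch of this forward kinematics, assuming the reconstructed forms of (2)-(7) above and the link lengths of Table 2 below; the function and variable names are ours, not from the paper.

```python
import math

# Link lengths of the camera platform, from Table 2 (cm).
L1, L2, L3 = 40.0, 7.5, 4.0

def camera_pose(theta, phi):
    """Camera-center position (2)-(4) and RPY attitude (5)-(7)
    for tilt angle theta and pan angle phi (radians)."""
    x_ccd = L2 * math.sin(theta) + L3 * math.cos(theta) * math.cos(phi)
    y_ccd = L3 * math.sin(phi)
    z_ccd = L1 + L2 * math.cos(theta) - L3 * math.sin(theta) * math.cos(phi)

    roll = math.atan2(math.sin(theta) * math.sin(phi),
                      math.sqrt(math.cos(theta) ** 2 * math.sin(phi) ** 2
                                + math.cos(phi) ** 2))
    pitch = math.atan2(math.sin(theta) * math.cos(phi),
                       math.sqrt(math.cos(theta) ** 2 * math.cos(phi) ** 2
                                 + math.sin(phi) ** 2))
    yaw = phi
    return (x_ccd, y_ccd, z_ccd), (roll, pitch, yaw)
```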

2.2. Relation between a camera and real coordinates
To measure the distance from a camera to an object
using the camera images, at least two image frames
that are captured for the same object at different
locations are necessary. Usually, a stereo-camera
system is used to obtain distance information [24].
However, because there exist uncertainties in feature
point matching, this process requires too much time to
be implemented in real-time.
The new approach presented in this paper requires
only a single frame to measure the distance from the
CCD camera to the object. Since this approach becomes
possible by assuming that a point-object is located on
the floor, there also exist uncertainties in position
estimation.
To both minimize the uncertainty in position
estimation and estimate the velocities of the
moving object, a state estimator based on the
Kalman filter was designed.
To realize a real-time tracking and capturing system,
the distances in 3D space are calculated using a
single image frame, based on the assumption that
objects are located on a flat floor. Note that since a
mobile robot with a parallel-jaw gripper grasps an
object on the floor, the height of the object is not
an important factor. The image coordinates of the
point object, $(j, k)$, are transformed into image-center
coordinates $(j', k')$ that are orientation-invariant
relative to the Roll angle of (5), $\phi_R$, and the size of
the image frame, $P_x$ and $P_y$:

$$ \begin{bmatrix} j' \\ k' \end{bmatrix} = \begin{bmatrix} \cos\phi_R & -\sin\phi_R \\ \sin\phi_R & \cos\phi_R \end{bmatrix} \begin{bmatrix} j - P_x/2 \\ k - P_y/2 \end{bmatrix}, \qquad (8) $$
where $P_x$ and $P_y$ represent the x- and y-directional
sizes of the image frame in pixels, respectively.
To estimate the real location, $(x_0, y_0)$, the bearing
$\theta_0$ and range $r_0$ are estimated using the linear
relationship between the real object range within the
view angle and the image frame. That is, for a given
pair $(\theta_0, r_0)$, there is a one-to-one correspondence
between the real object point and the image point.
Fig. 2. Estimation of position information from a mobile robot.

Fig. 3. Estimation of $r_0$ (A) and $\theta_0$ (B).

When a point image is captured at $(j, k)$ on the
image-center frame, the real object position, $\theta_0$ and
$r_0$, can be estimated as follows, as illustrated
in Fig. 3:
$$ r_0 = z_{ccd}\tan\!\left(\theta + \frac{\varphi_{ry}}{P_y}\,k'\right), \qquad (9a) $$
$$ \theta_0 = \frac{\varphi_{rx}}{P_x}\,j', \qquad (9b) $$
where $\varphi_{rx}$ and $\varphi_{ry}$ represent the x- and y-directional
view angles of the CCD camera, respectively. The
position of the object with respect to the robot
coordinates, $(x, y)$, can be estimated using $\theta_0$ and
$r_0$ [8] as follows:

$$ x = r_{ccd}\cos\phi_Y + r_0\cos(\phi_Y + \theta_0), \qquad (10) $$
$$ y = r_{ccd}\sin\phi_Y + r_0\sin(\phi_Y + \theta_0), \qquad (11) $$

where $\phi_Y$ represents the angle between the robot heading
and the active camera, and $r_{ccd}\,(=\sqrt{x_{ccd}^2 + y_{ccd}^2})$
represents the distance from the robot center to the
center of the camera.
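A sketch of the whole measurement chain (8)-(11), under the same reconstructed forms and with the view-angle constants of Table 2, might look like the following; all names are hypothetical.

```python
import math

PX, PY = 320, 240                # image size in pixels (Table 2)
PHI_RX = math.radians(50)        # x-directional view angle (Table 2)
PHI_RY = math.radians(40)        # y-directional view angle (Table 2)

def object_position(j, k, theta, phi, phi_y, z_ccd, r_ccd):
    """Position (x, y) of a point-object on the floor, in robot
    coordinates, from its pixel coordinates (j, k)."""
    # Roll angle of the image, from (5).
    roll = math.atan2(math.sin(theta) * math.sin(phi),
                      math.sqrt(math.cos(theta) ** 2 * math.sin(phi) ** 2
                                + math.cos(phi) ** 2))
    # (8): rotate into roll-invariant image-center coordinates.
    jc, kc = j - PX / 2, k - PY / 2
    jp = math.cos(roll) * jc - math.sin(roll) * kc
    kp = math.sin(roll) * jc + math.cos(roll) * kc
    # (9a), (9b): range and bearing of the floor point from the camera.
    r0 = z_ccd * math.tan(theta + kp * PHI_RY / PY)
    th0 = jp * PHI_RX / PX
    # (10), (11): position with respect to the robot coordinates.
    x = r_ccd * math.cos(phi_y) + r0 * math.cos(phi_y + th0)
    y = r_ccd * math.sin(phi_y) + r0 * math.sin(phi_y + th0)
    return x, y
```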

2.3. Inverse kinematics to place the center of an image
at the desired position
When an active camera is used, visual information on
the area to be searched can be obtained through
inverse kinematics. The inverse kinematics equations
that describe the attitude of the actuators and place
the center of an image at a desired position can be
derived from (2)-(4) as follows:
$$ \theta_d = \cos^{-1}\!\left(\frac{-\,l_1 l_2 + r_d\sqrt{l_1^2 + r_d^2 - l_2^2}}{l_1^2 + r_d^2}\right), \qquad (12) $$
$$ \phi_d = \tan^{-1}\!\left(\frac{y_d}{x_d}\right), \qquad (13) $$
where $\theta_d$ and $\phi_d$ are the attitude of the camera,
$(x_d, y_d)$ represents the desired position of the image
center, and $r_d$ is $\sqrt{x_d^2 + y_d^2}$. Table 2 shows the
parameters of the camera system that were used in (12)
and (13).

Table 2. Parameters for the active camera system.

$l_1$            40 cm
$l_2$            7.5 cm
$l_3$            4 cm
$P_x$            320 pixel
$P_y$            240 pixel
$\varphi_{rx}$   50°
$\varphi_{ry}$   40°
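A corresponding sketch of (12)-(13), assuming the simplified geometry used in the reconstruction above (the short link $l_3$ neglected) and hypothetical names:

```python
import math

L1, L2 = 40.0, 7.5   # link lengths from Table 2 (cm)

def camera_attitude_for(xd, yd):
    """Tilt and pan angles (12)-(13) that place the image center
    at the floor point (xd, yd) in robot coordinates."""
    rd = math.hypot(xd, yd)
    # (12): closed-form tilt angle (l3 neglected, as assumed above).
    theta_d = math.acos((-L1 * L2 + rd * math.sqrt(L1 ** 2 + rd ** 2 - L2 ** 2))
                        / (L1 ** 2 + rd ** 2))
    # (13): pan angle toward the desired point.
    phi_d = math.atan2(yd, xd)
    return theta_d, phi_d
```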

3. TRAJECTORY ESTIMATION OF A MOVING OBJECT

3.1. Modeling of a moving object
When the velocity and acceleration of the target
object can be estimated, the next target position
$(T_x, T_y)$ can be predicted as follows [2]:

$$ T_{x,t+\Delta t} = T_x + V_x\,\Delta t + \tfrac{1}{2}A_x\,\Delta t^2, \qquad (14) $$
$$ T_{y,t+\Delta t} = T_y + V_y\,\Delta t + \tfrac{1}{2}A_y\,\Delta t^2, \qquad (15) $$

where $\Delta t$ is the sampling time, and $(T_x, T_y)$, $(V_x, V_y)$,
and $(A_x, A_y)$ are the current Cartesian-coordinate
estimates of the target position, velocity, and
acceleration, respectively.
The movement of the object can be decomposed into a
linear velocity element and an angular velocity
element in X-Y coordinates, as follows [3]:

$$ x_{k+\Delta t} = x_k + v_k\Delta t\cos\theta_k + \tfrac{1}{2}\delta_v\Delta t^2\cos\theta_k - \tfrac{1}{2}v_k\omega_k\Delta t^2\sin\theta_k, \qquad (16) $$
$$ y_{k+\Delta t} = y_k + v_k\Delta t\sin\theta_k + \tfrac{1}{2}\delta_v\Delta t^2\sin\theta_k + \tfrac{1}{2}v_k\omega_k\Delta t^2\cos\theta_k, \qquad (17) $$
$$ \theta_{k+\Delta t} = \theta_k + \omega_k\Delta t, \qquad (18) $$
$$ v_{k+\Delta t} = v_k + \delta_v, \qquad (19) $$
$$ \omega_{k+\Delta t} = \omega_k + \delta_\omega, \qquad (20) $$

where $v_k$ and $\omega_k$ are the linear and angular velocities
of the target object, and $\delta_v$ and $\delta_\omega$ are the
variations of the linear and angular velocities,
respectively. From (16)-(20), we can obtain the state
transition model, as follows:

$$ \mathbf{x}_k = \Phi_{k,k-1}\,\mathbf{x}_{k-1} + \mathbf{w}_{k-1}, \qquad \mathbf{Z}_k = H_k\,\mathbf{x}_k + \mathbf{v}_k, \qquad (21) $$

where

$$ \mathbf{x}_k = \begin{bmatrix} x_k & y_k & \theta_k & v_k & \omega_k \end{bmatrix}^T, $$

$$ \Phi_{k,k-1} = \begin{bmatrix} 1 & 0 & 0 & \Delta t\cos\theta_{k-1} & -\tfrac{1}{2}v_{k-1}\Delta t^2\sin\theta_{k-1} \\ 0 & 1 & 0 & \Delta t\sin\theta_{k-1} & \tfrac{1}{2}v_{k-1}\Delta t^2\cos\theta_{k-1} \\ 0 & 0 & 1 & 0 & \Delta t \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}, $$

$$ \mathbf{w}_{k-1} = \begin{bmatrix} 0 & 0 & 0 & \delta_v & \delta_\omega \end{bmatrix}^T, \quad \mathbf{Z}_k = \begin{bmatrix} x_k \\ y_k \end{bmatrix}, \quad H_k = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \end{bmatrix}, \quad \mathbf{v}_k = \begin{bmatrix} \varepsilon_x \\ \varepsilon_y \end{bmatrix}. $$

Notice that $\Phi_{k,k-1}$ is the state transition matrix, $\mathbf{w}_k$ is
the vector representing process noise, $\mathbf{Z}_k$ is the
measurement vector, $H_k$ represents the relationship
between the measurement and the state vector, and $\varepsilon_x$
and $\varepsilon_y$ are the x- and y-directional measurement
errors, respectively.
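As an illustration, the one-step propagation (16)-(20) can be coded directly; this is a sketch with hypothetical names, with $\Delta t$ in seconds.

```python
import math

def propagate(state, dt, dv=0.0, dw=0.0):
    """One step of the motion model (16)-(20) for the state
    [x, y, theta, v, omega]; dv, dw are the velocity variations."""
    x, y, th, v, w = state
    x += (v * dt * math.cos(th) + 0.5 * dv * dt ** 2 * math.cos(th)
          - 0.5 * v * w * dt ** 2 * math.sin(th))
    y += (v * dt * math.sin(th) + 0.5 * dv * dt ** 2 * math.sin(th)
          + 0.5 * v * w * dt ** 2 * math.cos(th))
    th += w * dt
    return [x, y, th, v + dv, w + dw]
```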

3.2. State estimation of a moving object based on a Kalman filter
Input data such as image information include
uncertainties and noise generated during the data
capturing and processing steps. The state transition of
a moving object also includes irregular components.
Therefore, as a state estimator robust against these
irregularities, a Kalman filter was adopted to form a
state observer [11-14]. The Kalman filter minimizes
the estimation error by modifying the state transition
model, based on the error between the estimated
vectors and the measured vectors, with an appropriate
filter gain. The state vector, which consists of the
position on the x-y plane, the heading angle, and the
linear/angular velocities, can be estimated using the
measured vectors representing the position of the
moving object on the image plane.
The covariance matrix of the estimation error must be
calculated to determine the filter gain. The projected
estimate of this covariance matrix is represented as

$$ \bar{P}_k = \Phi_{k,k-1}\,P_{k-1}\,\Phi_{k,k-1}^T + Q_{k-1}, \qquad (22) $$

where $\bar{P}_k$ is the covariance matrix representing the
prediction error, $P_{k-1}$ is the error covariance matrix
for the previous step, and $Q_{k-1}$ is the covariance of
the process noise, representing other measurement and
computational errors. The optimal filter gain $K_k$ that
minimizes the errors associated with the updated
estimate is

$$ K_k = \bar{P}_k H_k^T\,[\,H_k \bar{P}_k H_k^T + R_k\,]^{-1}, \qquad (23) $$

where $H_k$ is the observation matrix and $R_k$ is the
zero-mean covariance matrix of the measurement noise.
The estimate of the state vector $\hat{\mathbf{x}}_k$ from the
measurement $\mathbf{Z}_k$ is expressed as

$$ \hat{\mathbf{x}}_k = \Phi_{k,k-1}\hat{\mathbf{x}}_{k-1} + K_k\,[\,\mathbf{Z}_k - H_k\,\Phi_{k,k-1}\hat{\mathbf{x}}_{k-1}\,]. \qquad (24) $$

Therefore, $\hat{\mathbf{x}}_k$ is updated based on the new values
provided by $\mathbf{Z}_k$. The error covariance matrix that
will be used for the prediction, $P_k$, can be updated as
follows [4,9]:

$$ P_k = \bar{P}_k - K_k H_k \bar{P}_k. \qquad (25) $$

After the current time is updated to $k+1$, a new
estimate can be obtained using (22) to (25).
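A compact numpy sketch of one predict/update cycle, (22)-(25), with the transition matrix $\Phi$ of (21); again the names are hypothetical.

```python
import numpy as np

H = np.array([[1., 0., 0., 0., 0.],    # only x and y are measured
              [0., 1., 0., 0., 0.]])

def phi_matrix(state, dt):
    """State-transition matrix of (21) at the current state."""
    _, _, th, v, _ = state
    F = np.eye(5)
    F[0, 3] = dt * np.cos(th)
    F[0, 4] = -0.5 * v * dt ** 2 * np.sin(th)
    F[1, 3] = dt * np.sin(th)
    F[1, 4] = 0.5 * v * dt ** 2 * np.cos(th)
    F[2, 4] = dt
    return F

def kalman_step(x, P, z, Q, R, dt):
    """One cycle: covariance prediction (22), gain (23),
    state update (24), covariance update (25)."""
    F = phi_matrix(x, dt)
    x_pred = F @ x                       # predicted state
    P_pred = F @ P @ F.T + Q             # (22)
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)   # (23)
    x_new = x_pred + K @ (z - H @ x_pred)                    # (24)
    P_new = P_pred - K @ H @ P_pred                          # (25)
    return x_new, P_new
```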


Fig. 4. State estimations using a Kalman filter: (a) trajectory of the moving object; (b) estimation |error| along the trajectory; (c) state estimations $\theta_k$, $v_k$, and $\omega_k$.
Fig. 4(a) represents the real and estimated
trajectories of a moving object, and Fig. 4(b)
represents the estimation |error| when the trajectory
was estimated using the Kalman filter. To incorporate
the measurement noise, which is empirically assumed
to be zero-mean Gaussian random noise with a
variance of 2, the linear and angular velocities of the
object were set as follows:

$$ v_k = 15(\sin(0.02k) + 1) + \delta_v \;[\mathrm{cm/sec}], \qquad \omega_k = 0.7\cos(0.01k) + \delta_\omega \;[\mathrm{rad/sec}], \qquad (26) $$

where the linear and angular velocity noises ($\delta_v$, $\delta_\omega$)
were assumed to be Gaussian random noise with
variances of 3 and 0.1, respectively.
Fig. 4 shows the Kalman filter estimation of the
states under this noisy environment.

3.3. Trajectory estimation of a moving object
The states of a moving object can be estimated if
the initial state and the inputs are given for the state
transition model. Therefore, the states can be
estimated for the next inputs by estimating the linear
and angular velocities of the moving object using the
Kalman filter as a state estimator.
From the linear velocity/acceleration and rotational
angular velocity/acceleration data, the next states can
be approximated by the following first-order
equations:

$$ v_{k+n} = v_k + a_{l,k}\,nT, \qquad (27) $$
$$ \omega_{k+n} = \omega_k + a_{\omega,k}\,nT. \qquad (28) $$
As shown in Fig. 4(c), the estimation result still
contains some noise, since the system is dynamically
varying, although much of it is suppressed by the
Kalman filter. Therefore, the least-squares estimation
method, which has robust anti-noise characteristics, is
utilized [23].
From the estimated inputs and the state transition
model, the trajectory of the moving object can
be estimated as follows:

$$ x_{k+m} = x_k + \sum_{h=0}^{m} v(h)\cos[\theta(h)]\,T, \qquad (29a) $$
$$ y_{k+m} = y_k + \sum_{h=0}^{m} v(h)\sin[\theta(h)]\,T, \qquad (29b) $$
$$ v(h) = v_k + a_{l,k}\,hT, \qquad (29c) $$
$$ \theta(h) = \theta_k + \omega_k\,hT + \tfrac{1}{2}a_{\omega,k}\,(hT)^2. \qquad (29d) $$
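The m-step prediction (29a)-(29d) then amounts to a short loop; a sketch with hypothetical names:

```python
import math

def predict_trajectory(x, y, th, v, w, a_l, a_w, T, m):
    """Object position m sampling periods ahead, from (29a)-(29d);
    a_l, a_w are the estimated linear/angular accelerations."""
    xp, yp = x, y
    for h in range(m + 1):
        v_h = v + a_l * h * T                              # (29c)
        th_h = th + w * h * T + 0.5 * a_w * (h * T) ** 2   # (29d)
        xp += v_h * math.cos(th_h) * T                     # (29a)
        yp += v_h * math.sin(th_h) * T                     # (29b)
    return xp, yp
```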

4. MOTION PLANNING FOR CAPTURING

To capture a moving object, the mobile robot needs
to be controlled while considering the relation
between its own position and the position of the
moving object. Fig. 5 shows the motion planning
process of a mobile robot for capturing a moving
object.
The mobile robot estimates the position of the
moving object over the next m sampling periods and
selects the shortest path from its current position to
the moving object, assuming that its own location is
known a priori. A localization scheme for the mobile
robot that uses the information on the moving object,
and thereby improves the accuracy of capturing, was
developed in [14]. The target point of the mobile robot
at the k-th sampling time is denoted as $\hat{x}_R(k+m)$,
which is one of the estimated positions of the mobile
robot after m sampling periods.

$$ \hat{x}_O^{opt}(k+m) = \min_{m=1\sim M}\,\left\|\hat{x}_R(k+m) - \hat{x}_O(k+m)\right\|, \qquad (30) $$

where $\hat{x}_R(k+m)$ is the position of the mobile robot
after m sampling periods, given that the mobile robot
moves along the shortest path towards the target point
$\hat{x}_O(k+m)$.
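A sketch of this target-point selection: assuming the robot can cover at most $v_{max}\,mT$ in m sampling periods, (30) reduces to picking the prediction step whose distance best matches the robot's reachable range. The names and the reachability approximation here are ours, not the paper's.

```python
import math

def select_target(robot_xy, object_preds, v_max, T):
    """Pick the prediction step m whose object position the robot
    can just reach, i.e., the minimizer of (30)."""
    best_m, best_gap = None, float('inf')
    for m, (ox, oy) in enumerate(object_preds, start=1):
        dist = math.hypot(ox - robot_xy[0], oy - robot_xy[1])
        gap = abs(dist - v_max * m * T)    # ||x_R(k+m) - x_O(k+m)||
        if gap < best_gap:
            best_gap, best_m = gap, m
    return best_m, object_preds[best_m - 1]
```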
The position of the moving object in the Cartesian
coordinate system is acquired using the relation
between image frames. The linear and angular
velocities of the moving object are estimated by the
state estimator. The Kalman filter is used as the state
estimator because of its robustness against the noise
and uncertainties included in the input data.
After estimating the trajectory of the target object,
the optimal trajectory and motion planning of the
mobile robot are determined in order to capture the
target object in the shortest time. Fig. 6 shows the
overall structure of mobile robot control to capture a
target object.

5. SIMULATIONS AND EXPERIMENTS

To demonstrate and illustrate the proposed method,
we present an example. It is assumed that the velocity
limit of a mobile robot is 30 cm/sec and that the
camera is installed on top of the mobile robot. The
initial locations of the mobile robot and the moving
object are (-50, -50 cm) and (-250, 300 cm) in with
respect to the reference frame, respectively. The
velocity and angular velocity of the moving object are
as follows:
30(cos(0.01 ) 1) [ / sec]
k v
v k cm = + + , (31a)
0.7sin(0.03 ) [ / sec]
1.5
k
k rad

= + + . (31b)
The forward direction and rotational angular
velocities of the moving object are Gaussian random
variables with variances of 2 and 0.1, respectively,
which were obtained experimentally. Fig. 7(a) illustrates
the trajectory of the moving object and shows the
mobile robot trying to capture the object by estimating
its trajectory. Fig. 7(b) shows the distance between the
mobile robot and the moving object, the error between
the estimated velocity and the real velocity, and the
error between the estimated angular velocity and the
real angular velocity. Although the errors of the
estimated velocities are high at first, they converge to
zero immediately.

Fig. 5. Estimation of the trajectory for capturing.

Fig. 6. Mobile robot control for tracking: (1) estimate the x, y position of the object from the image data; (2) optimal state observer (Kalman filter); (3) estimate v, ω and predict the object trajectory; (4) decide the optimal target point and control the mobile robot.

Fig. 7. Results of simulation: (a) trajectory; (b) estimated state.

Fig. 8. Components of ZIRO: a 2-wheeled mobile robot with a 2-DOF active camera (pan/tilt) and a gripper (grip/lift).
The proposed algorithm was applied in experiments
to a mobile robot named ZIRO, developed in the
Intelligent Robot Laboratory, PNU [5], as shown in
Fig. 8.
ZIRO recognizes an object in 3D space, approaches
the object to capture it, and carries it to a goal
position. For this purpose, ZIRO has a 2-d.o.f. active
camera to search for and track an object and a
gripper to capture the object. The two-wheel
differential driving mechanism supports flexible
motion on the floor, following commands based on
the images captured by the 2-d.o.f. pan/tilt camera. To
control the wheels in real time, a distributed control
system using a CAN-based network was implemented.
Three CAN-based controllers are connected to the
network, among which the main controller gathers the
gyro sensor data and sends them to the wheel
controllers. The CAN network is connected to a
higher-level ISA bus that connects the 2 d.o.f pan/tilt
camera controllers to the main controller (a Pentium
PC board). Every 100 msec, the position of an object
in 3D space was calculated using the posture of the
camera and the object position on the image frame to
plan the trajectory of the mobile robot. The planned
trajectory commands were sent to the wheel
controllers that use a PID algorithm to control the
angle every 10 msec. The functional structure of the
mobile robot is illustrated in Fig. 9.
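This two-rate structure could be sketched as follows; all interfaces here are hypothetical stand-ins for the CAN/ISA hardware described above, not the paper's actual software.

```python
import time

PLAN_PERIOD = 0.1    # main algorithm: every 100 msec
PID_PERIOD = 0.01    # wheel PID control: every 10 msec

def control_loop(camera, planner, wheels):
    """Outer vision/planning loop at 100 msec feeding reference
    velocities to the 10-msec PID wheel controllers over the CAN bus."""
    while True:
        frame = camera.grab()                  # frame-grabber image
        v_ref, w_ref = planner.update(frame)   # trajectory-based command
        for _ in range(int(PLAN_PERIOD / PID_PERIOD)):
            wheels.track(v_ref, w_ref)         # PID step in controllers
            time.sleep(PID_PERIOD)
```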
Two experiments were performed to show the
tracking and capturing of a moving object. Fig. 10
shows the experimental results of tracking a moving
object, an 8x6 cm red-colored two-wheeled mouse
moving with random velocities in the range of
25-35 cm/sec. First, ZIRO detected the moving object
using the active camera. When the moving object was
within view, ZIRO tracked it according to the
proposed method.
Fig. 11 illustrates the mobile robot capturing a ball,
moving to the target point, and putting the ball on the
target point. The minimum path was estimated using the
trajectories of the mobile robot and the object while
the robot was tracking the object. The object was
grasped firmly with the aid of the touch sensors in
the gripper.
Fig. 9. Functional structure of the mobile robot: the main controller (Pentium MMX-233) runs the main algorithm every 100 msec, receives image data through a Picolo Pro-2 frame grabber (PCI bus), commands the 89C2051 pan/tilt controllers of the 2-DOF active camera through an interface device, and sends reference velocities over a CAN bus (AN82527 controller card, ISA bus) to the left/right motor controllers (87C196CA), which run 10-msec PID control of the wheel motors (20 V, 0.7 A, 10 kg-cm; 600-ppr encoders); a gyro-sensor controller (87C196CA) with an ENV-05D gyro-sensor supplies the angular velocity.


Fig. 10. The results for tracking a moving object.


Fig. 11. Experimental results for capturing a ball.


6. CONCLUSION

This paper proposes a method of tracking and
capturing a moving object using an active camera
mounted on a mobile robot. The effectiveness of the
proposed method was demonstrated by simulations
and experiments, and was verified through the
following procedure.
1. Position estimation of a target object based on
the kinematic relationship of consecutive image
frames.
2. Movement estimation of the target object using a
Kalman filter for tracking.
3. Motion planning of a mobile robot to capture the
target object within the shortest time, based on its
estimated trajectory.
This approach enables real-time tracking
and capturing operations, since it extracts the distance
information from a single image frame and estimates
the next motion using the Kalman filter, which provides
a closed-form solution.

REFERENCES
[1] K. Daniilidis and C. Krauss, Real-time tracking
of moving objects with an active camera, Real-
Time Imaging, Academic Press Limited, 1998.
[2] R. F. Berg, Estimation and prediction for
maneuvering target trajectories, IEEE Trans. on
Automatic Control, vol. AC-28, no. 3, pp. 294-
304, March 1983.
[3] S. M. Lavalle and R. Sharma, On motion
planning in changing partially predictable
environments, International Journal of Robotics
Research, vol. 16, no. 6, pp. 705-805, December
1997.
[4] H. W. Sorenson, Kalman filtering techniques,
Advances in Control Systems Theory and
Applications, vol. 3, pp. 219-292, 1966.
[5] J. W. Park and J. M. Lee, Robust map building
and navigation for a mobile robot using active
camera, Proc. of ICMT, pp. 99-104, October
1999.
[6] R. A. Brooks, A robust layered control system
for a mobile robot, IEEE Journal of Robotics
and Automation, vol. RA-2, no. 1, pp. 14-23,
April 1986.
[7] J. J. Leonard and H. F. Durrant-Whyte, Mobile
robot localization by tracking geometric
beacons, IEEE Trans. on Robotics and
Automation, vol. 7, no. 3, pp. 376-382, June 1991.
[8] D. J. Kriegman, E. Triendl, and T. O. Binford,
Stereo vision and navigation in buildings for
mobile robots, IEEE Trans. on Robotics and
Automation, vol. 5, no. 6, pp. 792-803, December
1989.
[9] R. E. Kalman, A new approach to linear
filtering and prediction problems, Trans. ASME,
J. Basic Eng., vol. 82D, pp. 35-45, March 1960.
[10] M. Y. Han, B. K. Kim, K. H. Kim, and J. M. Lee,
Active calibration of the robot/camera pose
using the circular objects, Trans. on Control,
Automation and Systems Engineering, vol. 5, no.
3, pp. 314-323, April 1999.
[11] D. Nair and J. K. Aggarwal, Moving obstacle
detection from a navigating robot, IEEE Trans.
on Robotics and Automation, vol. 14, no. 3, pp.
404-416, June 1998.
[12] A. Lallet and S. Lacroix, Toward real-time 2D
localization in outdoor environments, Proc. of
the IEEE International Conference on Robotics
& Automation, pp. 2827-2832, May 1998.
[13] A. Adam, E. Rivlin, and I. Shimshoni,
Computing the sensory uncertainty field of a
vision-based localization sensor, Proc. of the
IEEE International Conference on Robotics &
Automation, pp. 2993-2999, April 2000.
[14] B. H. Kim, D. K. Roh, J. M. Lee, M. H. Lee, K.
Son, M. C. Lee, J. W. Choi, and S. H. Han,
Localization of a mobile robot using images of
a moving target, Proc. of the IEEE
International Conference on Robotics &
Automation, May 2001.
[15] V. Caglioti, An entropic criterion for minimum
uncertainty sensing in recognition and
localization part II-A case study on directional
distance measurements, IEEE Trans. on
Systems, Man, and Cybernetics, vol. 31, no. 2,
pp. 197-214, April 2001.
[16] C. F. Olson, Probabilistic self-localization for
mobile robots, IEEE Trans. on Robotics and
Automation, vol. 16, no. 1, pp. 55-66, February
2000.
[17] H. Zhou and S. Sakane, Sensor planning for
mobile robot localization based on probabilistic
inference using bayesian network, Proc. of the
4th IEEE International Symposium on Assembly
and Task Planning, pp. 7-12, May 2001.
[18] M. Selsis, C. Vieren, and F. Cabestaing,
Automatic tracking and 3D localization of
moving objects by active contour models, Proc.
of the IEEE International Symposium on
Intelligent Vehicles, pp. 96-100, 1995.
[19] H. Choset and K. Nagatani, Topological
simultaneous localization and mapping (SLAM):
Toward exact localization without explicit
localization, IEEE Trans. on Robotics and
Automation, vol. 17, no. 2, pp. 125-137, April
2001.
[20] S. Segvic and S. Ribaric, Determining the
absolute orientation in a corridor using
projective geometry and active vision, IEEE
Trans. on Industrial Electronics, vol. 48, no. 3,
pp. 696-710, June 2001.
[21] N. Strobel, S. Spors, and R. Rabenstein, Joint
audio-video object localization and tracking,
IEEE Signal Processing Magazine, vol. 18, no. 1,
pp. 22-31, January 2001.
[22] R. G. Hutchins and J. P. C. Roque, Filtering and
control of an autonomous underwater vehicle for
both target intercept and docking, Proc. of the
4th IEEE International Conference on Control
Applications, pp. 1162-1163, 1995.
[23] J. Jang, C. Sun, and E. Mizutani, Neuro-Fuzzy
and Soft Computing, Prentice-Hall, 1997.
[24] E. Grosso and M. Tistarelli, Active/dynamic
stereo vision, IEEE Trans. on Pattern Analysis
and Machine Intelligence, vol. 17, no. 9, pp.
868-879, December 1995.


Sang-joo Kim received the Ph.D. degree in Electrical
Engineering from Pusan National University in 2005.
His research interests include robot vision,
navigation, and image processing.





Jin-woo Park received the B.S. degree in Electrical
Engineering from Pusan National University in 2003.
His research interests include nonlinear control,
adaptive control, and system identification.




Jang-Myung Lee has been a Professor in the Department
of Electronics Engineering at Pusan National
University. He received the B.S. and M.S. degrees in
Electrical Engineering from Seoul National University
in 1980 and 1982, respectively, and the Ph.D. degree
in Computer Engineering from the University of
Southern California in 1990. His current research
interests include intelligent robotic systems,
integrated manufacturing systems, cooperative control,
and sensor fusion. Dr. Lee is an IEEE Senior Member
and a member of ICASE and IEEK.
