Optical Flow Based System Design For Mobile Robots

Mehmet Serdar Guzel
School of Mechanical and System Engineering
Newcastle University, Newcastle, UK
m.s.guzel@newcastle.ac.uk

Robert Bicker
School of Mechanical and System Engineering
Newcastle University, Newcastle, UK
robert.bicker@newcastle.ac.uk
Abstract— This paper presents a new optical flow based navigation strategy, based on a multi-scale variational approach, for mobile robot navigation using a single Internet based camera as the primary sensor. Real experiments guiding a Pioneer 3-DX mobile robot in a cluttered environment are presented, and analysis of the results validates the proposed behaviour-based navigation strategy. The main contribution of this approach is that it provides an alternative high-performance navigation algorithm for systems in which image acquisition consumes considerable computation time.

Keywords—optical flow, variational approach, mobile robot navigation, behaviour, obstacle avoidance, time to contact

I. INTRODUCTION

Vision is one of the key sensing methodologies, and has been used by researchers for many years to assist robot navigation. It can gather detailed information about the environment which may not be available from combinations of other sensors. The computational complexity of an image processing algorithm is one of the most critical aspects for real-time applications. If a real-time system has fast image acquisition and processing ability, some loss of reliability or accuracy in the algorithms can be tolerated. The only way to obtain an appropriate solution for a low-cost navigation system is to determine an efficient and optimal method for the problem at hand.

Optical flow is an appropriate methodology for vision-based mapless navigation strategies. Several optical flow based navigation strategies using more than one camera have been introduced: biologically inspired behaviours based on stereo vision have been adapted for obstacle avoidance [1], and a trinocular vision system for navigation has been proposed [2]. Both of these methods, in some way, emulate the corridor-following behaviour. Their main disadvantage is that they require more than one camera, and because they must process images from more than one camera, they increase both the computational cost of the system and the implementation cost of the software. Few works have been proposed based on a single camera. An ecological-psychology approach, including control laws using optical flow and action modes which can avoid obstacles and play tag solely using optical flow, was proposed [3]. A portable platform was introduced that supports real-time computer vision applications for mobile robots via optical flow [4]. Mapless navigation based on a simple balance strategy, balancing the amount of left- and right-side flow to avoid obstacles, was used [5]; moreover, a new optical flow navigation algorithm involving an enhanced image segmentation method, claimed to be fast enough to give the mobile robot the capability of reacting to any environmental change in real time, was proposed [6]. Although several different methodologies have recently been proposed, we are still far from a satisfactory solution.

Previous researchers have employed either standard window-matching algorithms or gradient-based approaches. However, the main weakness of these approaches is their failure to address large displacements. To overcome this problem, a multi-scale approach has been adapted to these methods [7, 8]. However, this increases computation time and may not be suitable for real-time navigation tasks. Hence, a novel algorithm involving a multi-scale variational optical flow method with reliable and practical image segmentation is presented to contribute to a mapless navigation strategy, guiding the local optimisation towards large-displacement solutions. A sensing system based on optical flow and time-to-collision calculation is tested on a Pioneer 3-DX mobile robot equipped with a TV7230 IP based camera with a capacity of 25 frames per second, as shown in Fig. 1; all calculations are performed onboard the robot.

II. OPTICAL FLOW

Optical flow is considered as a 2D vector field describing the apparent motion of each pixel in successive images of a 3D scene taken at different times [9]. The main idea is based on the assumption that pixel intensity values for corresponding 3D points in the images are the same. Thus, if two consecutive images have been obtained at time instants t0 and t1, the basic concept is to detect motion using image differencing [10]. It is assumed that for a given scene point, the corresponding image point intensity I remains constant over time, which is referred to as conservation of image intensity. If any scene point projects onto image point
978-1-4244-6506-4/10/$26.00 ©2010 IEEE
$(x, y)$ at time $t$ and onto image point $(x + \delta x,\, y + \delta y)$ at time $(t + \delta t)$, Equation (1) can be deduced from the assumption that the brightness of a point in an image is constant. The main objective is to find the vector $(\delta x, \delta y)$ that minimises the error given by (2), where $S(\cdot)$ represents a function measuring the similarity between pixels. A first-order expansion of the brightness constancy assumption yields the optical flow constraint

\frac{\partial I}{\partial x}\,\delta x + \frac{\partial I}{\partial y}\,\delta y + \frac{\partial I}{\partial t}\,\delta t = 0    (3)

The variational formulation combines a data term and a smoothness term into a single energy, shown as (7), involving a control parameter $\alpha$ where $\alpha > 0$:

E_{data}(u, v) = \int \big( I(x + \delta x,\, y + \delta y,\, t + \delta t) - I(x, y, t) \big)^2 \, dx\, dy    (5)

E_{smooth}(u, v) = \int |\nabla \delta x|^2 + |\nabla \delta y|^2 \, dx\, dy    (6)

For a displacement field $h = (u, v)$ between two images $I_1$ and $I_2$, the matching energy to be minimised is

E(h) = \int \big( I_1(x, y) - I_2(x + u(x, y),\, y + v(x, y)) \big)^2 \, dx
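Dividing (3) by $\delta t$ gives the familiar form $I_x u + I_y v + I_t = 0$. The constraint can be checked numerically; the sketch below (an illustration, not code from the paper) computes its residual with simple finite differences. For a linear intensity ramp shifted one pixel to the right, the flow $(u, v) = (1, 0)$ satisfies the constraint almost exactly.

```python
import numpy as np

def ofc_residual(I0, I1, u, v):
    # Residual of I_x*u + I_y*v + I_t (Eq. (3) divided by delta t),
    # using simple finite differences; illustrative only.
    Ix = np.gradient(I0, axis=1)   # horizontal image gradient
    Iy = np.gradient(I0, axis=0)   # vertical image gradient
    It = I1 - I0                   # temporal derivative, unit time step
    return Ix * u + Iy * v + It

# Linear ramp shifted one pixel to the right between frames:
x = np.arange(16, dtype=float)
I0 = np.tile(x, (16, 1))
I1 = np.tile(x - 1.0, (16, 1))
r = ofc_residual(I0, I1, u=np.ones_like(I0), v=np.zeros_like(I0))
print(float(np.abs(r).max()))  # ~0: (u, v) = (1, 0) satisfies the constraint
```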
Minimising the energy leads, at each scale $\varphi$, to iterative fixed-point updates for the flow components $u$ and $v$:

u_{i,j}^{k+1} = u_{i,j}^{k} + \frac{a\,\Delta t}{2(\Delta x)^2}\Big[\big(u_{i+1,j}^{k+1} + u_{i-1,j}^{k+1} + u_{i,j+1}^{k+1} + u_{i,j-1}^{k+1} - 4u_{i,j}^{k+1}\big) + \big(u_{i+1,j}^{k} + u_{i-1,j}^{k} + u_{i,j+1}^{k} + u_{i,j-1}^{k} - 4u_{i,j}^{k}\big)\Big] + \big(I_{1,\varphi}(z_{i,j}) - I_{2,\varphi}(z_{i,j} + h_{\varphi,i,j})\big)\, I_{2,x,\varphi}(z_{i,j} + h_{\varphi,i,j})    (12)

v_{i,j}^{k+1} = v_{i,j}^{k} + \frac{a\,\Delta t}{2(\Delta x)^2}\Big[\big(v_{i+1,j}^{k+1} + v_{i-1,j}^{k+1} + v_{i,j+1}^{k+1} + v_{i,j-1}^{k+1} - 4v_{i,j}^{k+1}\big) + \big(v_{i+1,j}^{k} + v_{i-1,j}^{k} + v_{i,j+1}^{k} + v_{i,j-1}^{k} - 4v_{i,j}^{k}\big)\Big] + \big(I_{1,\varphi}(z_{i,j}) - I_{2,\varphi}(z_{i,j} + h_{\varphi,i,j})\big)\, I_{2,y,\varphi}(z_{i,j} + h_{\varphi,i,j})    (13)
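The updates (12)-(13) are Horn-Schunck-style iterations applied at each pyramid level. As a simplified, single-scale stand-in (the classic Horn-Schunck scheme [11], not the paper's exact multi-scale formulation), the iteration can be sketched as:

```python
import numpy as np

def horn_schunck(I0, I1, alpha=1.0, n_iter=100):
    # Classic single-scale Horn-Schunck iteration [11]; a simplified
    # stand-in for the multi-scale updates (12)-(13) in the paper.
    Ix = np.gradient(I0, axis=1)
    Iy = np.gradient(I0, axis=0)
    It = I1 - I0
    u = np.zeros_like(I0)
    v = np.zeros_like(I0)

    def avg(f):
        # 4-neighbour average (periodic borders) approximating the
        # smoothness term's Laplacian coupling.
        return 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                       np.roll(f, 1, 1) + np.roll(f, -1, 1))

    for _ in range(n_iter):
        ub, vb = avg(u), avg(v)
        num = Ix * ub + Iy * vb + It         # data-term residual
        den = alpha**2 + Ix**2 + Iy**2
        u = ub - Ix * num / den
        v = vb - Iy * num / den
    return u, v

# Shifted-ramp example: the recovered flow should approach u ~ 1, v ~ 0.
x = np.arange(16, dtype=float)
I0 = np.tile(x, (16, 1))
I1 = np.tile(x - 1.0, (16, 1))
u, v = horn_schunck(I0, I1, alpha=1.0, n_iter=100)
print(round(float(u.mean()), 3))  # ~ 1.0
```

In the paper's scheme this inner iteration runs coarse-to-fine over an image pyramid, with each level's flow warping the second image for the next finer level; that is what extends the method to large displacements.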
The steering angle $\theta_w$ is obtained by balancing the total flow magnitudes $w_L$ and $w_R$ on the left and right halves of the image:

\theta_w = \left( \frac{\sum |w_L| - \sum |w_R|}{\sum |w_L| + \sum |w_R|} \right) \times 15    (21)
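The balance strategy of (21) can be illustrated as follows. The function mirrors the equation, including the 15-degree scaling; the sign convention and the example flow field are assumptions made for illustration only.

```python
import numpy as np

def steering_angle(flow_mag, max_turn_deg=15.0):
    # Balance-strategy steering angle, Eq. (21): compare the total flow
    # magnitude in the left and right image halves. More flow on one
    # side suggests a nearer obstacle there, so the robot turns away.
    h, w = flow_mag.shape
    wl = np.sum(np.abs(flow_mag[:, :w // 2]))   # left-side flow sum
    wr = np.sum(np.abs(flow_mag[:, w // 2:]))   # right-side flow sum
    return (wl - wr) / (wl + wr) * max_turn_deg

# Hypothetical field: strong flow on the left half, weak on the right.
mag = np.zeros((4, 8))
mag[:, :4] = 3.0
mag[:, 4:] = 1.0
print(steering_angle(mag))  # (48 - 16) / (48 + 16) * 15 = 7.5
```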
IV. CONCLUSION
The developed system was implemented on a Pioneer 3-DX mobile robot, whose onboard computer is based on an Intel Pentium (Mobile) 1.8 GHz processor with 256 MB of RAM. The robot is equipped with a TV7230 Internet based pan-tilt camera operating at 25 frames per second. The software architecture of the proposed system is built on the CImg library [14] and the Player architecture [15], both open-source software projects. The system was tested in the Robotics laboratory at Newcastle University, containing chairs, furniture, and computing equipment. The Time to Contact (TTC) graph of a chair-avoiding manoeuvre, shown in Fig. 8, relates to the environment shown in Fig. 7; in accordance with the theory for estimating a translational sequence, the estimated TTC value on the obstacle's side decreases while approaching it. The analysis results for the computations involved in the sensing system are shown in Table I for 160×120 resolution. According to these results, the time consumed during image acquisition is the main bottleneck; nevertheless, the test results demonstrate that the robot can navigate for 20 minutes in the environments shown in Fig. 6 without colliding with any obstacle, employing the multi-scale variational algorithm and the proposed navigation strategy. The proposed multi-scale variational approach extends the applicability of optical flow to fields with larger displacements, and has been successfully adapted to a vision based navigation strategy. In Fig. 9, a real navigation scenario in the laboratory is demonstrated; each behaviour is illustrated with a different colour: orange for forward, yellow for emergency, red for left turn and blue for right turn. In order to evaluate the performance of the proposed algorithm, it is compared with Lucas-Kanade's method [8] and Horn-Schunck's method [11] in Table II, which compares the algorithms on corridor-centering behaviour, total flow computation time and safe navigation time.

Figure 5. The state-machine corresponding to the proposed navigation system.

Figure 8. TTC values for avoiding maneuver.
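The TTC estimates discussed above can be illustrated with the classical relation for pure forward translation, TTC ≈ x / (dx/dt), where x is the image distance of a point from the focus of expansion and dx/dt its outward flow. This is a generic sketch of the idea, not the paper's exact sensing computation; the numbers are hypothetical.

```python
import numpy as np

def time_to_contact(x, u, eps=1e-9):
    # TTC for pure forward translation: ratio of image distance from the
    # focus of expansion to the outward flow at that point, in frames.
    # Illustrative only; guards against division by (near-)zero flow.
    x = np.asarray(x, dtype=float)
    u = np.asarray(u, dtype=float)
    return x / np.where(np.abs(u) < eps, eps, u)

# A point 40 px from the FOE expanding outward at 8 px/frame:
ttc_frames = float(time_to_contact(40.0, 8.0))
print(ttc_frames, ttc_frames / 25.0)  # 5.0 frames -> 0.2 s at 25 fps
```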
TABLE I.
Figure 9. A real navigation scenario at the laboratory.

TABLE II. COMPARISON OF FLOW TECHNIQUES

Method            Flow Time (s)   Centering Error (cm)   Average Navigation Time (min)
L&K               0.101           9, for 3.00 m          20
H&S               0.055           16, for 3.00 m         10
Variational Alg.  0.120           6, for 3.00 m          20

REFERENCES

[5] K. Souhila and A. Karim, "Optical flow based robot obstacle avoidance," International Journal of Advanced Robotic Systems, vol. 4, no. 1, pp. 13-16, 2007.
[11] B. K. P. Horn and B. G. Schunck, "Determining optical flow," Artificial Intelligence, vol. 17, pp. 185-203, 1981.
[12] F. Lauze and M. Nielsen, "A variational algorithm for motion compensated inpainting," in S. Barman, A. Hoppe and T. Ellis, editors, British Machine Vision Conference, vol. 2, pp. 777-787, 2004.
[13] L. Alvarez, J. Sanchez and J. Weickert, in Scale-Space Theories in Computer Vision, Lecture Notes in Computer Science, vol. 1682, pp. 235-246, 1999.
[14] CImg Library (2009). Available from: http://cimg.sourceforge.net/ [Accessed 05/05/2009].
[15] Player Project (2009). Available from: http://playerstage.sourceforge.net [Accessed 05/05/2009].