
Real Time Car Parking System Using Image Processing

Ms. Sayanti Banerjee
Ajay Kumar Garg Engineering College, Department of Electrical and Electronics Engineering
Ghaziabad, UP, India
getsayanti@yahoo.com

Ms. Pallavi Choudekar
Ajay Kumar Garg Engineering College, Department of Electrical and Electronics Engineering
Ghaziabad, UP, India
pallaveech@yahoo.com

Prof. M. K. Muju
Ajay Kumar Garg Engineering College, Department of Mechanical Engineering
Ghaziabad, UP, India
muju@iitk.ac.in

Abstract— Car parking lots are an important object class in many traffic and civilian applications. With the problems of increasing urban traffic congestion and the ever-increasing shortage of space, these car parking lots need to be well equipped with automatic parking information and guidance systems. Goals of intelligent parking lot management include counting the number of parked cars and identifying the available locations. This work proposes a new system for providing parking information and guidance using image processing. The proposed system includes counting the number of parked vehicles and identifying the stalls available. The system detects cars through images instead of using electronic sensors embedded in the floor. A camera is installed at the entry point of the parking lot and captures image sequences. Setting the image of a car as the reference image, the captured images are sequentially matched using image matching. For this purpose edge detection is carried out using the Prewitt edge detection operator, and according to the percentage of matching, guidance and information are provided to the incoming driver.

Keywords- Car parking, Image Processing, Edge detection, image matching, Prewitt operator.

I. INTRODUCTION

At the parking lots located in resting facilities of the expressway, information concerning the state of congestion is offered to drivers as advance notice by the traffic control system. To obtain this state of congestion, sensors for detecting cars have already been set up at the entrance and the exit or under the road surface of the parking section. On the other hand, as the hardware for image processing has been developing, image processing has recently been applied to many kinds of purposes. The following is a brief survey of the papers that have emphasized the detection of parked cars in the parking lot. [2] used the change of the variance of brightness on the road surface in the stationary image (difference between consecutive frames). [4] showed a method to detect a moving car by subtraction between consecutive images. In [6], the authors proposed a method to count cars by tracking the moving objects over the whole area of the outdoor parking lot, as compared with every parking division in [4]. [5] showed that it is effective to use time-differential images to extract moving objects from stationary objects. However, a moving object can often be taken as many regions (called moving regions) in the differential image.

In the present work the designed system aims to achieve the following.
• Images of the incoming cars are captured in real time.
• Depending upon the status of car occupancy inside, they are allowed to enter the parking lot.
• Once the parking on the left side is full, cars are directed towards the right.
• Once both sides of the parking lot are full, no car is allowed to enter the parking lot.

Components of the current project
• Hardware module
• Software module
• Interfacing

Hardware Module
Image sensors: In this project a USB-based web camera has been used.
Computer: A general-purpose PC has been used as the central unit for the various image processing tasks.
Platform: consisting of a few toy vehicles and LEDs (a prototype of the real-world traffic light control system).

Software Module: MATLAB version 7.8 has been used as the image processing software, comprising specialized modules that perform specific tasks.

Interfacing: The interfacing between the hardware prototype and the software module is done using the parallel port of the personal computer. A parallel port driver has been installed in the PC for this purpose. A minimal sketch of this acquisition and interfacing setup is given after the methodology steps below.

II. METHODOLOGY

Following are the steps involved:
• Image acquisition
• RGB to gray conversion
• Image enhancement
• Image matching using edge detection
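The paper does not include source code for the acquisition and interfacing components; the following is a minimal MATLAB sketch of how a webcam frame could be grabbed and how LED guidance signals could be driven over the parallel port. It assumes the Image Acquisition Toolbox ('winvideo' adaptor) and the older Data Acquisition Toolbox digitalio interface available around MATLAB 7.8; the adaptor name, device ID and pin assignments are illustrative assumptions, not details taken from the paper.

% Minimal acquisition + parallel-port interfacing sketch (assumed setup).
% Adaptor name, device ID and LED pin mapping are illustrative only.

% --- Image acquisition: USB web camera via Image Acquisition Toolbox ---
vid = videoinput('winvideo', 1);      % assumed adaptor and device ID
frame = getsnapshot(vid);             % one RGB frame of the incoming car

% --- Interfacing: drive guidance LEDs through the PC parallel port ---
dio = digitalio('parallel', 'LPT1');  % requires the installed parallel-port driver
addline(dio, 0:2, 'out');             % three assumed output lines

putvalue(dio, [1 0 0]);               % e.g. "go left" indication
% putvalue(dio, [0 1 0]);             % "go right"
% putvalue(dio, [0 0 1]);             % "parking full"

delete(vid); delete(dio);             % release the hardware when done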

Procedure

Phase 1:
• Initially, image acquisition is done with the help of the web camera.
• The first image of a car is captured.
• This image is saved as the reference image at a particular location specified in the program.
• RGB to gray conversion is done on the reference image.
• Gamma correction is then done on the reference gray image to achieve image enhancement.
• Edge detection of this gamma-corrected reference image is done thereafter with the help of the Prewitt edge detection operator.

Phase 2:
• Images of the cars entering the parking lot are captured at an interval of 2 seconds.
• RGB to gray conversion is done on the sequence of captured images.
• Gamma correction is then done on each of the captured gray images to achieve image enhancement.
• Edge detection of these real-time captured images of the cars is then done with the help of the Prewitt edge detection operator.

Phase 3:
• After the edge detection procedure, the reference and real-time images are matched, and if they match each other by more than 90% the incoming car is allowed to enter the parking lot.
• In this project the designed hardware has been considered to have a maximum capacity of 20 cars, which is divided into two parts: right side and left side. First the cars are guided towards the left. Once the left side is full, the cars are directed towards the right side of the parking lot. A sketch of this processing loop is given below.
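As a way of tying Phases 1-3 together, here is a minimal MATLAB sketch of the capture-and-match loop, assuming the toolbox functions named earlier; the gamma value, the reference image file name, the helper variable names and the overlap-based match measure are illustrative assumptions rather than the authors' actual code.

% Sketch of the Phase 1-3 loop described above (illustrative, not the authors' code).
gammaVal  = 1;                  % gamma used for enhancement (value assumed)
capacity  = 10;                 % cars per side, 20 in total
leftCount = 0; rightCount = 0;

% Phase 1: prepare the edge-detected reference image
ref     = imread('reference_car.jpg');          % assumed file name
refGray = rgb2gray(ref);
refEnh  = imadjust(refGray, [], [], gammaVal);  % power-law (gamma) correction
refEdge = edge(refEnh, 'prewitt');

vid = videoinput('winvideo', 1);                % assumed camera adaptor

% Phases 2 and 3: capture every 2 seconds, match, and guide the car
while leftCount + rightCount < 2*capacity
    img     = getsnapshot(vid);
    imgEdge = edge(imadjust(rgb2gray(img), [], [], gammaVal), 'prewitt');

    % percentage of matching between reference and captured edge maps
    matchPct = 100 * nnz(refEdge & imgEdge) / max(nnz(refEdge), 1);

    if matchPct > 90                             % threshold stated in Phase 3
        if leftCount < capacity
            leftCount = leftCount + 1;           % guide the car to the left side
        else
            rightCount = rightCount + 1;         % left side full: guide to the right
        end
    end
    pause(2);                                    % 2-second capture interval
end
% Both sides full: no further cars are allowed to enter.
delete(vid);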

III. IMAGE ENHANCEMENT

The acquired RGB image is first converted into gray. We then want to bring the image into contrast with the background so that a proper threshold level may be selected when binary conversion is carried out. This calls for image enhancement techniques. The objective of enhancement is to process an image so that the result is more suitable than the original image for the specific application. There are many techniques that may be used to manipulate the features in an image, but they may not be usable in every case. Listed below are a few fundamental functions used frequently for image enhancement.
• Linear (negative and identity transformations)
• Logarithmic (log and inverse log transformations)
• Power law transformations (gamma correction)
• Piecewise linear transformation functions

The third method, i.e. the power law transformation, has been used in this work. The power law transformations have the basic form

s = c r^γ

where s is the output gray level, r is the input gray level, and c and γ are positive constants. For various values of gamma applied on an acquired image we obtained the graph shown in Fig. 1.

Fig. 1

From this figure it is evident that power law curves with fractional values of γ map a narrow range of dark input values into a wide range of output values, with the opposite being true for higher values of input levels; it also depicts the effect of increasing values of γ > 1. The images obtained with γ = 1, 2, 3, 4, 5 show that γ = 1 gives the best results in terms of making fine details identifiable. As is evident, fractional values of γ cannot be used, since they show a reverse effect of brightening the image still further, which is undesirable in the present case. For example, with c = 1 and normalized gray levels, γ = 0.5 maps an input of 0.25 up to 0.5, whereas γ = 4 maps an input of 0.5 down to 0.0625. The results obtained after applying γ = 0.5 and γ = 4 are shown in Fig. 2 and Fig. 3 respectively.

Fig. 2 Gamma = 0.5

Fig. 3 Gamma = 4
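The paper applies the power-law transformation in MATLAB; a direct implementation of s = c·r^γ on a normalized gray image might look like the sketch below. Here c = 1, the input file name and the particular γ values tried are assumptions for illustration; imadjust provides the same mapping as a built-in.

% Power-law (gamma) transformation s = c * r.^gamma on a grayscale image.
% c = 1, the file name and the gamma values tried here are illustrative assumptions.
gray = rgb2gray(imread('car.jpg'));     % assumed input image
r    = im2double(gray);                 % normalize gray levels to [0, 1]
c    = 1;

for gammaVal = [0.5 1 2 4]
    s = c * r.^gammaVal;                % power-law mapping of every pixel
    figure, imshow(s);
    title(sprintf('gamma = %g', gammaVal));
end

% Equivalent built-in form for a single gamma value:
% s = imadjust(gray, [], [], 4);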

IV. EDGE DETECTION AND IMAGE MATCHING

Step 1: Edge detection: Among the key features of an image, i.e. edges, lines, and points, we have used edges in the present work, which can be detected from the abrupt change in gray level. An edge essentially demarcates two distinctly different regions, which means that an edge is the border between two different regions.

Here we are using the edge detection method for image matching:
• Edge detection methods locate the pixels in the image that correspond to the edges of the objects seen in the image.
• The result is a binary image with the detected edge pixels.
• Common algorithms used are the Sobel, Prewitt and Laplacian operators.

We have used gradient-based edge detection, which detects the edges by looking for the maximum and minimum in the first derivative of the image.
• The first derivative is used to detect the presence of an edge at a point in an image.
• The sign of the second derivative is used to determine whether an edge pixel lies on the dark or light side of an edge.

The change in intensity level is measured by the gradient of the image. Since an image f(x, y) is a two-dimensional function, its gradient is a vector

[Gx; Gy] = [df/dx; df/dy]            (1)

The magnitude of the gradient is given by

G[f(x, y)] = sqrt(Gx^2 + Gy^2)       (2)

The direction of the gradient is

θ(x, y) = tan^-1(Gy/Gx)              (3)

where the angle θ is measured with respect to the X-axis. Gradient operators compute the change in gray level intensities and also the direction in which the change occurs. This is calculated from the difference in values of the neighboring pixels, i.e., the derivatives along the X-axis and Y-axis. In a two-dimensional image the gradients are approximated by

Gx = f(i+1, j) - f(i, j)             (4)
Gy = f(i, j+1) - f(i, j)             (5)

For example, if f(i, j) = 100, f(i+1, j) = 150 and f(i, j+1) = 120, then Gx = 50 and Gy = 20, giving a gradient magnitude of sqrt(50^2 + 20^2) ≈ 53.9 and a direction of tan^-1(20/50) ≈ 21.8° with respect to the X-axis.

Gradient operators require two masks, one to obtain the X-direction gradient and the other to obtain the Y-direction gradient. These two gradients are combined to obtain a vector quantity whose magnitude represents the strength of the edge gradient at a point in the image and whose angle represents the gradient angle.

The edge detection operator we have used in the present work is Prewitt. Mathematically, the operator uses 3×3 kernels which are convolved with the original image to calculate approximations of the derivatives, one for horizontal changes and one for vertical. If we define A as the source image, and Gx and Gy as two images which at each point contain the horizontal and vertical derivative approximations, the latter are computed as

Gx = [-1 0 1; -1 0 1; -1 0 1] * A  and  Gy = [-1 -1 -1; 0 0 0; 1 1 1] * A      (6)

Edge detection of the captured image is done using the Prewitt edge detection operator, as shown in Fig. 4.

Fig. 4 Edge-detected image (Prewitt)

Step 2: Image matching: Edge-based matching is the process in which two representations (edges) of the same objects are paired together. Any edge or its representation in one image is compared and evaluated against all the edges in the other image.

Edge detection of the reference and the real-time images has been done using the Prewitt operator. Then these edge-detected images are matched and accordingly guidance can be provided to the driver.

Fig. 5 Reference image and captured image
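For concreteness, here is a minimal MATLAB sketch of the Prewitt convolution of equation (6), the gradient magnitude of equation (2), and an edge-overlap matching percentage in the spirit of Step 2. The input file names, the binarization threshold and the overlap-based match measure are assumptions, since the paper does not specify how the matching percentage is computed.

% Prewitt kernels from equation (6), applied by 2-D convolution.
A  = im2double(rgb2gray(imread('car.jpg')));       % assumed captured image
Kx = [-1 0 1; -1 0 1; -1 0 1];                     % horizontal-change kernel
Ky = [-1 -1 -1; 0 0 0; 1 1 1];                     % vertical-change kernel

Gx = imfilter(A, Kx, 'replicate', 'conv');         % horizontal derivative approximation
Gy = imfilter(A, Ky, 'replicate', 'conv');         % vertical derivative approximation

G  = sqrt(Gx.^2 + Gy.^2);                          % gradient magnitude, equation (2)
bw = G > 0.3;                                      % assumed threshold for a binary edge map
% (equivalently, bw = edge(A, 'prewitt') uses an automatically chosen threshold)

% Edge-based matching: fraction of reference edge pixels also present in the
% captured image's edge map (illustrative measure, not the authors' exact one).
refEdge  = edge(im2double(rgb2gray(imread('reference_car.jpg'))), 'prewitt');
matchPct = 100 * nnz(refEdge & bw) / max(nnz(refEdge), 1);
fprintf('Matching percentage: %.1f%%\n', matchPct);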

V. EXPERIMENTAL RESULTS

Experiments were carried out and the following results were obtained.

Result 1: Left side parking
At first the parking lot is vacant, with a maximum capacity of 10 cars on the left side. Therefore the first 10 images of the incoming cars are matched with the reference image, and cars are allowed to enter according to the percentage of matching.

Screenshot 1

Screenshot 2

Screenshot 3

Result 2: Right side parking
After the entry of 10 vehicles on the left side, cars are guided towards the right side parking.

Screenshot 4

Screenshot 5

Result 3: Similarly, after the entry of 10 vehicles on the right side, cars are not allowed to enter the parking lot.

Screenshot 6

VI. SUMMARY AND CONCLUSIONS

A vision-based car parking lot management system is proposed in this paper. The vision-based method makes it possible to manage a large area using just several cameras. It is consistent in detecting incoming cars because it uses actual car images. It is cheap and easy to install because of the simple equipment. Drivers can get useful real-time parking lot information from this system through the guidance information display.
REFERENCES
[1] Christopher M. Bishop (2006), "Pattern Recognition and Machine Learning," Springer.
[2] E. Maeda and K. Ishii (1992), "Evaluation of Normalized Principal Component Features in Object Detection," IEICE Trans. Information and Systems, Vol. J75-D-II, No. 3, pp. 520-5?9.
[3] I. Masaki (1998), "Machine-vision systems for intelligent transportation systems," IEEE Conference on Intelligent Transportation Systems, Vol. 13 (6), pp. 24-31.
[4] K. Ueda, I. Horiba, K. Ikeda, H. Onodera, and S. Ozawa (1991), "An Algorithm for Detecting Parking Cars by the Use of Picture Processing," IEICE Trans. Information and Systems, Vol. J74-D-II, No. 10, pp. 1379-1389.
[5] M. Yachida, M. Asada, and S. Tsuji, "Automatic Analysis of Moving Images," IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. PAMI-3, No. 1, pp. 12-19.
[6] T. Hasegawa and S. Ozawa (1993), "Counting Cars by Tracking of Moving Objects in the Outdoor Parking Lot," IEICE Trans. Information and Systems, Vol. J76-D-II, No. 7, pp. 1390-139A.

