

Automatic Number Plate Recognition System

Hakob Sarukhanyan, Souren Alaverdyan, and Grigor Petrosyan


Institute for Informatics and Automation Problems of NAS RA, Yerevan, Armenia
{hakop, souren, grigor}@ipia.sci.am

ABSTRACT
A software system for license plate identification and recognition from video images is presented in this paper.

Keywords
License plate, Hough transform, digital filter, optical character recognition, chain code.

1. INTRODUCTION
The automatic detection and recognition of car number plates has become an important application of artificial vision systems [1-8]. The objective is to develop a system whereby cars passing a certain point are digitally photographed and then identified electronically, by locating the number plate in the image, segmenting the characters from the located plate, and recognizing them. Some applications for a number plate recognition system are: 1) traffic flow measurement and planning; 2) tracking stolen vehicles; 3) control and security at tolling areas, e.g. parking garages; 4) traffic law enforcement (automatically identifying speeders, illegal parking, etc.). The system could also be adapted to read, for example, warehouse box stencil codes, train rolling stock codes, and aircraft tail codes.

This system consists of two high-level stages. In the first stage, the number plate is detected and segmented from a digital image of the car being examined. The plate location and size are then passed to the second stage, the optical character recognition (OCR) subsystem.
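The two-stage flow described above can be outlined in a few lines. This is only a structural sketch: the stage functions and their return values below are hypothetical placeholders, not the authors' implementation.

```python
# Sketch of the two-stage ANPR flow: plate detection/segmentation, then OCR.
# Both stage functions are hypothetical stand-ins for the real subsystems.

def detect_plate(image):
    """Stage 1: locate the plate; return its bounding box (x, y, width, height).
    Here a fixed placeholder region is returned."""
    return (120, 200, 104, 22)

def recognize_characters(image, box):
    """Stage 2: OCR over the located plate region (placeholder result)."""
    return "ABC123"

def read_plate(image):
    box = detect_plate(image)                # plate location and size ...
    return recognize_characters(image, box)  # ... are passed to the OCR stage
```

The only real content here is the interface between the stages: the detector hands a location and size to the OCR subsystem, exactly as described above.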

Fig. 1 shows the proposed license plate recognition process. There are six primary algorithms that the software requires to identify a license plate:
1. Plate localisation – responsible for finding and isolating the plate in the picture;
2. Plate orientation and sizing – compensates for the skew of the plate and adjusts the dimensions to the required size;
3. Normalisation – adjusts the brightness and contrast of the image;
4. Character segmentation – finds the individual characters on the plate;
5. Optical character recognition;
6. Syntactical/geometrical analysis – checks the characters and their positions against country-specific rules.

Fig. 1: Diagram of the proposed LPR process.

The methods are all based on several assumptions concerning the shape and appearance of the license plate. The assumptions are listed below:
a) the license plate is a rectangular region of an easily discernible color;
b) the width-height relationship of the license plate is known in advance;
c) the orientation of the license plate is approximately aligned with the axes;
d) orthogonality is assumed, meaning that a straight line in the scene is also straight in the image and is not optically distorted.

2. EXTRACTING LICENSE PLATES BY HOUGH TRANSFORM

This section presents a method for extracting license plates based on the Hough transform. The first step is to threshold the gray-scale source image. The resulting image is then passed through two parallel sequences, in order to extract horizontal and vertical line segments respectively. The first step in both of these sequences is to extract edges; the result is a binary image with the edges highlighted. This image is then used as input to the Hough transform, which produces a list of lines in the form of accumulator cells. These cells are then analyzed and line segments are computed. Finally, the lists of horizontal and vertical line segments are combined, and any rectangular regions matching the dimensions of a license plate are kept as candidate regions. These candidate regions are the output of the algorithm.
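The Hough voting scheme used here (and detailed in Section 3) can be sketched as follows. This is a minimal illustration, not the authors' code: the quantization step (1-degree bins for the angle) and the way negative distances are index-shifted are assumptions.

```python
import math

# Sketch of Hough accumulation: each edge point (x, y) votes for every
# quantized (r, theta) cell whose line x*cos(theta) + y*sin(theta) = r
# passes through it. Peaks in the accumulator correspond to lines.

N_THETA = 180  # quantize theta into 1-degree steps over [0, pi)

def hough_accumulate(edge_points, r_max):
    # accumulator[theta_index][r_index]; r is shifted by r_max so that
    # negative distances also map to valid indices
    acc = [[0] * (2 * r_max + 1) for _ in range(N_THETA)]
    for x, y in edge_points:
        for t in range(N_THETA):
            theta = math.pi * t / N_THETA
            r = round(x * math.cos(theta) + y * math.sin(theta))
            acc[t][r + r_max] += 1
    return acc
```

Each of the n edge points does a constant amount of work (N_THETA votes), which is why the accumulation runs in O(n) time, as noted in Section 3. For three collinear points on the horizontal line y = 5, the cell at theta = 90 degrees, r = 5 collects all three votes.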

Fig. 2: Overview of the Hough method.

As shown in Fig. 2, the algorithm behind the method consists of five steps. The first step is to detect edges in the source image. This operation in effect reduces the amount of information contained in the source image by removing everything but edges in either the horizontal or the vertical direction. This is highly desirable, since it also reduces the number of points the Hough transform has to consider. The edges are detected using spatial filtering. The choice of kernels was based partly on experiments and partly on the fact that they produce edges with a thickness of a single pixel, which is the desirable input to the Hough transform.

3. LINE DETECTION USING HOUGH TRANSFORM

The Hough transform is a method for detecting lines in binary images. The method was developed as an alternative to the brute-force approach of finding lines, which is computationally expensive (O(n³)). In contrast, the Hough transform performs in linear time. The Hough transform works by rewriting the general equation for a line through (x_i, y_i) as:

    y_i = a·x_i + b  ⇒  b = −a·x_i + y_i    (1)

For a fixed (x_i, y_i), Eq. (1) yields a line in parameter space, and a single point on this line corresponds to a line through (x_i, y_i) in the original image. Finding lines in an image now simply corresponds to finding intersections between lines in parameter space. In practice, instead of Eq. (1) the following form is used:

    x·cos θ + y·sin θ = r    (2)

In Eq. (2) the parameter θ is the angle between the normal to the line and the x-axis, and the parameter r is the perpendicular distance between the line and the origin; this is illustrated in Fig. 3. Also, in contrast to the previous method, where points in the image corresponded to lines in parameter space, in the form shown in Eq. (2) points correspond to sinusoidal curves in the (r, θ) plane.

Fig. 3: Example image and corresponding Hough transform.

Fig. 3 shows two points, (x_1, y_1) and (x_2, y_2), and their corresponding curves in parameter space. As expected, the parameters of their point of intersection in parameter space correspond to the parameters of the dashed line between the two points in the original image. In accordance with the preceding paragraph, the goal of the Hough transform is to identify points in parameter space where a high number of curves intersect; together, these curves correspond to an equal number of collinear points in the original image. A simple way to solve this problem is to quantize the parameter space. The resulting rectangular regions are called accumulator cells, and each cell corresponds to a single line in the image. The algorithm behind the Hough transform is now straightforward to derive. First, the accumulator array is cleared to zero. Then, for each point in the edge image, iterate over all possible values of θ and compute r using Equation (2). Finally, for each computed (r, θ) pair, increment the corresponding accumulator cell by one. Since the algorithm iterates once over all edge points, it performs in O(n) time.

4. PLATE CANDIDATE RECTANGLES FILTERING

After the identification of plate candidates, several special filters must be applied before the optical character recognition stage. Below we give a short description of this filtering, using a real plate candidate as an example (see Fig. 4).

Fig. 4. Plate candidate.

Step 1. Apply a low-frequency (smoothing) filter with the convolution matrix

                 | 1 1 1 |
    H = (1/10) · | 1 2 1 |
                 | 1 1 1 |

The result of this filter is given in Fig. 5.
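The smoothing of Step 1 can be sketched directly from the kernel above. This is a minimal pure-Python illustration; the integer arithmetic and the choice to leave border pixels unchanged are assumptions, not the authors' stated implementation.

```python
# Sketch of Step 1: convolve a grayscale image with the low-frequency kernel
# H = (1/10) * [[1, 1, 1], [1, 2, 1], [1, 1, 1]].
# Border pixels are copied through unchanged (an assumed border policy).

H = [[1, 1, 1],
     [1, 2, 1],
     [1, 1, 1]]  # weights sum to 10, hence the 1/10 normalization

def smooth(img):
    rows, cols = len(img), len(img[0])
    out = [row[:] for row in img]  # copy; borders stay as in the source
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            acc = sum(H[di + 1][dj + 1] * img[i + di][j + dj]
                      for di in (-1, 0, 1) for dj in (-1, 0, 1))
            out[i][j] = acc // 10  # keep integer grayscale values
    return out
```

Because the center weight is 2 while the eight neighbors have weight 1, the filter suppresses high-frequency noise while giving the pixel's own value slightly more influence than a plain box blur would.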
Fig. 5. Plate after convolution.

Step 2. Apply the hyperbolic sine filter for brightness enhancement:

    sh(x) = (e^x − e^(−x)) / 2,

where x ∈ [0, 1] is a normalized pixel value of the plate image. The result is given in Fig. 6.

Fig. 6. The result of the sh(x) filter.

Step 3. Apply the homogeneity enhancement filter h1(x) = (x^k)^x, where x ∈ [0, 1] and k = 3, 4, …, 20.

Fig. 7. The result of the h1(x) filter.

Step 4. Apply the image binarization filter

    x*_{i,j} = 1, if x_{i,j} ≤ s_{i,j}; 0, otherwise,

where the threshold s_{i,j} is defined as

    s_{i,j} = (1/25) · Σ_{p=i−2..i+2} Σ_{q=j−2..j+2} x_{p,q},  x_{p,q} ∈ [0, 255],

i.e. the mean gray value over the 5×5 neighborhood of pixel (i, j).

Fig. 8. The result of the binarization filter.

Step 5. Apply the thinning filter over the binary image to obtain contours of one-pixel thickness [Pratt]. The result of this filter is given in Fig. 9.

Fig. 9. The result of the thinning filter.

Step 6. Using the vertical and horizontal projections, remove the false pieces on the plate and determine the number of rows and the number of symbols in the plate.

Fig. 10. The result after removing false pieces.

Step 7. Define the chain codes of the symbols. To create the chain code of a symbol from Fig. 10, the symbol is first approximated by linear segments, as shown in Fig. 11; then, using the basic directions (see Fig. 12), we obtain the appropriate chain code of the symbol.

Fig. 11. Approximation by linear segments.

Fig. 12. Basic directions scheme (the eight directions are numbered 1-8).

The appropriate chain codes for the considered example are given in the table below:

    Symbol number   Chain code
    1               31
    2               31823
    3               13
    4               82173
    5               1357
    6               1357
    7               38

Step 8. Symbol recognition. Each symbol in the plate is thus associated with a chain code. Note that these codes can differ for the same symbol; moreover, the same chain code can be associated with different symbols. This phenomenon is called a collision. To make symbol recognition unique, additional parameters, such as the segment slope angle, the symbol height, the index in the row, etc., are calculated while constructing the chain code.

For example, the first symbol, "1", gets the chain code 31, and the last one, "7", gets the chain code 38. If the vertical slope of the segments of symbol "7" is not taken into account, it also receives chain code 31, and a collision appears (symbols "1" and "7" get the identical chain code 31). When the segment slope parameter is considered, symbol "7" gets chain code 38 instead of 31. It should be noted that everything depends on the quality of the scanned image: if the symbols' parameters differ only slightly from each other, collisions can still appear.
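The segment-to-code mapping of Step 7 can be sketched as follows. Note the hedges: the paper's actual direction numbering is defined in Fig. 12, so the numbering used below, and the segment format, are assumptions chosen purely for illustration.

```python
# Sketch of Step 7: turn a symbol's linear-segment approximation into a
# chain code over 8 basic directions. The numbering here is an assumed
# compass-style scheme (1=N, 2=NE, 3=E, 4=SE, 5=S, 6=SW, 7=W, 8=NW),
# not necessarily the one in Fig. 12 of the paper.

DIRECTIONS = {(0, -1): 1, (1, -1): 2, (1, 0): 3, (1, 1): 4,
              (0, 1): 5, (-1, 1): 6, (-1, 0): 7, (-1, -1): 8}

def sign(v):
    return (v > 0) - (v < 0)

def chain_code(segments):
    """segments: list of ((x1, y1), (x2, y2)) linear pieces of a symbol.
    Each segment contributes one digit according to its dominant direction."""
    code = ""
    for (x1, y1), (x2, y2) in segments:
        code += str(DIRECTIONS[(sign(x2 - x1), sign(y2 - y1))])
    return code
```

For instance, a "1"-like glyph approximated by a short up-right serif followed by a long downward stroke yields a two-digit code, which shows how different symbols can still collide on the same code once slopes and lengths are discarded.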
5. ACKNOWLEDGEMENT

The authors would like to thank the ISTC for supporting this research under project A-1451.
REFERENCES
[1] R. Gonzalez, R. Woods, Digital Image Processing, Prentice Hall, New Jersey, 2002.
[2] V. Shapiro, D. Dimov, S. Bonchev, V. Velichkov, G. Gluhchev, "Adaptive License Plate Image Extraction", International Conference on Computer Systems and Technologies, Rousse, Bulgaria, 2004.
[3] Y. Zhang, C. Zhang, "New Algorithm for Character Segmentation of License Plate", IEEE Intelligent Vehicles Symposium, 2003.
[4] Shyang-Lih Chang, Li-Shien Chen, Yun-Chung Chung, Sei-Wan Chen, "Automatic License Plate Recognition", IEEE Trans. on Intelligent Transportation Systems, vol. 5, no. 1, 2004.
[5] Y. Cui, Q. Huang, "Character extraction of license plates from video", IEEE Conf. Computer Vision and Pattern Recognition, 1997, pp. 502-507.
[6] D. S. Gao, J. Zhou, "Car license plates detection from complex scene", Proc. 5th Int. Conf. Signal Processing, vol. 2, 2000, pp. 1409-1414.
[7] S. K. Kim, D. W. Kim, H. J. Kim, "A recognition of vehicle license plate using a genetic algorithm based segmentation", Proc. Int. Conf. Image Processing, vol. 2, 1996, pp. 661-664.
[8] C. Coetzee, C. Botha, D. Weber, "PC Based Number Plate Recognition System", IEEE, 1998.
