
Journal of Computer Science

Original Research Paper

Human Iris Recognition Based on Hybrid Technique


1Asaad Noori Hashim and 2Bushraa Mahdi Al-Hashimi
1Department of Computer Science, Faculty of Computer Science and Mathematics, University of Kufa, Iraq
2Department of Computer Science, Faculty of Education, University of Kufa, Iraq

Article history
Received: 25-08-2019
Revised: 18-11-2019
Accepted: 03-12-2019

Corresponding Author:
Asaad Noori Hashim
Department of Computer Science, Faculty of Computer Science and Mathematics, University of Kufa, Iraq
Email: Asaad.alshareefi@uokufa.edu.iq

Abstract: Iris recognition is a biometric technique that uses iris pattern information to identify a person. Initially, the system finds the boundaries of the pupil and the iris. The Circular Hough Transform is then used to locate the centers of both the pupil and the iris in order to crop the iris region from the eye image. After that, Daugman's Rubber Sheet model is applied to normalize the iris region. Features are then extracted based on Legendre moments and the Local Quantized Pattern. Several moment orders and several iris regions were tested to find the combination that gives the highest recognition rate. Matching is performed with the City Block Distance. The simulation was carried out using samples from the CASIA.v4-Interval database; the main programming tool is MATLAB.

Keywords: Iris Recognition, Biometric, Feature Extraction, Legendre, Local Quantized Pattern (LQP)

Introduction

In this automated world, there is rapid development in modern science and technology, widespread use of computers and electronic devices and a growing world population. The main problem, however, is security in its different aspects, which necessitates a very precise and reliable authentication technology. Authentication plays a fundamental role, as it is the first line of defense against intruders. Traditional systems should therefore be replaced by accurate, convenient and effective alternatives. In addition, governments and private sectors are increasingly encouraging the use of biometric systems.

The three basic types of authentication are something you know, such as a password; something you have, such as a card or token; and something you are, such as biometric measures.

Any physiological or behavioral attribute is biometric if it satisfies the following criteria:

 Universality: all humans have it
 Distinctiveness: it differs from one individual to another
 Invariance: it does not change over time
 Collectability: it is easily collectible in terms of acquisition, digitization and feature extraction from the population
 Performance: data collection is available and high accuracy is guaranteed
 Acceptability: the population is ready to present the attribute to the recognition system

Biometric identifiers are categorized as either physiological or behavioral. The physiological type is specifically related to the shape of the body (e.g., fingerprint, palm veins, face recognition, DNA, palm print, hand geometry, iris recognition, retina and odor/scent). The behavioral category is related to the behavioral nature of human beings (e.g., rhythm, gait and voice). These biometric features are unique, remain constant for each person and can be used to identify individuals, owing to the difficulty of replicating and reusing them by someone other than the biometric owner.

Automated identification systems based on iris recognition are often regarded as the most reliable of all biometric methods. The probability of finding two persons with identical iris patterns is almost zero. The iris has several advantages. First, it is characterized by a unique texture pattern: it has a very rich and complex random form that includes features unique to each individual and is not affected by genetic factors, only by the primary environment of the fetus. Remarkably, even twins have different iris textures and in the same person the left-eye pattern differs from the right-eye pattern. Second, the iris begins to form during the third month of pregnancy; the iris pattern is largely shaped by the age of three years and remains almost constant throughout life in the absence of external damage. Third, unlike other biometric properties, the iris is protected from the external environment by the cornea unless there is an eye disease.

© 2019 Asaad Noori Hashim Al-Shareefi and Bushraa Mahdi Al-Hashimi. This open access article is distributed under a
Creative Commons Attribution (CC-BY) 3.0 license.
Asaad Noori Hashim and Bushraa Mahdi Al-Hashimi / Journal of Computer Science 2019, 15 (12): 1734.1745
DOI: 10.3844/jcssp.2019.1734.1745

Fig. 1: Human iris

The Human Iris

The iris is the colored circular region of the eye. Close to its center lies the pupil, a circular hole. The iris consists of the sphincter and dilator muscles, which adjust the size of the pupil and thereby control the amount of light entering through it. The average diameter of the iris is 12 mm. Its distinctive texture is shaped by fibrous and cellular structures such as ligaments, grooves, crypts, rings, frills, crowns, eyelashes and sometimes moles and freckles; the components of the human eye are illustrated in Fig. 1.

Biometric History

The idea of using personal identity patterns was proposed in 1936 by the ophthalmologist Frank Burch. By 1980 the idea had appeared in James Bond films, but it was still science fiction and guesswork. In 1987, two ophthalmologists, Aram Safir and Leonard Flom, took up this idea and established that the iris pattern differs for each person. In the same year, they asked John Daugman to try to create actual algorithms for identifying the iris. The algorithms Daugman produced by 1994 are the basis of all existing iris recognition systems and products (Daugman, 1993; Prasad et al., 2018).

The Application

Extensive applications of iris systems include access control to secure areas (buildings), control of distributed systems, secure financial transactions, credit card authentication, secure access to bank accounts, computer or database access and counterterrorism. Iris systems are deployed in many countries for airline crews, airport staff, national ID cards, identification of missing children, voting systems in parliamentary and assembly polls and many other uses.

Iris Recognition System

Most biometric systems operate in two modes: an enrollment mode, in which templates are added to a database, and an identification mode, in which a template is created for an individual and then matched against the database of pre-registered templates.

The primary stages of an iris recognition system design include the following:

 Localization of the pupil and iris
 Segmentation of the borders of the iris and the pupil
 Normalization of the iris part
 Feature extraction
 Matching

Authentication is achieved by comparing the template generated from the iris image with the templates stored in the database. Matching is performed one-to-many for identification or one-to-one for verification.

Related Work

Jain et al. (2012) presented a biometric algorithm for iris recognition using the Fast Fourier Transform and calculating all possible sets of normalized moments, which are invariant to rotation and scale transformations. The Fast Fourier Transform converts the image from the spatial domain to the frequency domain; it also filters noise in the image and yields more precise information. The paper used the CASIA iris image database ver. 1.0 and ver. 2.0. The algorithm achieved a high Correct Recognition Rate (Jain et al., 2012).

Mabrukar et al. (2013) presented a feature extraction method based on extracting the statistical features of an iris by binarizing the first- and second-order multi-scale Taylor coefficients, using the CASIA database in MATLAB. In their experiments, multi-scale Taylor-based features were largely immune to illumination changes, partially due to neglecting the 0th Taylor coefficient. Feature extraction using the multi-scale Taylor expansion was also implemented and yielded good results (Mabrukar et al., 2013).

Hosaini et al. (2013) compared the performance of Legendre, Zernike and Pseudo-Zernike moments in feature extraction for iris recognition. They increased the moment orders until the best recognition rate was achieved and evaluated the robustness of these moments at various orders in the presence of white Gaussian noise. Numerical results indicate that the recognition rates of the Legendre, Zernike and Pseudo-Zernike moments at higher orders are approximately identical. However, the average computation times for feature extraction at order 14 are 4.5, 18 and 0.75 seconds for the Legendre, Zernike and Pseudo-Zernike moments, respectively. On the other hand, the results indicate that the Legendre moment is more robust than the others against white Gaussian noise (Hosaini et al., 2013).


Sarmah and Kumar (2013) presented an algorithm based on Legendre moments. The algorithm takes advantage of the translation-invariance property of the Legendre moments, so it can reduce the computational cost of iris matching on a larger iris image database. The system was tested on the UPOL image database (Sarmah and Kumar, 2013).

Kaur et al. (2018) proposed a discrete orthogonal moment-based feature extraction that extracts global as well as local features. Krawtchouk moments extract local features; Tchebichef moments extract global characteristics of the entire image block; Dual-Hahn moments extract both global and local features. The performance of the proposed method was evaluated on four publicly available databases, achieving an improved accuracy of 99.80% for CASIA-IrisV4-Interval, 99.90% for IITD.v1, 100% for UPOL and 97.50% for UBIRIS.v2 as compared with recently proposed methods. The technique was found to be robust for NIR as well as visible images under uncontrolled environmental conditions (Kaur et al., 2018).

Al-Juburi et al. (2017) presented a new iris recognition system using hybrid methods to extract the features of tested eye images: Gabor wavelets and Zernike moments were used to extract iris features. The proposed system was tested on the CASIA-v4.0 Interval database and the results show a good accuracy of about 97%. PSNR was applied to the training and testing iris images to measure the similarity between them (Al-Juburi et al., 2017).

Gnana et al. (2018) proposed an architecture for iris recognition and validated it on a dataset of visible images obtained from the University of Warsaw. They undertook a comparative analysis using LBPH features and Zernike features and inferred that the proposed approach performed better with the visible images (Gnana et al., 2018).

Methodology

One of the main ways to perform iris recognition is to construct feature vectors corresponding to individual iris images and to match irises based on some distance measure. The extraction of features is a fundamental problem in iris-based recognition: performance is greatly influenced by many parameters of the feature extraction process (e.g., spatial location, direction, central frequency) and may vary with the environmental conditions under which the iris image is acquired. Many techniques are used for feature extraction and merging two or more of these methods may produce a good result.

In image recognition, the rotation, scaling and translation invariance properties of image moments are highly significant. Hu introduced the use of moments for image analysis and pattern recognition (Hu, 1962). Legendre moments are classical orthogonal moments and are among the most widely and commonly used moments in recognition and image analysis (Oujaoura et al., 2014).

Iris Localization and Segmentation

Iris boundary detection is an important stage in the iris recognition system. Firstly, light reflections inside the pupil are removed by adjusting the image intensity values and filling the holes (Fig. 2a). The next step is to find the pupil center and pupil radius with the Hough transform; in our case, the approximate lowest and highest radii of the pupil are given as input (Fig. 2b). Then the iris radius is computed in order to crop the iris region from the eye image (Fig. 2d).

Fig. 2: Steps of localization and segmentation
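The Hough-based circle search described above can be sketched in code. The following NumPy sketch is illustrative only (the paper's implementation is in MATLAB); the function name, the 128-angle sampling and the synthetic test circle are our own choices:

```python
import numpy as np

def find_circle(edge_mask, r_min, r_max):
    """Brute-force Circular Hough Transform: every edge pixel votes for all
    candidate centers at distance r; the (center, radius) bin with the most
    votes wins.  Returns (cx, cy, r)."""
    h, w = edge_mask.shape
    ys, xs = np.nonzero(edge_mask)
    thetas = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
    best_votes, best = -1, (0, 0, 0)
    for r in range(r_min, r_max + 1):
        acc = np.zeros((h, w), dtype=np.int32)
        cx = np.rint(xs[:, None] - r * np.cos(thetas)[None, :]).astype(int)
        cy = np.rint(ys[:, None] - r * np.sin(thetas)[None, :]).astype(int)
        ok = (cx >= 0) & (cx < w) & (cy >= 0) & (cy < h)
        np.add.at(acc, (cy[ok], cx[ok]), 1)   # cast the votes
        if acc.max() > best_votes:
            best_votes = acc.max()
            y0, x0 = np.unravel_index(acc.argmax(), acc.shape)
            best = (x0, y0, r)
    return best

# synthetic "pupil boundary": a circle of radius 20 centred at (50, 40)
mask = np.zeros((100, 100), dtype=bool)
t = np.linspace(0.0, 2.0 * np.pi, 720)
mask[np.rint(40 + 20 * np.sin(t)).astype(int),
     np.rint(50 + 20 * np.cos(t)).astype(int)] = True
cx, cy, r = find_circle(mask, 15, 25)
```

In practice the same search is run twice with different radius ranges, once for the pupil and once for the iris boundary.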


Iris Normalization

After computing the inner and outer circles of the iris, the iris region is segmented out and normalized by converting from polar to Cartesian coordinates for easier computation, as shown in Fig. 3 (Daugman, 1993). The polar coordinates are defined by r (the radial coordinate) and θ (the angular coordinate, often called the polar angle), while the Cartesian coordinates are defined by x and y (Equation 1), giving the iris region as a matrix of data:

x = r\cos\theta, \quad y = r\sin\theta  (1)

Fig. 3: Daugman's rubber sheet model for the conversion from polar to Cartesian coordinates

One of the problems in an iris recognition system is the occlusion caused by eyelashes and eyelids, as shown in Fig. 4. This occlusion increases the complexity and degrades the performance of the feature extraction and matching processes.

Fig. 4: Sample of occlusion caused by eyelids and eyelashes

To deal with this, the proposed approach was applied to several iris regions in order to select a Region Of Interest (ROI) from the iris area that avoids the regions in which occlusion may occur. The following five regions were used for experimentation:

a) Upper region
b) Down region
c) Two-sides region
d) The circular region around the pupil
e) The circular region around the pupil + two-sides region, as shown in Fig. 5

Fig. 5: Iris regions for experimentation
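Equation 1 and the rubber-sheet model can be illustrated with a minimal unwrapping routine. This sketch assumes concentric pupil and iris circles and nearest-neighbor sampling, which simplifies Daugman's model; the function name and grid sizes are illustrative, not the authors' code:

```python
import numpy as np

def rubber_sheet(eye, pupil_c, pupil_r, iris_r, n_radial=32, n_angular=128):
    """Unwrap the iris ring into a fixed-size rectangular block:
    row i samples a circle interpolated between the pupil and iris radii,
    column j samples the angle theta (x = r cos(theta), y = r sin(theta))."""
    thetas = np.linspace(0.0, 2.0 * np.pi, n_angular, endpoint=False)
    radii = np.linspace(0.0, 1.0, n_radial)
    out = np.zeros((n_radial, n_angular), dtype=eye.dtype)
    cx, cy = pupil_c
    for i, t in enumerate(radii):
        r = pupil_r + t * (iris_r - pupil_r)   # pupil boundary -> iris boundary
        x = np.clip(np.rint(cx + r * np.cos(thetas)).astype(int), 0, eye.shape[1] - 1)
        y = np.clip(np.rint(cy + r * np.sin(thetas)).astype(int), 0, eye.shape[0] - 1)
        out[i, :] = eye[y, x]
    return out
```

The resulting fixed-size block is what the later feature extraction stage operates on, regardless of pupil dilation in the input image.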


Features Extraction

The extraction of features remains a significant phase in an iris recognition system. A successful recognition rate and a reduction in the recognition time of two iris templates depend largely on an efficient feature extraction technique. A great deal of higher-level information about the image can be captured by small patterns of qualitative differences in the local gray level, using local pattern features such as the Local Binary Pattern (LBP), Local Triangular Pattern (LTP) and Local Quadrant Pattern (LQP). Local patterns have proven very successful in visual recognition tasks ranging from texture classification to face analysis and object detection.

A. Legendre Moments

The two-dimensional Legendre moments of order (p, q) for an image intensity function f(x, y) are defined as:

L_{p,q} = \frac{(2p+1)(2q+1)}{4} \int_{-1}^{1}\int_{-1}^{1} P_p(x)\, P_q(y)\, f(x, y)\, dx\, dy, \quad p, q = 0, 1, 2, \ldots  (2)

The kernel functions P_p denote Legendre polynomials of order p:

P_p(x) = \sum_{\substack{k=0 \\ (p-k)\ \text{even}}}^{p} (-1)^{(p-k)/2}\, \frac{1}{2^p}\, \frac{(p+k)!\, x^k}{\left(\frac{p-k}{2}\right)!\, \left(\frac{p+k}{2}\right)!\, k!}  (3)

and the recurrence formula of the Legendre polynomials is:

P_{p+1}(x) = \frac{2p+1}{p+1}\, x\, P_p(x) - \frac{p}{p+1}\, P_{p-1}(x), \quad P_1(x) = x, \quad P_0(x) = 1  (4)

To compute Legendre moments from a digital image, the integrals in (Equation 2) are replaced by summations and the image coordinates must be normalized into [-1, 1]. The numerical approximation of the Legendre moments for a discrete image of N×M pixels with intensity function f(x, y) is therefore:

L_{pq} = \frac{(2p+1)(2q+1)}{MN} \sum_{i=0}^{M-1} \sum_{j=0}^{N-1} P_p(x_i)\, P_q(y_j)\, f(x_i, y_j)  (5)

where x_i and y_j denote the normalized pixel coordinates in the range [-1, 1], given by:

x_i = \frac{2i - (M-1)}{M-1}, \quad y_j = \frac{2j - (N-1)}{N-1}  (6)

Symmetry and recursion properties of the orthogonal basis functions can be exploited to speed up the computation (Oujaoura et al., 2014).

B. Local Quadrant Pattern (LQP)

Hussain and Triggs proposed the LQP operator as a development of LBP for visual recognition. For each pixel of an image, the LBP method extracts a binary descriptor by taking the intensity of the central pixel as the threshold for its neighborhood (ul Hussain and Triggs, 2012). Figure 6 gives an illustration with an example of eight neighbors equally spaced around the central pixel. Let I_c and I_p (p = 1, 2, ..., 8) denote the intensities of the central pixel and its neighbors, respectively. The operator performs the binary test as follows (Equation 7):

LBP_{R,N} = \sum_{p=0}^{N-1} f(I_p - I_c)\, 2^p, \quad f(x) = \begin{cases} 1 & x \geq 0 \\ 0 & x < 0 \end{cases}  (7)

where R denotes the sampling radius and N the number of sample points equally spaced around the circle. A binary code with N bits is obtained from each pixel, resulting in 2^N different patterns; finally, each pattern is converted to a decimal value.

The LQP collects the directional geometric features in Horizontal (H), Vertical (V), Diagonal (D) and Anti-diagonal (A) strips of pixels and in combinations of these (HVDA) (Fig. 7a).

The Local Quadrant Pattern is a recently proposed method based on the idea of LTP (Al-Jawahry and Mohammed, 2019). First, the difference between the center pixel I_c and each neighbor pixel I_i is calculated (Equation 8):

D_i = I_i - I_c, \quad i = 1, 2, \ldots, 8  (8)

After that, the two results D_i belonging to each direction are put into one vector according to (Equation 9) (Rao and Rao, 2015), as shown in Fig. 7b:

p_\alpha = F(D_i, D_{i+4}), \quad i = 1 + \alpha/45^\circ, \quad \alpha \in \{0^\circ, 45^\circ, 90^\circ, 135^\circ\}  (9)

Then, (Equation 10) is applied to each resulting pair p_\alpha = (p_{i1}, p_{i2}) from (Equation 9):

F_i = \begin{cases} 3 & |p_{i1}| > t,\ |p_{i2}| > t,\ \operatorname{sign}(p_{i1}) = \operatorname{sign}(p_{i2}) \\ 2 & |p_{i1}| > t,\ |p_{i2}| > t,\ \operatorname{sign}(p_{i1}) \neq \operatorname{sign}(p_{i2}) \\ 1 & (|p_{i1}| > t,\ |p_{i2}| \leq t) \text{ or } (|p_{i1}| \leq t,\ |p_{i2}| > t) \\ 0 & \text{otherwise} \end{cases}  (10)

where F_i is the result for each line, i = 1, 2, ..., 4 and t is a specific threshold.


Fig. 6: A conventional LBP coding (thresholding a 3×3 neighborhood at the center value yields the binary code 10100100 = 164)
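The thresholding of Equation 7 and Fig. 6 can be written compactly. This sketch uses one common bit ordering (clockwise from the top-left neighbor, most significant bit first), which is an assumption, since the paper does not fix the ordering:

```python
import numpy as np

def lbp_code(block):
    """Conventional 3x3 LBP: threshold the 8 neighbors at the center value
    (Equation 7) and read them as an 8-bit number."""
    c = block[1, 1]
    # (row, col) of the 8 neighbors, clockwise from the top-left corner
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = [1 if block[y, x] >= c else 0 for y, x in order]
    # pack MSB-first into one decimal value
    return sum(b << i for i, b in enumerate(reversed(bits)))
```

Applying this at every interior pixel of the normalized iris block produces the LBP map whose histogram (or, here, whose moments) serve as texture features.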

Fig. 7: The LQP calculation using the HVDA geometric structure: (a) for a given 7×7 pattern (b) for a given 3×3 pattern
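Equations 8-10 can be combined into a per-pixel quadrant code. The sketch below pairs opposite neighbors per direction and packs the four two-bit results with shifts of 2i, a base-4 reading of the summation in Equation 11; both the neighbor ordering and this packing are our assumptions, not details fixed by the paper:

```python
import numpy as np

def lqp_code(block, t):
    """Quadrant code for the center of a 3x3 block: differences to the 8
    neighbors (Equation 8), paired per direction H/D/V/A (Equation 9),
    each pair mapped to {0..3} (Equation 10), then packed into one value."""
    c = block[1, 1]
    # neighbors 1..8 counter-clockwise starting at the right (assumed order)
    coords = [(1, 2), (0, 2), (0, 1), (0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
    d = [block[y, x] - c for y, x in coords]        # Equation 8
    code = 0
    for i in range(4):                               # alpha = 0, 45, 90, 135 deg
        p1, p2 = d[i], d[i + 4]                      # opposite pair, Equation 9
        if abs(p1) > t and abs(p2) > t:
            f = 3 if np.sign(p1) == np.sign(p2) else 2   # Equation 10
        elif abs(p1) > t or abs(p2) > t:
            f = 1
        else:
            f = 0
        code += f << (2 * i)                         # base-4 packing
    return code
```

A flat block maps to 0, while a strong horizontal edge with both side differences above the threshold and the same sign maps the horizontal pair to 3.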

Finally, these values are converted to a decimal number and summed to obtain the new value of the center pixel, LQP, as follows (Equation 11):

LQP_{row} = \sum_{i=0}^{row-1} F_i\, 2^i  (11)

where F_i is the value resulting from (Equation 10).

The feature vector consists of two parts: first, V1 is obtained by computing the Legendre moments of the f(x, y) matrix resulting from the iris normalization stage; second, V2 is obtained by applying the LQP method to f(x, y) and then computing the Legendre moments. Finally, V3 is generated by appending V2 to V1.

Matching

For matching, the City Block Distance is used because it yields a higher recognition accuracy than other methods (Sari et al., 2018). The City Block Distance calculates the absolute difference between two vectors according to (Equation 12), where in the proposed system the feature vectors come from the previously described techniques:

d(Q, V) = \sum_{i=1}^{N} |Q_i - V_i|  (12)

Results and Discussion

The proposed approach was implemented and tested on the CASIA-V4-Interval database. The system was developed in MATLAB (version R2017a) under the Windows 10 operating system on a laptop. Computing time is reported as the Average Recognition Time (ART) in seconds. Table 13 shows the ART, computed as the average time for comparing each testing image with all training images in the database.

Tables 1 to 5 (left-eye images) and Tables 6 to 10 (right-eye images) show that, when Legendre moments alone are used for feature extraction, increasing the Legendre order increases the recognition rate; these steps are shown in Fig. 8, part C1. As shown in Figs. 9 and 10, when the system applies LQP followed by Legendre moments, the recognition rate is high, but it decreases at some Legendre orders; these steps are shown in Fig. 8, part C2.
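Equation 12 and the one-to-many identification step can be sketched as follows; the gallery dictionary and feature values are an illustrative toy, not data from the paper:

```python
import numpy as np

def city_block(q, v):
    """City Block (L1 / Manhattan) distance between feature vectors (Equation 12)."""
    return np.abs(np.asarray(q, dtype=float) - np.asarray(v, dtype=float)).sum()

def identify(query, enrolled):
    """One-to-many matching: return the enrolled identity at minimum distance."""
    return min(enrolled, key=lambda name: city_block(query, enrolled[name]))

# toy gallery of enrolled feature vectors
gallery = {"A": [1.0, 2.0, 3.0], "B": [4.0, 0.0, 1.0]}
print(identify([3.9, 0.2, 1.1], gallery))  # -> B
```

Verification (one-to-one) is the same distance compared against a single stored template and an acceptance threshold.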

Fig. 8: Block diagram of the proposed iris recognition system
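The V1/V2/V3 construction of Fig. 8 can be sketched end-to-end. To keep the sketch self-contained and short it uses simplified stand-ins (raw geometric moments instead of Legendre moments and a simplified per-pixel LQP map); it shows the fusion structure, not the paper's exact features:

```python
import numpy as np

def lqp_map(img, t):
    """Stand-in LQP transform: a per-pixel quadrant code over 3x3 blocks
    (border pixels left at 0); a simplified sketch, not the paper's exact map."""
    out = np.zeros_like(img, dtype=float)
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            d = img[y-1:y+2, x-1:x+2] - img[y, x]
            pairs = [(d[1, 2], d[1, 0]), (d[0, 2], d[2, 0]),
                     (d[0, 1], d[2, 1]), (d[0, 0], d[2, 2])]
            code = 0
            for i, (p1, p2) in enumerate(pairs):
                if abs(p1) > t and abs(p2) > t:
                    f = 3 if np.sign(p1) == np.sign(p2) else 2
                elif abs(p1) > t or abs(p2) > t:
                    f = 1
                else:
                    f = 0
                code += f << (2 * i)
            out[y, x] = code
    return out

def moments(img, order):
    """Stand-in global features: raw geometric moments up to `order`
    (the paper uses Legendre moments; raw moments keep the sketch short)."""
    M, N = img.shape
    xi = (2 * np.arange(M) - (M - 1)) / (M - 1)
    yj = (2 * np.arange(N) - (N - 1)) / (N - 1)
    return np.array([(xi ** p) @ img @ (yj ** q)
                     for p in range(order + 1) for q in range(order + 1 - p)])

def feature_vector(norm_iris, order=3, t=2):
    v1 = moments(norm_iris, order)               # V1: moments of the iris block
    v2 = moments(lqp_map(norm_iris, t), order)   # V2: moments of its LQP map
    return np.concatenate([v1, v2])              # V3: fusion, V2 appended to V1
```

The doubling of the feature count under fusion (visible in Table 13, where each order's count doubles) follows directly from this concatenation.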

Fig. 9: Accuracy ratio for left eye (down iris region) using Legendre moment only, Legendre with LQP and Legendre appended with Legendre-with-LQP (5 enroll: 5 test)


Fig. 10: Accuracy ratio for right eye (down iris region) using Legendre moment only, Legendre with LQP and Legendre appended with Legendre-with-LQP (5 enroll: 5 test)

Table 1: Recognition accuracy ratio when enrolment set is changed and testing set for left eye image (upper iris region) using
Legendre moment only
Legendre order 5 enroll: 5 test 6 enroll: 4 test 7 enroll: 3 test 8 enroll: 2 test 9 enroll: 1 test
6th 88.42 91.45 88.60 97.37 94.74
7th 90.53 92.11 89.47 97.37 94.74
8th 91.05 91.45 91.23 96.05 94.74
9th 91.58 93.42 91.23 96.05 94.74
10th 92.63 92.76 90.35 94.74 94.74
11th 91.58 91.45 90.35 96.05 94.74
12th 92.63 93.42 91.23 96.05 94.74
13th 92.11 93.42 89.47 94.74 94.74

Table 2: Recognition accuracy ratio when enrolment set is changed and testing set for left eye image (down iris region) using
Legendre moment only
Legendre order 5 enroll: 5 test 6 enroll: 4 test 7 enroll: 3 test 8 enroll: 2 test 9 enroll: 1 test
6th 92.11 91.45 94.74 96.05 100.00
7th 92.11 90.79 95.61 96.05 100.00
8th 93.68 92.11 96.49 96.05 100.00
9th 94.21 92.11 96.49 96.05 100.00
10th 93.16 91.45 96.49 96.05 100.00
11th 94.74 92.11 96.49 96.05 100.00
12th 94.74 92.76 95.61 96.05 100.00
13th 94.21 92.76 94.74 94.74 100.00

Table 3: Recognition accuracy ratio when enrolment set is changed and testing set for left eye image (two sides iris region) using
Legendre moment only
Legendre order 5 enroll: 5 test 6 enroll: 4 test 7 enroll: 3 test 8 enroll: 2 test 9 enroll: 1 test
6th 84.74 87.50 92.11 92.11 94.74
7th 88.95 91.45 94.74 93.42 94.74
8th 89.47 91.45 94.74 93.42 94.74
9th 92.63 95.39 95.61 93.42 94.74
10th 92.63 95.39 95.61 93.42 94.74
11th 92.11 93.42 96.49 94.74 94.74
12th 92.11 96.05 96.49 94.74 94.74
13th 92.63 94.74 94.74 94.74 94.74


Table 4: Recognition accuracy ratio when enrolment set is changed and testing set for left eye image (the circular region around the
pupil) using Legendre moment only
Legendre order 5 enroll: 5 test 6 enroll: 4 test 7 enroll: 3 test 8 enroll: 2 test 9 enroll: 1 test
6th 81.58 82.89 86.84 86.84 86.84
7th 84.74 86.18 88.60 89.47 92.11
8th 85.26 87.50 87.72 90.79 92.11
9th 87.37 88.16 88.60 92.11 94.74
10th 88.95 91.45 93.86 93.42 92.11
11th 90.00 90.79 91.23 92.11 94.74
12th 90.00 92.11 92.98 92.11 94.74
13th 91.05 92.11 92.11 90.79 94.74

Table 5: Recognition accuracy ratio when enrolment set is changed and testing set for left eye image (the circular region around the
pupil + Sides region) using Legendre moment only
Legendre order 5 enroll: 5 test 6 enroll: 4 test 7 enroll: 3 test 8 enroll: 2 test 9 enroll: 1 test
6th 78.42 79.61 82.46 84.21 94.74
7th 83.68 84.87 85.96 86.84 94.74
8th 84.21 85.53 88.60 89.47 94.74
9th 84.21 87.50 91.23 92.11 94.74
10th 84.74 86.84 90.35 90.79 92.11
11th 84.21 88.16 88.60 88.16 92.11
12th 84.21 87.50 87.72 86.84 92.11
13th 84.21 87.50 88.60 88.16 89.47

Table 6: Recognition accuracy ratio when enrolment set is changed and testing set for right eye image (upper iris region) using
Legendre moment only
Legendre order 5 enroll: 5 test 6 enroll: 4 test 7 enroll: 3 test 8 enroll: 2 test 9 enroll: 1 test
6th 85.00 84.72 88.89 91.67 94.44
7th 88.89 88.89 90.74 91.67 88.89
8th 90.56 90.97 90.74 93.06 94.44
9th 92.22 92.36 91.67 94.44 94.44
10th 92.22 92.36 92.59 93.06 91.67
11th 93.89 93.75 93.52 95.83 97.22
12th 92.78 93.06 93.52 94.44 94.44
13th 93.89 95.14 94.44 94.44 94.44

Table 7: Recognition accuracy ratio when enrolment set is changed and testing set for right eye image (down iris region) using
Legendre moment only
Legendre order 5 enroll: 5 test 6 enroll: 4 test 7 enroll: 3 test 8 enroll: 2 test 9 enroll: 1 test
6th 92.22 93.06 91.67 91.67 100.00
7th 91.11 90.28 90.74 93.06 97.22
8th 93.89 94.44 93.52 94.44 97.22
9th 93.89 94.44 93.52 94.44 94.44
10th 92.78 92.36 90.74 91.67 94.44
11th 92.78 93.06 91.67 91.67 97.22
12th 92.78 93.06 91.67 91.67 97.22
13th 93.89 95.14 94.44 95.83 97.22

Table 8: Recognition accuracy ratio when enrolment set is changed and testing set for right eye image (sides iris region) using
Legendre moment only
Legendre order 5 enroll: 5 test 6 enroll: 4 test 7 enroll: 3 test 8 enroll: 2 test 9 enroll: 1 test
6th 88.89 88.19 94.44 94.44 94.44
7th 90.56 89.58 94.44 94.44 94.44
8th 90.56 89.58 94.44 97.22 97.22
9th 90.00 89.58 94.44 95.83 94.44
10th 91.11 90.28 92.59 93.06 97.22
11th 92.22 91.67 94.44 93.06 94.44
12th 92.22 91.67 95.37 94.44 97.22
13th 92.78 91.67 94.44 93.06 94.44


Table 9: Recognition accuracy ratio when enrolment set is changed and testing set for right eye image (the circular region around the
pupil) using Legendre moment only
Legendre order 5 enroll: 5 test 6 enroll: 4 test 7 enroll: 3 test 8 enroll: 2 test 9 enroll: 1 test
6th 80.56 79.17 81.48 84.72 88.89
7th 82.78 81.94 85.19 90.28 94.44
8th 85.56 85.42 88.89 93.06 97.22
9th 87.22 86.81 87.96 90.28 94.44
10th 88.89 89.58 88.89 93.06 100.00
11th 88.33 89.58 90.74 93.06 100.00
12th 89.44 89.58 89.81 91.67 100.00
13th 89.44 89.58 89.81 90.28 97.22

Table 10: Recognition accuracy ratio when enrolment set is changed and testing set for right eye image (the circular region around the pupil + sides region) using Legendre moment only
Legendre order 5 enroll: 5 test 6 enroll: 4 test 7 enroll: 3 test 8 enroll: 2 test 9 enroll: 1 test
6th 81.67 83.33 85.19 86.11 88.89
7th 83.33 85.42 89.81 90.28 94.44
8th 85.00 86.11 88.89 91.67 94.44
9th 86.11 86.81 87.96 91.67 91.67
10th 86.11 88.19 88.89 91.67 91.67
11th 86.11 86.11 87.96 90.28 88.89
12th 87.22 86.81 87.04 90.28 91.67
13th 86.67 86.11 87.04 90.28 88.89

Table 11: Recognition accuracy ratio when enrolment set is changed and testing set for left eye image (best ROI (down)) using
Legendre moment only appended by Legendre moment with LQP.
Legendre order 5 enroll: 5 test 6 enroll: 4 test 7 enroll: 3 test 8 enroll: 2 test 9 enroll: 1 test
6th 95.26 95.39 99.12 97.37 100.00
7th 96.84 96.71 98.25 97.37 100.00
8th 97.37 96.71 98.25 97.37 100.00
9th 96.84 96.05 98.25 97.37 100.00
10th 97.89 98.03 98.25 97.37 100.00
11th 97.37 97.37 98.25 97.37 100.00
12th 97.89 97.37 98.25 97.37 100.00
13th 97.89 97.37 98.25 97.37 100.00

Table 12: Recognition accuracy ratio when enrolment set is changed and testing set for right eye image (best ROI (two sides)) using Legendre moment only appended by Legendre moment with LQP.
Legendre order 5 enroll: 5 test 6 enroll: 4 test 7 enroll: 3 test 8 enroll: 2 test 9 enroll: 1 test
6th 94.44 94.44 97.22 98.61 97.22
7th 96.11 94.44 97.22 97.22 97.22
8th 96.11 95.14 97.22 97.22 97.22
9th 96.67 95.14 97.22 97.22 97.22
10th 97.22 96.53 97.22 97.22 97.22
11th 96.67 95.83 97.22 97.22 97.22
12th 97.22 96.53 98.15 98.61 100.00
13th 97.22 96.53 98.15 98.61 100.00

Table 13: Feature count for each Legendre order and Average Recognition Time (ART) in seconds using Legendre moment only and Legendre appended with Legendre-with-LQP (best ROI (down))
ART Legendre only ART for Legendre with LQP
----------------------------------------------------------- ----------------------------------------------------------------
Legendre order Feature count Left eye Right eye Feature count Left eye Right eye
6th 25 0.65 0.69 50 0.81 0.80
7th 33 0.75 0.73 66 0.97 0.96
8th 43 0.90 0.87 84 1.23 1.23
9th 52 1.12 1.10 104 1.69 1.70
10th 63 1.49 1.47 126 2.36 2.37
11th 75 2.10 2.08 150 3.72 3.50
12th 88 3.09 2.97 176 5.43 5.48
13th 102 4.54 4.65 204 8.48 8.39


As shown in Tables 11 and 12, there are three steps in the feature extractor. In the first step, the image is fed to the Legendre algorithm to produce a vector of features (named V1). In the second, the image is fed to the LQP algorithm to produce an array of coefficients, which is then given as input to the Legendre algorithm to produce a second vector of features (V2). Finally, a fusion stage appends V1 to V2 to produce the final vector of all features (named V3). The best results, with the highest recognition rates, were obtained with this fusion.

Conclusion and Future Work

 The best order for the Legendre moments is the 10th, which gives a high recognition rate as well as resistance to image variation
 The best result was obtained with the down part of the left eye, while for the right eye the highest recognition rate was achieved with the two-sides region, as shown in Figs. 11 and 12
 The fusion of Legendre and LQP features gave the best results with minimal errors
 In the future, we intend to apply the proposed approach to other databases

Fig. 11: Accuracy ratio for the left eye over different regions (down, upper, sides, circular region around the pupil and circular region around the pupil + sides) using Legendre only appended with Legendre-with-LQP (5 enroll: 5 test)

Fig. 12: Accuracy ratio for the right eye over different regions (down, upper, sides, circular region around the pupil and circular region around the pupil + sides) using Legendre only appended with Legendre-with-LQP (5 enroll: 5 test)


Author’s Contributions

Asaad Noori Hashim: Contributed to the analysis and simulation of the proposed algorithm; also contributed to the write-up and language revision.

Bushraa Mahdi Al-Hashimi: Contributed to reviewing the literature, identifying the research gap and revising the results.

Ethics

This paper is genuine and includes unpublished material. The corresponding author confirms that the coauthor has read and approved the manuscript and that no ethical issues are involved.

References

Al-Jawahry, H.M. and H.R. Mohammed, 2019. Local Quadrant Pattern with Co-occurrence Matrix (LQP-CM): Hybrid method for image classification and feature extraction. J. Eng. Applied Sci., 14: 2171-2176. DOI: 10.3923/jeasci.2019.2171.2176

Al-Juburi, B.J.A., H.R. Mohammed and A.N.H. Al-Shareefi, 2017. Iris recognitions identification and verification using hybrid techniques. Res. J. Applied Sci. Eng. Technol., 14: 473-482. DOI: 10.19026/rjaset.14.5150

Daugman, J.G., 1993. High confidence visual recognition of persons by a test of statistical independence. IEEE Trans. Pattern Anal. Mach. Intell., 15: 1148-1161. DOI: 10.1109/34.244676

Gnana, P.R., V.M. Ravi and K.M. Sriraam, 2018. Iris recognition using visible images based on the fusion of Daugman's approach and Hough transform. Proceedings of the 2nd International Conference on Biometric Engineering and Applications, May 16-18, ACM, pp: 24-29. DOI: 10.1145/3230820.3230825

Hosaini, S.J., S. Alirezaee, M. Ahmadi and S.V.A.D. Makki, 2013. Comparison of the Legendre, Zernike and Pseudo-Zernike moments for feature extraction in iris recognition. Proceedings of the 5th International Conference on Computational Intelligence and Communication Networks, Sept. 27-29, IEEE Xplore Press, Mathura, India, pp: 225-228. DOI: 10.1109/CICN.2013.54

Hu, M.K., 1962. Visual pattern recognition by moment invariants. IRE Trans. Inform. Theory, 8: 179-187. DOI: 10.1109/TIT.1962.1057692

Jain, B., M.K. Gupta and J. Bharti, 2012. Efficient iris recognition algorithm using method of moments. Int. J. Artificial Intell. Applic.

Kaur, B., S. Singh and J. Kumar, 2018. Robust iris recognition using moment invariants. Wireless Personal Commun., 99: 799-828. DOI: 10.1007/s11277-017-5153-8

Mabrukar, S.S., N.S. Sonawane and J.A. Bagban, 2013. Biometric system using iris pattern recognition. Int. J. Innovat. Technol. Explor. Eng., 2: 54-57.

Oujaoura, M., B. Minaoui and M. Fakir, 2014. Image annotation by moments. Moments and Moment Invariants - Theory and Applications, 1: 227-252. DOI: 10.15579/gcsr.vol1.ch10

Prasad, M.R., T.C. Manjunath, M.D.A. Bhyratae and N. Kumar, 2018. Design and development of iris biometric systems - an exhaustive review summary and problem formulation. Int. J. Res., 7: 1-14.

Rao, L.K. and D.V. Rao, 2015. Local quantized extrema patterns for content-based natural and texture image retrieval. Human-Centric Comput. Inform. Sci., 5: 26. DOI: 10.1186/s13673-015-0044-z

Sari, Y., M. Alkaff and R.A. Pramunendar, 2018. Iris recognition based on distance similarity and PCA. Proceedings of the 4th International Conference on Engineering, Technology and Industrial Application (TIA' 18), 020044. DOI: 10.1063/1.5042900

Sarmah, A. and C.J. Kumar, 2013. Iris verification using Legendre moments and KNN classifier. Int. J. Eng. Sci. Invent., 2: 52-59.

ul Hussain, S. and B. Triggs, 2012. Visual recognition using local quantized patterns. Proceedings of the 12th European Conference on Computer Vision, Oct. 07-13, Springer-Verlag Berlin, pp: 716-729.
