Iris Recognition: An Emerging Security Environment For Human Identification


ISSN:2229-6093

M Daris Femila et al, Int. J. Comp. Tech. Appl., Vol 2 (6), 3023-3028



M. Daris Femila (1), A. Anthony Irudhayaraj (2)
(1) Assistant Professor, Department of Computer Science, SRM Arts and Science College, Chennai, India
(2) Dean, Information Technology, Aarupadai Veedu Institute of Technology, Chennai, India
(1) darisbennison@gmail.com
(2) anto_irud@hotmail.com

Abstract

Unlike other biometrics such as fingerprints and face, the distinct aspect of the iris comes from its randomly distributed features. This leads to its high reliability for personal identification and, at the same time, the difficulty in effectively representing such details in an image. The iris is a protected internal organ whose random texture is stable throughout life; it can serve as a kind of living password that one need not remember but always carries along. Because the randomness of iris patterns has very high dimensionality, recognition decisions are made with confidence levels high enough to support rapid and reliable exhaustive searches through national-sized databases. Iris recognition has been shown to be very accurate for human identification. This paper proposes a technique for iris pattern extraction utilizing the graph cut method, by which the pupilary boundary of the iris is determined. The limbic boundary is identified by an adaptive thresholding method. The iris normalization is invariant to translation, rotation and scale after mapping into polar coordinates. The proposed method has an encouraging performance and success rate of localization and normalization, and it reduces the system operation time. The proposed method involves Graph cut, Adaptive thresholding and Normalization modules.

Keywords: iris, pattern, identification, thresholding, pupilary, normalization

1. Introduction

Biometrics is the science of measuring physical properties of living beings. It is a collection of automated methods to recognize an individual person based upon a physiological or behavioral characteristic. The characteristics measured are face, fingerprints, hand geometry, handwriting, iris, retina, vein, voice etc. In the present technology scenario, biometric technologies are becoming the foundation of an extensive array of highly secure identification and personal verification solutions. As the level of security breaches and transaction fraud increases, the need for highly secure identification and personal verification technologies is becoming apparent. Biometrics involves using different parts of the body, such as the fingerprint or the eye, as a password or form of identification. Currently, in crime investigations, fingerprints from a crime scene are used to find a criminal. However, biometrics is becoming more public. Iris scans are used in the United Kingdom at ATMs instead of the normal codes. In Andhra Pradesh, iris recognition is being used to issue household ration cards.

Practically all biometric systems work in the same manner. First, a person is enrolled into a database using the specified method, and information about a certain characteristic of the person is captured. This information is usually passed through an algorithm that turns it into a code that the database stores. When the person needs to be identified, the system takes the information about the person again, translates this new information with the algorithm, and then compares the new code with the ones in the database to discover a match and hence an identification.

Biometrics works by unobtrusively matching patterns of live individuals in real time against enrolled records. Leading examples are biometric technologies that recognize and authenticate faces, hands, fingers, signatures, irises, voices, and fingerprints. Biometric data are separate and distinct from personal information. Biometric templates cannot be reverse-engineered to recreate personal information, and they cannot be stolen and used to access personal information.

2. Iris

The iris has been historically recognized to possess characteristics unique to each individual. In the mid 1980s, two ophthalmologists, Dr. Leonard Flom and Aran Safir, proposed the concept that no two irises are alike [6]. They researched and documented the potential of using the iris for

IJCTA | NOV-DEC 2011 3023


Available online@www.ijcta.com

identifying people and were awarded a patent in 1987. Soon after, the intricate and sophisticated algorithm that brought the concept to reality was developed by Dr. John Daugman and patented in 1994 [3].

2.1. Features of the Iris

The human iris is rich in features that can be used quantitatively to distinguish one eye from another. The iris contains many collagenous fibers, contraction furrows, coronas, crypts, colors, serpentines, vasculature, striations, freckles, rifts, and pits. Measuring the patterns of these features and their spatial relationships to each other provides quantifiable parameters for the identification process. Statistical analyses indicated that the Iridian Technologies process uses independent measures of variation to distinguish one iris from another. It allows iris recognition to identify persons with an accuracy a magnitude greater than any other biometric system.

2.2. Uniqueness of the Iris

The iris is unique due to the chaotic morphogenesis of that organ. Dr. John Daugman stated that "An advantage the iris shares with fingerprints is the chaotic morphogenesis of its minutiae. The iris texture has chaotic dimension because its details depend on initial conditions in embryonic genetic expression; yet, the limitation of partial genetic penetrance (beyond expression of form, function, color and general textural quality) ensures that even identical twins have uncorrelated iris minutiae. Thus the uniqueness of every iris, including the pair possessed by one individual, parallels the uniqueness of every fingerprint regardless of whether there is a common genome".

2.3. Stability of the Recognition

An iris is not normally contaminated with foreign material, and, human instinct being what it is, the iris, or eye, is one of the most carefully protected organs in one's body. In this environment, and not subject to the deleterious effects of aging, the features of the iris remain stable and fixed from about one year of age until death. The human eye has physiological properties that can be exploited to impede the use of images and artificial devices to spoof the system. The iris is perforated close to its centre by a circular aperture known as the pupil. The function of the iris is to control the amount of light entering through the pupil, and this is done by the sphincter and the dilator muscles, which adjust the size of the pupil. The average diameter of the iris is 12 mm, and the pupil size can vary from 10% to 80% of the iris diameter.

The iris consists of a number of layers, the lowest of which is the epithelium layer, containing dense pigmentation cells. The stromal layer lies above the epithelium layer and contains blood vessels, pigment cells and the two iris muscles. The density of stromal pigmentation determines the colour of the iris. The externally visible surface of the multilayered iris contains two zones, which often differ in color: an outer ciliary zone and an inner pupillary zone, divided by the collarette, which appears as a zigzag pattern.

The iris is the plainly visible, colored ring that surrounds the pupil. It is a muscular structure that controls the amount of light entering the eye, with intricate details that can be measured, such as striations, pits, and furrows. The iris is not to be confused with the retina, which lines the inside of the back of the eye. Figure 1 shows the characteristics of the human eye. No two irises are alike: there is no detailed correlation between the iris patterns of even identical twins, or between the right and left eye of an individual. The amount of information that can be measured in a single iris is much greater than in fingerprints, and the accuracy is greater than that of DNA.

Iris: This is the colored part of the eye: brown, green, blue, etc. It is a ring of muscle fibers located behind the cornea and in front of the lens.

Pupil: The pupil is the hole in the center of the iris that light passes through. The iris muscles control its size.

Sclera: The sclera is the white, tough wall of the eye. Along with internal fluid pressure, it keeps the eye's shape and protects its delicate internal parts.

Figure 1: Structure of a human eye

Recently, Du et al. designed a local texture analysis algorithm to calculate the local variances of iris images and generate a one-dimensional iris signature, which relaxed the requirement of the entire iris for identification and recognition [7][8]. However, all of these algorithms assume that a circular iris pattern has been successfully extracted from a captured image, and they are very complex and take a long time for code extraction and code matching against the database. This paper proposes new and simpler methods for iris localization and iris normalization compared with other algorithms used for iris recognition.


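As a toy illustration of the one-dimensional local-variance signature idea attributed to Du et al. above, the sketch below slides a small window along one row of intensities and records the variance in each window. The window size and the intensity row are invented for this example and are not taken from the cited work.

```python
from statistics import pvariance

def variance_signature(row, window=3):
    """Slide a window along one row of intensities and record the local
    variance, giving a 1-D texture signature of that row."""
    return [pvariance(row[i:i + window])
            for i in range(len(row) - window + 1)]

# A made-up row of iris-band intensities: a flat region, then texture.
row = [100, 100, 100, 100, 140, 60, 180, 20]
sig = variance_signature(row)

# Flat windows give zero variance; textured windows give large values,
# so the signature localizes where the discriminative texture lies.
print(sig[0] == 0 and sig[-1] > sig[0])  # True
```

A signature like this can be compared between two eye images without requiring the whole iris to be segmented, which is the relaxation the cited algorithm exploits.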
3. Methodology

This paper deals with the generation of a stable key from an iris image, carried out using an iris database. The input image is subjected to segmentation to detect the two circles, i.e., the iris/sclera boundary and the iris/pupil boundary. The resultant image is normalized to produce the iris region. The proposed method involves three modules, namely i) Graph cut method, ii) Adaptive thresholding and iii) Normalization.

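Before the modules are described in detail, the overall flow can be sketched end to end. Everything below is a toy stand-in: the 4x8 synthetic image, the single-row "normalization", the 8-bit template and all function names are inventions for illustration, not the paper's implementation.

```python
# Toy sketch of the localize -> normalize -> encode -> match pipeline.

def localize_pupil(img):
    """Return (row, col) of the darkest pixel as a crude pupil center."""
    best = min((v, r, c) for r, line in enumerate(img)
               for c, v in enumerate(line))
    return best[1], best[2]

def normalize_band(img, center, length=8):
    """Sample a horizontal band through the center (a stand-in for the
    polar rubber-sheet unwrapping described later in the paper)."""
    r, _ = center
    return img[r][:length]

def encode(band, threshold=128):
    """Binarize the band into a bit template."""
    return [1 if v >= threshold else 0 for v in band]

def hamming(a, b):
    """Fraction of disagreeing bits between two equal-length templates."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

# A tiny 4x8 synthetic "eye" with a dark pupil pixel in row 2.
eye = [
    [200, 210, 220, 215, 205, 210, 220, 230],
    [190, 180, 170, 160, 170, 180, 190, 200],
    [150, 100,  20,  60, 140, 200, 220, 240],
    [210, 220, 230, 235, 240, 245, 250, 255],
]

center = localize_pupil(eye)
template = encode(normalize_band(eye, center))
print(center)                       # (2, 2) -> the darkest pixel
print(template)                     # [1, 0, 0, 0, 1, 1, 1, 1]
print(hamming(template, template))  # 0.0 -> same iris, accept
```

Each stage of this sketch corresponds to one of the sections that follow: localization (graph cut and adaptive thresholding), normalization, pattern recognition and pattern matching.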
3.1. Introduction to Iris Recognition

Iris recognition technology combines computer vision, pattern recognition, statistical inference, and optics. The iris is an externally visible yet protected organ whose unique epigenetic pattern remains stable throughout adult life. These characteristics make it very attractive for use as a biometric for identifying individuals. Image processing techniques can be employed to extract the unique iris pattern from a digitized image of the eye and encode it into a biometric template, which can be stored in a database. This biometric template contains an objective mathematical representation of the unique information stored in the iris and allows comparisons to be made between templates.

When a subject wishes to be identified by an iris recognition system, their eye is first photographed and a template is created for their iris region. This template is then compared with the other templates stored in a database until either a matching template is found and the subject is identified, or no match is found and the subject remains unidentified. Iris recognition allows the user hands-free operation. It has the highest proven accuracy, with no false matches in over two million cross-comparisons, according to the Biometric Testing Final Report. It also allows high speed for large populations; the subject just looks into a camera for a few seconds. The iris is stable for each individual throughout his or her life and does not change with age. The weaknesses are intrusiveness, high cost, and sensitivity to contact lenses, sunglasses and optical glasses.

Figure 2: The overall methodology flowchart

4. Steps involved

The first step towards achieving a homogeneous region is setting the values of pixels below 60 and above 240 equal to 255. By doing this we can easily identify the iris boundaries. The purpose of adjusting these values is to reduce the effect of specularities that may be present in the pupil. The input is a captured eye image and the output is a homogenized image.

4.1. Iris localization

Iris localization involves pupilary boundary detection and limbic boundary detection. The method used for the detection of the pupilary boundary (inner boundary) is the graph cut method. Even under ideal imaging conditions the pupil boundary is not a perfect circle, and in many cases a small area of the pupil is taken as the iris area by traditional methods. Although the captured area is small, considering the fact that most of the iris patterns exist in the collarette area, which is a small area surrounding the pupil, the error of inaccurate segmentation will be significant. Therefore a method to accurately detect the pupil boundary is highly required. The graph cut method introduced by Y. Boykov [16] is an efficient segmentation method based on energy minimization. This method considers the image as a graph and searches for a cut in the graph that has minimum energy. The min-cut/max-flow method is commonly used for the purpose of energy minimization [15]. Graph cut based iris segmentation solves the problem of off-angle imaging and also the non-circularity of the pupil, which is one of the main sources of error in iris recognition, known as pupil error.

Graph cut theory is used to minimize the energy function defined to segment the input eye image. Consider a graph G = (V, E) in which V is the set of nodes and E is the set of edges that construct the graph. G is called an undirected graph if the change in the cost function from one node to another is direction independent. An example of a


graph is shown in figure 3. We define two terminals for the graph: a source (s) and a sink (t). These two terminals are the main nodes in the graph and are defined by the user. The maximum cost (weight) in the graph is given to these terminal nodes. Nodes other than the terminals are assigned nonnegative weights less than or equal to the weights of the two terminals. A subset C (C ⊂ E) is called a cut if it divides V into two separate sets S and T (where T = V - S) in a way that s ∈ S and t ∈ T (s and t are the two terminals of the graph). The cost of a cut is defined as the sum of the costs of its edges. The minimum cut problem, or the problem of minimizing the cost function, is solved by finding the cut with minimum cost or energy. The cost is defined as

|C| = Σ(ei,j ∈ C) wi,j

where ei,j is the edge or link connecting the two vertices i and j, and wi,j is the weight associated with this edge. Several methods [15] have been introduced to solve the minimum cut problem in polynomial time. To segment an image using the graph cut method, the pixels of the image are considered as the nodes of the graph. The edges represent the relationship between neighboring nodes or pixels, and a cut represents a partitioning of the image constructed via these nodes. Finding a minimum cut for the image graph results in a partitioning of the image which is optimal in terms of the cost function defined for the cut.

Figure 3: An example of a 2D graph showing two terminals named "source" and "sink", and the cut separating the regions. Thick lines connect the terminal to the pixels of the same region, while the thin lines show its connection to the pixels from the other region (only a few of the links are shown in the figure).

To segment the image, the terms of a graph such as the vertices, source, etc. are defined for the image. The pixels of the image are defined as the vertices of the graph. All neighboring pairs of pixels of the image are assumed to be connected to each other with a link, and these links are called the edges. The capacity of each link is defined in terms of the sharpness of the edge existing between the pixels, where the sharpness of an edge is defined by the difference between their intensity values. The label O or "object" can be assigned to a set of pixels to specify the source or object terminal, and the label B or "background" can be assigned to another set representing the sink or background pixels. The goal is to find a cut, i.e. a set of edges, that separates the object and background sets in such a way that the cut has minimum cost. To perform the minimization process the cost or energy function is defined. The general form of the energy function is as follows:

E(L) = Σp Dp(Lp) + Σ(p,q)∈N Vp,q(Lp, Lq)

where L is a labeling of the pixels and N is the set of neighboring pixel pairs. The Dp cost is defined as

Dp(Lp) = Max if pixel p was assigned label Lp during the initial labeling, and 0 otherwise

where Max is the large positive value that is assigned to sink and source terminals during the initial labeling process. The cost function Vp,q is demonstrated as

Vp,q = exp(-(Ip - Iq)² / (2σ²)) · 1 / dist(p,q)

where Ip is the intensity value of the pixel p and dist(p,q) is the distance between pixel p and pixel q. The term σ is the variance of pixel intensity values inside the object. In the proposed method a σ value per cluster is calculated for the whole image, and these values are used in the rest of the process.

Figure 4: Original image

The described graph cut segmentation algorithm is applied to eye images that are taken for iris recognition purposes, to segment the image and detect the pupil boundary precisely. Knowing that the pupil is a dark region in any eye, one can assume the gray level of its pixels to be close to zero. Since all regions of the image, except for the eyelashes and the pupil, have high gray values, there is a need to remove the effect of the eyelashes in the picture. The pixels with small gray level values are marked as potential vertices to be labeled as

the source or object vertices of the graph. To detect and eliminate the pixels related to the eyelashes from the pupil pixels, the method given in [17] is applied. This method uses the difference between the pixel intensity value and the mean gray level of its neighboring pixels to decide whether or not it is an eyelash pixel.

By using the adaptive thresholding technique we can determine the limbic boundary. Note that the iris texture is brighter than the sclera; by finding the difference between these two regions we find the limbic boundary value, so that we can recognize the limbic boundary. We obtain the midpoint of the limbic boundary and the radius of the limbus. By calculating the distance between each pixel coordinate of the image and the midpoint coordinates of the limbus, and comparing with the original limbic radius, we get the radius of the limbic boundary.

Figure 5: Localized image

4.2. Iris normalization

Iris normalization and enhancement involve converting between the polar and Cartesian coordinate systems. Converting the iris region from Cartesian coordinates to the normalized non-concentric polar representation is modeled as

I(x(r, ø), y(r, ø)) → I(r, ø)

with

x(r, ø) = (1 - r)Xp(ø) + rXi(ø)
y(r, ø) = (1 - r)Yp(ø) + rYi(ø)

where I(x, y) is the iris region image, (x, y) are the original Cartesian coordinates, (r, ø) are the corresponding normalized polar coordinates, and (Xp, Yp) and (Xi, Yi) are the coordinates of the pupil and iris boundaries along the ø direction.

Note: ø varies from 0 to 360, and r varies from 0 to Ri - Rp, where Ri = radius of iris and Rp = radius of pupil.

Figure 6: Normalized image

4.3. Pattern Recognition

The texture near the pupilary boundary and the limbic boundary inside the iris has some errors, so we take the middle row of the iris in order to overcome them, and convert the middle row of bits into a hexadecimal code. In our project the number of possible secret codes is 16^90. We consider the 360 bits of the middle row of an enhanced image and convert these bits into hexadecimal code:

F = 1001 1110 1100 … 0110 1100 0101 1101 1011 → 9EC…6C5DB

(360-bit code) → (90 hexadecimal digits)

4.4. Pattern Matching

The hexadecimal code is taken from the database and converted into bits. The comparison is done by computing the Hamming distance between the two codes. The Hamming distance between an iris code A and another code B is given by

HD = (1/N) Σ(j=1..N) Aj ⊕ Bj

4.5. Hamming Distance

Given two patterns A and B, the Hamming distance is the sum of disagreeing bits (the sum of the exclusive-OR between A and B) divided by N, the total number of bits in the pattern. If two patterns are derived from the same iris, the Hamming distance between them will be close to 0.0 and the match is accepted; otherwise it is rejected.

5. Conclusion

Iris boundaries are recognized using simple methods and algorithms that are less complex and faster than previous ones, eliminating pupilary noises and reflections. Homogenization removes the specularities of the pupil. A method based on graph cuts was presented to segment the pupil region in an eye image for iris recognition purposes, and thus we can recognize the pupilary boundary (inner boundary) accurately.

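The min-cut machinery referred to above can be made concrete with a small max-flow computation, since by the max-flow/min-cut theorem the maximum flow from source to sink equals the cost of the minimum cut. The 4-node graph and its weights below are invented for illustration and are not the paper's actual graph construction.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max-flow on a capacity matrix; the returned value
    equals the cost of the minimum s-t cut."""
    n = len(cap)
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph.
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow  # no augmenting path left
        # Find the bottleneck and push flow along the path.
        path, v = [], t
        while v != s:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= aug
            cap[v][u] += aug
        flow += aug

# Nodes: 0 = source terminal, 3 = sink terminal, 1 and 2 are "pixels".
# Terminal links are strong (weight 10); the weak 1-2 link (weight 1)
# is the cheap place to cut, so pixels 1 and 2 end up in different
# regions, mimicking a cut along a sharp intensity edge.
cap = [
    [0, 10, 0, 0],
    [0,  0, 1, 0],
    [0,  0, 0, 10],
    [0,  0, 0, 0],
]
print(max_flow(cap, 0, 3))  # 1 -> the min cut severs the weak edge
```

In the segmentation setting, pixels carry strong links to the terminal matching their initial label and weak links across sharp intensity edges, so the minimum cut falls on the pupil boundary.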

The adaptive threshold method can find the limbic radius and limbic mid-point. By solving these parameters in the circle equation, we can recognize the limbic boundary (outer boundary) accurately. The region between the inner and outer boundaries is the iris; it is in polar form and is converted into linear form by converting between the polar and Cartesian coordinate systems. Converting the iris region from Cartesian coordinates to the normalized non-concentric polar representation yields the normalized image. By doing enhancement, a logical image whose length is 360 and whose breadth is the difference between the outer and inner boundaries is produced. The texture near the limbic and pupilary boundaries inside the iris has some noise due to eyelashes and eyelids; by taking the middle row of the enhanced image, a secret code is extracted from it. The secret code is converted into a hexadecimal code of length 90. The Hamming distance is used for pattern matching. The method can give 16^90 different iris codes, and it can overcome the noise caused by the pupil in the image. In this graph cut method only the gray level information of the images was used to perform the segmentation. For future work the method can be expanded to evaluate color images.

REFERENCES

[1] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Pearson Education.

[2] J. Forrester, A. Dick, P. McMenamin, and W. Lee, The Eye: Basic Sciences in Practice, W. B. Saunders, London, 2001.

[3] J. Daugman, "How Iris Recognition Works", IEEE Transactions on Circuits and Systems for Video Technology, Vol. 14, No. 1, pp. 21-30, 2004.

[4] Y. Du, R. W. Ives, and D. M. Etter, "Iris Recognition", a chapter on biometrics in The Electrical Engineering Handbook, 3rd Edition, Boca Raton, FL: CRC Press, 2004 (in press).

[5] J. D. Woodward, N. M. Orlans, and P. T. Higgins, Biometrics, McGraw-Hill, California, U.S.A., 2002.

[6] L. Flom and A. Safir, United States Patent No. 4,641,349, Iris Recognition System, Washington D.C.: U.S. Government Printing Office.

[7] Y. Du, R. W. Ives, D. M. Etter, T. B. Welch, and C.-I. Chang, "One Dimensional Approach to Iris Recognition", Proceedings of SPIE, Volume 5405, pp. 237-247, Apr. 2004.

[8] Y. Du, R. W. Ives, D. M. Etter, and T. B. Welch, "Use of One Dimensional Iris Signatures to Rank Iris Pattern Similarities", submitted to Optical Engineering, 2004.

[9] R. P. Wildes, J. C. Asmuth, G. L. Green, S. C. Hsu, R. J. Kolczynski, J. R. Matey, and S. E. McBride, "A Machine Vision System for Iris Recognition", Machine Vision and Applications, Vol. 9, pp. 1-8, 1996.

[10] W. W. Boles and B. Boashash, "A Human Identification Technique Using Images of the Iris and Wavelet Transform", IEEE Transactions on Signal Processing, Vol. 46, No. 4, 1998.

[11] Y.-P. Huang, S.-W. Luo, and E.-Y. Chen, "An Efficient Iris Recognition System", Proceedings of the First International Conference on Machine Learning and Cybernetics.

[12] L. Ma, Y. Wang, and T. Tan, "Iris Recognition Using Circular Symmetric Filters", 16th International Conference on Pattern Recognition, Vol. 2, pp. 414-417, 2002.

[13] J. G. Daugman, "Biometric Personal Identification System Based on Iris Analysis", U.S. Patent 5,291,560, 1994.

[14] R. C. Gonzalez, R. E. Woods, and S. L. Eddins, Digital Image Processing Using MATLAB, Prentice Hall, Upper Saddle River, NJ, 2004.

[15] Y. Boykov and V. Kolmogorov, "An Experimental Comparison of Min-Cut/Max-Flow Algorithms for Energy Minimization in Vision", IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), Vol. 26, No. 9, pp. 1124-1137, Sept. 2004.

[16] Y. Boykov and M.-P. Jolly, "Interactive Graph Cuts for Optimal Boundary and Region Segmentation of Objects in N-D Images", Proc. Intl. Conf. on Computer Vision, Vol. I, pp. 105-112, 2001.

[17] H. Mehrabian, A. Poursaberi, and B. N. Araabi, "Iris Boundary Detection for Iris Recognition Using Laplacian Mask and Hough Transform", 4th Machine Vision and Image Processing Conference, Iran, 2007.

[18] A. J. Bron, R. C. Tripathi, and B. J. Tripathi, Wolff's Anatomy of the Eye and Orbit, 8th Edition, Chapman and Hall Medical, London, 1997.

