Biometric Authentication
Submitted in partial fulfilment
for the award of the degree of
Bachelor of Technology
In
Engineering
SANJEEVAN ENGINEERING AND
TECHNOLOGY INSTITUTE, PANHALA.
Academic Year (2023-2024)
Semester: Second.
PRN No: 23063151242070.
Roll No: 108
CERTIFICATE
He/She has undergone the requisite work as prescribed by Sanjeevan
Engineering & Technology Institute, Panhala.
Place:-
Date:-
ACKNOWLEDGEMENT
This is an opportunity to express my heartfelt gratitude to the people who were part of
this seminar in numerous ways, people who gave me unending support right from the
beginning of the seminar.
I want to give sincere thanks to the Principal, Dr. Sanjeev Jain, for his valuable
support.
I extend my thanks to Prof. Sudhir P. Nangare, Head of the Department, for his
constant support.
I express my deep sense of gratitude to the Vice Principal, Dr. S. G. Sapate, for his
continuous cooperation and encouragement, and for the esteemed guidance provided by
Prof. N. G. Khan.
Yours Sincerely,
BHETE OMKAR HANMANTRAO
To achieve more reliable verification or identification we should use something that really
characterizes the given person. Biometrics offers automated methods of identity verification or
identification on the principle of measurable physiological or behavioural characteristics such as a
fingerprint or a voice sample. The characteristics are measurable and unique. These characteristics
should not be duplicable, but unfortunately it is often possible to create a copy that is accepted by
the biometric system as a true sample.
This report gives an overview of what a biometric system is, followed by a detailed overview
of iris recognition.
1. INTRODUCTION
Biometrics are automated methods of identifying a person or verifying the identity of a person
based on a physiological or behavioural characteristic. Biometric-based authentication is
automatic identity verification based on individual physiological or behavioural characteristics, such
as fingerprints, voice, face and iris. Since biometric traits are extremely difficult to forge and cannot
be forgotten or stolen, biometric authentication offers a convenient, accurate and highly
secure alternative for an individual, which gives it advantages over traditional cryptography-
based authentication schemes. It has become a hot interdisciplinary topic involving biometrics and
cryptography. Biometric data is private personal information: it is uniquely and permanently
associated with a person and cannot be replaced like passwords or keys. Once an adversary
compromises the biometric data of a user, the data is lost forever, which may lead to huge
financial loss. Hence, one major concern is how a person's biometric data, once collected, can be
protected.
Daugman's algorithms are owned by Iridian Technologies, and the process is licensed to several
other companies who serve as system integrators and developers of special platforms exploiting
iris recognition. In recent years several products have been developed for acquiring iris images over
a range of distances and in a variety of applications. One active imaging system, developed in 1996
by licensee Sensar, deployed special cameras in bank ATMs to capture iris images at a distance of
up to 1 meter. This active imaging system was installed in cash machines by both NCR Corp. and
by Diebold Corp. in successful public trials in several countries during 1997 to 1999. A newer and
smaller imaging device is the low-cost Panasonic Authenticam digital camera for handheld,
desktop, e-commerce and other information security applications. Ticketless air travel, check-in
and security procedures based on iris recognition kiosks in airports have been developed by
EyeTicket. Companies in several countries are now using Daugman's algorithms in a variety of
products.
Face Recognition: The identification of a person by their facial image can be done in a number of
different ways, such as by capturing an image of the face in the visible spectrum using an
inexpensive camera, or by using the infrared patterns of facial heat emission. Facial recognition in
visible light typically models key features from the central portion of a facial image. Using a wide
assortment of cameras, the visible-light systems extract features from the captured image(s) that
do not change over time, while avoiding superficial features such as facial expressions or hair.
Speaker Recognition: Speaker recognition uses the acoustic features of speech that have been
found to differ between individuals. These acoustic patterns reflect both anatomy and learned
behavioral patterns. This incorporation of learned patterns into the voice templates has earned
speaker recognition its classification as a "behavioral biometric." Speaker recognition systems
employ three styles of spoken input: text-dependent, text-prompted and text independent. Most
speaker verification applications use text-dependent input, which involves selection and
enrollment of one or more voice passwords. Text-prompted input is used whenever there is
concern of imposters. The various technologies used to process and store voiceprints include
hidden Markov models, pattern matching algorithms, neural networks, matrix representation and
decision trees.
Iris Recognition: This recognition method uses the iris of the eye which is the colored area that
surrounds the pupil. Iris patterns are thought unique. The iris patterns are obtained through a
video-based image acquisition system. Iris scanning devices have been used in personal
authentication applications for several years. Systems based on iris recognition have substantially
decreased in price and this trend is expected to continue. The technology works well in both
verification and identification modes.
Hand and Finger Geometry: To achieve personal authentication, a system may measure either
physical characteristics of the fingers or the hands. These include length, width, thickness and
surface area of the hand. One interesting characteristic is that some systems require a small
biometric sample. It can frequently be found in physical access control in commercial and
residential applications, in time and attendance systems and in general personal authentication
applications.
Signature Verification: This technology uses the dynamic analysis of a signature to authenticate a
person. The technology is based on measuring speed, pressure and angle used by the person when
a signature is produced. One focus for this technology has been e-business applications and other
applications where signature is an accepted method of personal authentication.
2. IRIS RECOGNITION
2.1 INTRODUCTION
Iris recognition systems, in particular, are gaining interest because the iris’s rich texture offers a
strong biometric clue for recognizing individuals. Located just behind the cornea and in front of
the lens, the iris uses the dilator and sphincter muscles that govern pupil size to control the amount
of light that enters the eye. Near-infrared (NIR) images of the iris’s anterior surface exhibit complex
patterns that computer systems can use to recognize individuals. Because NIR lighting can
penetrate the iris’s surface, it can reveal the intricate texture details that are present even in dark-
colored irises. The iris’s textural complexity and its variation across eyes have led scientists to
postulate that the iris is unique across individuals. Further, the iris is the only internal organ readily
visible from the outside. Thus, unlike fingerprints or palm prints, environmental effects cannot
easily alter its pattern. An iris recognition system uses pattern matching to compare two iris images
and generate a match score that reflects their degree of similarity or dissimilarity.
2.2 Anatomy of the Human Iris
The iris is a thin circular diaphragm, which lies between the cornea and the lens of
the human eye. The iris is perforated close to its centre by a circular aperture known as the pupil.
The function of the iris is to control the amount of light entering through the pupil, and this is done
by the sphincter and the dilator muscles, which adjust the size of the pupil. The average diameter
of the iris is 12 mm, and the pupil size can vary from 10% to 80% of the iris diameter.
The iris consists of a number of layers; the lowest is the epithelium layer, which
contains dense pigmentation cells. The stromal layer lies above the epithelium layer, and contains
blood vessels, pigment cells and the two iris muscles. The density of stromal pigmentation
determines the colour of the iris. The externally visible surface of the multi-layered iris contains
two zones, which often differ in colour: an outer ciliary zone and an inner pupillary zone. These
two zones are divided by the collarette, which appears as a zigzag pattern.
Formation of the iris begins during the third month of embryonic life.
The unique pattern on the surface of the iris is formed during the first year of
life, and pigmentation of the stroma takes place over the first few years. Formation of the unique
patterns of the iris is random and not related to any genetic factors. The only characteristic that is
dependent on genetics is the pigmentation of the iris, which determines its colour. Due to the
epigenetic nature of iris patterns, the two eyes of an individual contain completely independent
iris patterns, and identical twins possess uncorrelated iris patterns.
2.3 STAGES INVOLVED IN IRIS DETECTION
IMAGE ACQUISITION
One of the major challenges of automated iris recognition is to capture a high-quality image of the
iris while remaining non-invasive to the human operator. The main concerns for the image
acquisition rig are:
Obtain images with sufficient resolution and sharpness.
Achieve good contrast in the interior iris pattern with proper illumination.
Keep the iris well centred without unduly constraining the operator.
Eliminate artifacts as much as possible.
SEGMENTATION
The first stage of iris recognition is to isolate the actual iris region in a digital eye image. The iris
region can be approximated by two circles, one for the iris/sclera boundary and another, interior
to the first, for the iris/pupil boundary. The eyelids and eyelashes normally occlude the upper and
lower parts of the iris region. Also, specular reflections can occur within the iris region corrupting
the iris pattern. A technique is required to isolate and exclude these artifacts as well as locating
the circular iris region.
Hough Transform:
The Hough transform is a standard computer vision algorithm that can be used to determine the
parameters of simple geometric objects, such as lines and circles, present in an image. The circular
Hough transform can be employed to deduce the radius and centre coordinates of the pupil and
iris regions. Firstly, an edge map is generated by calculating the first derivatives of intensity values
in an eye image and then thresholding the result. From the edge map, votes are cast in Hough
space for the parameters of circles passing through each edge point. These parameters are the
centre coordinates x_c and y_c, and the radius r, which are able to define any circle according to
the equation (x − x_c)² + (y − y_c)² = r².
Taking only the vertical gradients for locating the iris boundary will reduce influence of the eyelids
when performing circular Hough transform, and not all of the edge pixels defining the circle are
required for successful localization. Not only does this make circle localization more accurate, it
also makes it more efficient, since there are fewer edge points to cast votes in the Hough space.
Figure: a) an eye image; b) corresponding edge map; c) edge map with only horizontal gradients;
d) edge map with only vertical gradients.
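The circular Hough voting described above can be sketched as follows. This is a minimal illustrative implementation (not from the report; the function name and the fixed-radius simplification are assumptions): each edge point casts votes for every candidate centre lying one radius away from it, and the accumulator peak gives the circle centre.

```python
import numpy as np

def circular_hough(edge_points, shape, radius, n_angles=64):
    """Accumulate votes for circle centres of one fixed radius: each edge
    point votes for every candidate centre lying `radius` away from it."""
    acc = np.zeros(shape, dtype=np.int32)
    thetas = np.linspace(0, 2*np.pi, n_angles, endpoint=False)
    for y, x in edge_points:
        cy = np.round(y - radius*np.sin(thetas)).astype(int)
        cx = np.round(x - radius*np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        acc[cy[ok], cx[ok]] += 1          # cast votes in Hough space
    return acc

# Synthetic edge map: points on a circle of radius 20 centred at (50, 60)
ts = np.linspace(0, 2*np.pi, 120, endpoint=False)
pts = [(round(50 + 20*np.sin(t)), round(60 + 20*np.cos(t))) for t in ts]
acc = circular_hough(pts, shape=(100, 120), radius=20)
cy, cx = np.unravel_index(acc.argmax(), acc.shape)
print(cy, cx)  # the vote peak falls at (or next to) the true centre (50, 60)
```

In practice the accumulator has a third dimension over candidate radii; the fixed radius here keeps the sketch short.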
Daugman makes use of an integro-differential operator for locating the circular iris and pupil
regions, and also the arcs of the upper and lower eyelids. The integro-differential operator is
defined as

    max over (r, x0, y0) of  | G_σ(r) * (∂/∂r) ∮_(r, x0, y0) I(x, y) / (2πr) ds |
where I(x,y) is the eye image, r is the radius to search for, Gσ(r) is a Gaussian smoothing function,
and s is the contour of the circle given by r, x0, y0. The operator searches for the circular path
where there is maximum change in pixel values, by varying the radius and centre x and y position
of the circular contour. The operator is applied iteratively with the amount of smoothing
progressively reduced in order to attain precise localization.
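A minimal one-dimensional sketch of this radial search, assuming the centre is already known and using a discrete derivative with Gaussian smoothing (the function names and the synthetic test image are illustrative, not from the report):

```python
import numpy as np

def circle_mean(img, x0, y0, r, n=360):
    """Mean intensity along a circle of radius r centred at (x0, y0)."""
    t = np.linspace(0, 2*np.pi, n, endpoint=False)
    xs = np.clip(np.round(x0 + r*np.cos(t)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.round(y0 + r*np.sin(t)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs].mean()

def best_radius(img, x0, y0, radii, sigma=2.0):
    """Pick the radius where the (Gaussian-smoothed) radial derivative of
    the circular mean intensity is largest -- the sharpest circular edge."""
    means = np.array([circle_mean(img, x0, y0, r) for r in radii])
    deriv = np.abs(np.diff(means))            # discrete d/dr of the contour integral
    k = np.exp(-0.5 * (np.arange(-4, 5) / sigma)**2)
    k /= k.sum()                              # normalized Gaussian kernel G_sigma
    smoothed = np.convolve(deriv, k, mode='same')
    return radii[int(np.argmax(smoothed))]

# Synthetic eye: dark pupil disc of radius 15 on a brighter background
yy, xx = np.mgrid[0:100, 0:100]
img = np.where((xx - 50)**2 + (yy - 50)**2 <= 15**2, 30.0, 160.0)
r = best_radius(img, 50, 50, np.arange(5, 40))
print(r)  # close to 15, the true pupil radius
```

The full operator additionally searches over candidate centres (x0, y0) and reduces the smoothing iteratively, as described above.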
The operator serves to find both the pupillary boundary and the outer (limbus) boundary of the
iris, although the initial search for the limbus also incorporates evidence of an interior pupil to
improve its robustness since the limbic boundary itself usually has extremely soft contrast when
long wavelength NIR illumination is used. Once the coarse-to-fine iterative searches for both these
boundaries have reached single-pixel precision, then a similar approach to detecting curvilinear
edges is used to localize both the upper and lower eyelid boundaries.
The path of contour integration is changed from circular to arcuate, with spline parameters fitted
by statistical estimation methods to model each eyelid boundary. Images with less than 50% of the
iris visible between the fitted eyelid splines are deemed inadequate, e.g., during a blink. The result
of all these localization operations is the isolation of the iris tissue from the other image regions,
indicated by the graphical overlay on the eye.
Isolation of the iris from the rest of the image. The white graphical overlays signify detected iris
boundaries resulting from the segmentation process.
Figure: Stages of segmentation. Top left) original eye image. Top right) two circles overlaid for the
iris and pupil boundaries, and two lines for the top and bottom eyelids. Bottom left) horizontal lines
drawn for each eyelid from the lowest/highest point of the fitted line. Bottom right) probable eyelid
and specular reflection areas isolated (black areas).
2.3.2 IMAGE NORMALIZATION
Once the iris region is successfully segmented from an eye image, the next stage is to transform
the iris region so that it has fixed dimensions in order to allow comparisons. The dimensional
inconsistencies between eye images are mainly due to the stretching of the iris caused by pupil
dilation from varying levels of illumination. Other sources of inconsistency include, varying imaging
distance, rotation of the camera, head tilt, and rotation of the eye within the eye socket. The
normalization process will produce iris regions, which have the same constant dimensions, so that
two photographs of the same iris under different conditions will have characteristic features at the
same spatial location.
The iris region is modelled as a flexible rubber sheet anchored at the iris boundary with the pupil
centre as the reference point. The rubber sheet model remaps each point within the iris region to
a pair of normalized polar coordinates (r, θ), where r lies on the interval [0, 1] and θ is the angle
in [0, 2π]:

    I(x(r, θ), y(r, θ)) → I(r, θ)

with

    x(r, θ) = (1 − r)·x_p(θ) + r·x_s(θ)
    y(r, θ) = (1 − r)·y_p(θ) + r·y_s(θ)

where I(x, y) is the iris region in the original Cartesian coordinates, (x_p(θ), y_p(θ)) are the
coordinates of the pupil boundary and (x_s(θ), y_s(θ)) are the coordinates of the iris boundary
along the θ direction.
Illustration of the normalization process for two images of the same iris taken under varying conditions
Normalization of two eye images of the same iris is shown in the figure. The pupil is smaller in the
bottom image; however, the normalization process is able to rescale the iris region so that it has
constant dimensions.
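The remapping equations above can be sketched directly. This is an illustrative nearest-neighbour implementation (function name and grid sizes are assumptions; production systems interpolate and handle non-concentric boundaries found by segmentation):

```python
import numpy as np

def rubber_sheet(img, pupil, iris, n_r=8, n_theta=32):
    """Daugman-style rubber sheet model: remap the annular iris region
    (between the pupil and iris circles) onto a fixed n_r x n_theta grid."""
    (px, py, pr), (ix, iy, ir) = pupil, iris
    out = np.zeros((n_r, n_theta))
    for i, r in enumerate(np.linspace(0.0, 1.0, n_r)):
        for j, t in enumerate(np.linspace(0, 2*np.pi, n_theta, endpoint=False)):
            # boundary points on the pupil and iris circles at angle t
            xp, yp = px + pr*np.cos(t), py + pr*np.sin(t)
            xs, ys = ix + ir*np.cos(t), iy + ir*np.sin(t)
            # linear interpolation between the two boundaries:
            # x(r, t) = (1 - r)*xp + r*xs, and likewise for y
            x = (1 - r)*xp + r*xs
            y = (1 - r)*yp + r*ys
            out[i, j] = img[int(round(y)), int(round(x))]
    return out

# Synthetic check: a radial-distance image, pupil radius 10, iris radius 30
yy, xx = np.mgrid[0:100, 0:100]
dist = np.sqrt((xx - 50.0)**2 + (yy - 50.0)**2)
out = rubber_sheet(dist, pupil=(50, 50, 10), iris=(50, 50, 30))
print(round(out[0].mean()), round(out[-1].mean()))  # innermost row ~10, outermost ~30
```

Because the grid size is fixed, two images of the same iris taken at different pupil dilations map to arrays of identical shape, which is exactly what makes template comparison possible.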
2.3.3 FEATURE ENCODING
In order to provide accurate recognition of individuals, the most discriminating information present
in an iris pattern must be extracted. Only the significant features of the iris must be encoded so
that comparisons between templates can be made. Most iris recognition systems make use of a
band pass decomposition of the iris image to create a biometric template. The template that is
generated in the feature encoding process will also need a corresponding matching metric, which
gives a measure of similarity between two iris templates.
Each isolated iris pattern is then demodulated to extract its phase information using quadrature
2D Gabor wavelets. This encoding process amounts to a patch-wise phase quantization of the iris
pattern, by identifying in which quadrant of the complex plane each resultant phasor lies when a
given area of the iris is projected onto complex-valued 2D Gabor wavelets:

    h_{Re,Im} = sgn_{Re,Im} ∫_ρ ∫_φ I(ρ, φ) e^(−iω(θ0 − φ)) e^(−(r0 − ρ)²/α²) e^(−(θ0 − φ)²/β²) ρ dρ dφ

where h_{Re,Im} can be regarded as a complex-valued bit whose real and imaginary parts are either
1 or 0 depending on the sign of the 2D integral; I(ρ, φ) is the raw iris image in a dimensionless
polar coordinate system that is size- and translation-invariant, and which also corrects for pupil
dilation as described in the normalization stage; α and β are the multi-scale 2D wavelet size
parameters, spanning an 8-fold range from 0.15 mm to 1.2 mm on the iris; ω is the wavelet
frequency, spanning three octaves in inverse proportion to β; and (r0, θ0) represent the polar
coordinates of each region of the iris for which the phasor coordinates h_{Re,Im} are computed.
Only phase information is used for recognizing irises because amplitude information is not very
discriminating, and it depends upon extraneous factors such as imaging contrast, illumination, and
camera gain. The phase bit settings which code the sequence of projection quadrants are shown in
the figure. The extraction of phase has the further advantage that phase angles are assigned regardless
of how poor the image contrast may be.
The phase demodulation process used to encode iris patterns. Local regions of an iris are projected
onto quadrature 2D Gabor wavelets, generating complex-valued projection coefficients whose real
and imaginary parts specify the coordinates of a phasor in the complex plane. The angle of each
phasor is quantized to one of the four quadrants, setting two bits of phase information. This
process is repeated all across the iris with many wavelet sizes, frequencies, and orientations, to
extract 2048 bits.
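The quadrant quantization itself is simple once the complex projection coefficients exist. A minimal sketch (the function name is illustrative; computing the Gabor coefficients themselves is omitted): each phasor contributes two bits, the signs of its real and imaginary parts.

```python
import numpy as np

def phase_bits(coeffs):
    """Quantize complex projection coefficients to two bits each:
    bit 0 = sign of the real part, bit 1 = sign of the imaginary part,
    i.e. which quadrant of the complex plane the phasor lies in."""
    re = (coeffs.real >= 0).astype(int)
    im = (coeffs.imag >= 0).astype(int)
    return np.stack([re, im], axis=-1).ravel()

# One phasor per quadrant of the complex plane
coeffs = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j])
print(phase_bits(coeffs))  # [1 1 0 1 0 0 1 0]
```

Repeating this over many wavelet positions, sizes and orientations yields the 2048-bit iris code described above.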
2.3.4 MATCHING
For matching, a test of statistical independence is required, which helps to compare the phase
codes for two different eyes. The test of statistical independence is implemented by the simple
Boolean Exclusive-OR (XOR) operator applied to the 2048-bit phase vectors that encode any two
iris templates, masked (ANDed) by both of their corresponding mask bit vectors to prevent non-iris
artifacts from influencing the iris comparison. The XOR operator detects disagreement between any
corresponding pair of bits, while the AND operator ensures that the compared bits are not corrupted
by eyelashes etc. The norms (|| ||) of the resultant bit vector and of the ANDed mask vector are
computed to determine a fractional Hamming distance:

    HD = || (codeA XOR codeB) AND maskA AND maskB || / || maskA AND maskB ||
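The masked fractional Hamming distance can be sketched in a few lines (an illustrative implementation; following the convention used later in this report, a '0' bit in a noise mask marks a usable, noise-free position):

```python
import numpy as np

def hamming_distance(code_a, mask_a, code_b, mask_b):
    """Fractional Hamming distance over bits that are valid in BOTH
    templates (mask bit 0 = usable, 1 = corrupted by eyelids/lashes)."""
    usable = (mask_a == 0) & (mask_b == 0)
    if not usable.any():
        return 1.0  # nothing comparable: report maximal dissimilarity
    disagree = (code_a != code_b) & usable
    return disagree.sum() / usable.sum()

a  = np.array([1, 0, 1, 1, 0, 0, 1, 0])
b  = np.array([1, 1, 1, 0, 0, 0, 1, 1])
ma = np.array([0, 0, 0, 0, 0, 0, 1, 1])  # last two bits of A are noisy
mb = np.zeros(8, dtype=int)               # B is clean everywhere
print(hamming_distance(a, ma, b, mb))     # 2 disagreements over 6 usable bits
```

A distance near 0 indicates the same iris; statistically independent codes from different eyes cluster near 0.5.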
The Hamming distance was chosen as a metric for recognition, since bit-wise
comparisons were necessary. The Hamming distance algorithm employed also incorporates noise
masking, so that only significant bits are used in calculating the Hamming distance between two
iris templates. Now when taking the Hamming distance, only those bits in the iris pattern that
correspond to ‘0’ bits in the noise masks of both iris patterns will be used in the calculation. In order to
account for rotational inconsistencies, when the Hamming distance of two templates is calculated,
one template is shifted left and right bit-wise and a number of Hamming distance values are
calculated from successive shifts. This bit-wise shifting in the horizontal direction corresponds to
rotation of the original iris region by an angle given by the angular resolution used.
If an angular resolution of 180 is used, each shift will correspond to a rotation of 2 degrees
in the iris region. This method is suggested by Daugman, and corrects for misalignments in the
normalized iris pattern caused by rotational differences during imaging. From the calculated
Hamming distance values, only the lowest is taken, since this corresponds to the best match
between two templates. The number of bits moved during each shift is given by two times the
number of filters used, since each filter will generate two bits of information from one pixel of the
normalized region. The actual number of shifts required to normalize rotational inconsistencies
will be determined by the maximum angle difference between two images of the same eye, and
one shift is defined as one shift to the left, followed by one shift to the right. The shifting process
for one shift is illustrated in Figure below.
Figure: An illustration of the shifting process. One shift is defined as one shift left, and one shift right of a
reference template. In this example one filter is used to encode the templates, so only two bits are moved
during a shift. The lowest Hamming distance, in this case zero, is then used since this corresponds to the
best match between the two templates.
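The shift-and-compare procedure above can be sketched as follows (an illustrative implementation over unmasked codes; the function name and shift range are assumptions): the template is circularly shifted both ways and the lowest Hamming distance is kept.

```python
import numpy as np

def min_shifted_distance(code_a, code_b, max_shift=4):
    """Compare two codes over a range of circular bit shifts and keep the
    lowest Hamming distance, compensating for rotational misalignment."""
    best = 1.0
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(code_b, s)              # circular bit shift
        hd = np.mean(code_a != shifted)           # fractional disagreement
        best = min(best, hd)
    return best

a = np.array([1, 1, 0, 0, 1, 0, 1, 0])
b = np.roll(a, 2)                  # same code, rotated by two bit positions
print(min_shifted_distance(a, b))  # 0.0 once the aligning shift is tried
```

With real templates each shift would move two bits per filter, as described above, and the noise masks would be shifted along with the codes.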
2.4 PERFORMANCE MEASURES
false accept rate or false match rate (FAR or FMR): the probability that the system incorrectly
matches the input pattern to a non-matching template in the database. It measures the percent
of invalid inputs which are incorrectly accepted.
false reject rate or false non-match rate (FRR or FNMR): the probability that the system fails
to detect a match between the input pattern and a matching template in the database. It
measures the percent of valid inputs which are incorrectly rejected.
equal error rate or crossover error rate (EER or CER): the rate at which both accept and reject
errors are equal. The value of the EER can be easily obtained from the ROC curve. The EER is a
quick way to compare the accuracy of devices with different ROC curves. In general, the device
with the lowest EER is most accurate.
failure to enroll rate (FTE or FER): the rate at which attempts to create a template from
an input are unsuccessful. This is most commonly caused by low-quality inputs.
failure to capture rate (FTC): Within automatic systems, the probability that the system
fails to detect a biometric input when presented correctly.
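The FAR and FRR definitions above can be computed directly from score distributions. A minimal sketch (the function name and the score values are hypothetical; scores are distances, so lower means more similar):

```python
import numpy as np

def far_frr(genuine, impostor, threshold):
    """FAR: fraction of impostor distances below threshold (wrongly accepted).
    FRR: fraction of genuine distances at/above threshold (wrongly rejected)."""
    far = np.mean(np.asarray(impostor) < threshold)
    frr = np.mean(np.asarray(genuine) >= threshold)
    return far, frr

genuine  = [0.10, 0.15, 0.20, 0.35]  # same-eye comparison distances (hypothetical)
impostor = [0.40, 0.45, 0.48, 0.30]  # different-eye distances (hypothetical)
far, frr = far_frr(genuine, impostor, threshold=0.32)
print(far, frr)  # 0.25 0.25
```

Sweeping the threshold and finding where FAR equals FRR gives the equal error rate (EER); at the threshold chosen here the two rates happen to coincide.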
2.5 DECISION ENVIRONMENT
A critical feature of this coding approach is the achievement of commensurability among iris
codes, by mapping all irises into a representation having universal format and constant length,
regardless of the apparent amount of iris detail. In the absence of commensurability among the
codes, one would be faced with the inevitable problem of comparing long codes with short codes,
showing partial agreement and partial disagreement in their lists of features.
Advantages
The iris is an internal organ that is well protected against damage by a highly transparent and
sensitive membrane; this is an advantage over fingerprints.
The iris has a flat, geometrical configuration, and the two complementary muscles that control the
diameter of the pupil make the iris shape more predictable.
An iris scan is similar to taking a photograph and can be performed from about 10 cm to a
few meters away.
Encoding and decision-making are tractable.
Genetic independence: no two eyes are the same.
DISADVANTAGES
The accuracy of iris scanners can be affected by changes in lighting.
The iris can be obscured by eyelashes, lenses and reflections.
The iris deforms non-elastically as the pupil changes size.
Iris scanners are significantly more expensive than some other forms of biometrics.
As with other photographic biometric technologies, iris recognition is susceptible to poor image
quality, with associated failure-to-enroll rates.
As with other identification infrastructure (national resident databases, ID cards, etc.), civil rights
activists have voiced concerns that iris-recognition technology might help governments to track
individuals against their will.
APPLICATIONS
Iris-based identification and verification technology has gained acceptance in a number of different
areas. Application of iris recognition technology can be limited only by imagination. Important
applications include the following:
Used in ATMs for more secure transactions.
Credit-card authentication.
CONCLUSIONS
There are many mature biometric systems available now. Proper design and implementation of a
biometric system can indeed increase the overall security. There are numerous conditions that
must be taken into account when designing a secure biometric system. First, it is necessary to realize
that biometric traits are not secrets; this implies that care should be taken, and it is not secure to
generate cryptographic keys directly from them. Second, it is necessary to trust the input device and
make the communication link secure. Third, the input device needs to be verified.
The Iridian process is designed for rapid exhaustive search over very large databases, a distinctive
capability required for authentication today. The extremely low probability of a false match enables
iris recognition algorithms to search through extremely large databases, even on a national or
planetary scale. The superiority of iris technology has already allowed it to make significant inroads
into identification and security venues which had been dominated by other biometrics. Iris-based
biometric technology has always been an exceptionally accurate one, and it may soon grow much
more prominent.
REFERENCES
1. J. G. Daugman, "High confidence visual recognition of persons by a test of statistical
independence," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no.
11, pp. 1148–1161, 1993.
2. J. G. Daugman, "How iris recognition works," IEEE Transactions on Circuits and Systems for
Video Technology, vol. 14, no. 1, pp. 21–30, 2004.
3. Amir Azizi and Hamid Reza Pourreza, "Efficient IRIS Recognition Through Improvement of
Feature Extraction and Subset Selection," (IJCSIS) International Journal of Computer Science
and Information Security, vol. 2, no. 1, June 2009.
4. www.wikipedia.com
5. John Daugman, "The importance of being random: statistical principles of iris recognition,"
The Computer Laboratory, University of Cambridge, Cambridge CB3 0FD, UK.
6. www.scribd.com/doc/50033821
7. Somnath Dey and Debasis Samanta, "Improved Feature Processing for Iris Biometric
Authentication System," International Journal of Electrical and Electronics Engineering, vol. 4,
no. 2, 2010.