Biometric Authentication


A Seminar Report ON

BIOMETRIC AUTHENTICATION
Submitted in partial fulfilment
for the award of the degree of
Bachelor of Technology
In
Engineering

SANJEEVAN COLLEGE OF
ENGINEERING AND TECHNOLOGY
INSTITUTE, PANHALA.
Academic Year (2023-2024)

SUBMITTED TO: Ni.G.Khan Mam SUBMITTED BY: BHETE OMKAR HANMANTRAO

Branch: Computer Science and Engineering.

Semester: Second.

PRN No:23063151242070.
Roll No: 108
CERTIFICATE

This is to certify that the Seminar Report entitled BIOMETRIC AUTHENTICATION


has been submitted by BHETE OMKAR HANMANTRAO in partial fulfillment
of the requirements for the degree of B.Tech in Engineering for the academic year
2023–2024.

He/She has undergone the requisite work as prescribed by Sanjeevan
Engineering & Technology Institute, Panhala.

(Prof. Sudhir P. Nangare.) (Dr. Sanjeev Jain.)

HOD B.S.H Principal

Place:-
Date:-
ACKNOWLEDGEMENT

This is an opportunity to express my heartfelt thanks to the people who were part of
this seminar in numerous ways and who gave me unending support right from the
beginning of the seminar.
I want to give my sincere thanks to the Principal, Dr. Sanjeev Jain, for his valuable
support.
I extend my thanks to Prof. Sudhir P. Nangare, Head of the Department, for his
constant support.
I express my deep sense of gratitude to Vice Principal Dr. S. G. Sapate for his
continuous cooperation and encouragement, and for the esteemed guidance provided by Prof.
Ni. G. Khan.

Yours Sincerely,
BHETE OMKAR HANMANTRAO

PRN No: 23063151242070


ABSTRACT
Humans have recognized each other by their various characteristics for ages. We
recognize others by their face when we meet them and by their voice as we speak to them. Identity
verification (authentication) in computer systems has traditionally been based on something that
one has (a key, a magnetic or chip card) or something one knows (a PIN, a password). Things like keys or
cards, however, tend to get stolen or lost, and passwords are often forgotten or disclosed.

To achieve more reliable verification or identification we should use something that really
characterizes the given person. Biometrics offers automated methods of identity verification or
identification based on measurable physiological or behavioral characteristics such as a
fingerprint or a voice sample. The characteristics are measurable and unique. These characteristics
should not be duplicable, but unfortunately it is often possible to create a copy that is accepted by
the biometric system as a true sample.

In biometric-based authentication, a legitimate user does not need to remember or
carry anything, and it is known to be more reliable than traditional authentication schemes.
However, the security of biometric systems can be undermined in a number of
ways. For instance, a biometric template can be replaced by an impostor's template in a system
database, or it might be stolen and replayed. Consequently, the impostor could gain unauthorized
access to a place or a system. Moreover, it has been shown that it is possible to create a physical
spoof starting from standard biometric templates. Hence, securing the biometric templates is vital
to maintaining the security and integrity of biometric systems.

This report gives an overview of what a biometric system is and a detailed overview
of a particular system, i.e. the iris recognition system.


Table of contents
1. INTRODUCTION

1.1 HISTORY AND DEVELOPMENT OF BIOMETRICS

1.2 BASIC STRUCTURE OF A BIOMETRIC SYSTEM

1.3 CLASSIFICATION OF BIOMETRICS

1.4 TYPES OF BIOMETRICS

2. IRIS RECOGNITION

2.1 INTRODUCTION

2.2 ANATOMY OF THE HUMAN IRIS

2.3 STAGES INVOLVED IN IRIS DETECTION

2.3.1 IMAGE ACQUISITION AND SEGMENTATION

2.3.2 NORMALIZATION

2.3.3 FEATURE ENCODING AND MATCHING

2.4 BIOMETRIC SYSTEM PERFORMANCES

2.5 DECISION ENVIRONMENT

3. ADVANTAGES AND DISADVANTAGES

4. CONCLUSION

1 INTRODUCTION
Biometrics are automated methods of identifying a person or verifying the identity of a person
based on a physiological or behavioral characteristic. Biometric-based authentication is
automatic identity verification based on individual physiological or behavioral characteristics, such
as fingerprints, voice, face and iris. Since biometrics are extremely difficult to forge and cannot be
forgotten or stolen, biometric authentication offers a convenient, accurate, irreplaceable and highly
secure alternative for an individual, which gives it advantages over traditional cryptography-
based authentication schemes. It has become a hot interdisciplinary topic involving biometrics and
cryptography. Biometric data is personal, private information that is uniquely and permanently
associated with a person and cannot be replaced like passwords or keys. Once an adversary
compromises the biometric data of a user, the data is lost forever, which may lead to a huge
financial loss. Hence, one major concern is how a person's biometric data, once collected, can be
protected.

1.1 HISTORY AND DEVELOPMENT OF BIOMETRICS


The idea of using iris patterns for personal identification was originally proposed in 1936 by
ophthalmologist Frank Burch. By the 1980s the idea had appeared in James Bond films, but it still
remained science fiction and conjecture. In 1987, two other ophthalmologists, Aram Safir and
Leonard Flom, patented this idea, and they asked John Daugman to try to create actual
algorithms for iris recognition. These algorithms, which Daugman patented in 1994, are the
basis for all current iris recognition systems and products.

Daugman's algorithms are owned by Iridian Technologies, and the process is licensed to several
other companies that serve as system integrators and developers of special platforms exploiting
iris recognition. In recent years several products have been developed for acquiring iris images over
a range of distances and in a variety of applications. One active imaging system, developed in 1996
by licensee Sensar, deployed special cameras in bank ATMs to capture iris images at a distance of
up to 1 meter. This active imaging system was installed in cash machines both by NCR Corp. and
by Diebold Corp. in successful public trials in several countries during 1997 to 1999. A new and
smaller imaging device is the low-cost "Panasonic Authenticam" digital camera for handheld,
desktop, e-commerce and other information security applications. Ticketless air travel, check-in
and security procedures based on iris recognition kiosks in airports have been developed by
EyeTicket. Companies in several countries are now using Daugman's algorithms in a variety of
products.

1.2 BASIC STRUCTURE OF A BIOMETRIC SYSTEM

Biometric authentication requires comparing a registered or enrolled biometric sample (biometric
template or identifier) against a newly captured biometric sample (for example, a fingerprint
captured during a login).
During Enrollment, a sample of the biometric trait is captured, processed by a computer, and stored
for later comparison.
Biometric recognition can be used in Identification mode, where the biometric system identifies a
person from the entire enrolled population by searching a database for a match based solely on the
biometric. For example, an entire database can be searched to verify a person has not applied for
entitlement benefits under two different names. This is sometimes called “one-to-many” matching.
A system can also be used in Verification mode, where the biometric system authenticates a
person’s claimed identity from their previously enrolled pattern. This is also called “one-to-one”
matching. In most computer access or network access environments, verification mode would be
used. A user enters an account, user name, or inserts a token such as a smart card, but instead of
entering a password, a simple touch with a finger or a glance at a camera is enough to authenticate
the user.
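The difference between the two modes can be sketched in a few lines of Python. This is a minimal, library-free illustration under assumed names: match_score, the enrolled database and the threshold value are hypothetical placeholders, not the matching logic of any particular product.

# Minimal sketch of verification (1:1) vs. identification (1:N) matching.
# match_score() and the enrolled database are hypothetical placeholders.

def match_score(template_a, template_b):
    # In a real system this would compare two biometric templates
    # (e.g. iris codes) and return a similarity score in [0, 1].
    return sum(a == b for a, b in zip(template_a, template_b)) / len(template_a)

def verify(claimed_id, probe, database, threshold=0.9):
    """One-to-one: compare the probe only against the claimed identity."""
    enrolled = database.get(claimed_id)
    return enrolled is not None and match_score(probe, enrolled) >= threshold

def identify(probe, database, threshold=0.9):
    """One-to-many: search the whole enrolled population for the best match."""
    best_id, best_score = None, 0.0
    for user_id, enrolled in database.items():
        score = match_score(probe, enrolled)
        if score > best_score:
            best_id, best_score = user_id, score
    return best_id if best_score >= threshold else None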
1.3 Classification of Biometrics

Biometrics encompasses both physiological and behavioral characteristics. A physiological
characteristic is a relatively stable physical feature such as a fingerprint, iris pattern, retina pattern
or a facial feature. A behavioral trait used in identification is a person's signature, keyboard typing
pattern or speech pattern. The degree of intra-personal variation is smaller in a physical
characteristic than in a behavioral one.

1.4 TYPES OF BIOMETRICS


Fingerprints: The patterns of friction ridges and valleys on an individual's fingertips are unique to
that individual. For decades, law enforcement has been classifying and determining identity by
matching key points of ridge endings and bifurcations. Fingerprints are unique for each finger of a
person including identical twins. One of the most commercially available biometric technologies,
fingerprint recognition devices for desktop and laptop access are now widely available from many
different vendors at a low cost. With these devices, users no longer need to type passwords –
instead, only a touch provides instant access.

Face Recognition: The identification of a person by their facial image can be done in a number of
different ways such as by capturing an image of the face in the visible spectrum using an
inexpensive camera or by using the infrared patterns of facial heat emission. Facial recognition in
visible light typically models key features from the central portion of a facial image. Using a wide
assortment of cameras, the visible light systems extract features from the captured image(s) that
do not change over time while avoiding superficial features such as facial expressions or hair.

Speaker Recognition: Speaker recognition uses the acoustic features of speech that have been
found to differ between individuals. These acoustic patterns reflect both anatomy and learned
behavioral patterns. This incorporation of learned patterns into the voice templates has earned
speaker recognition its classification as a "behavioral biometric." Speaker recognition systems
employ three styles of spoken input: text-dependent, text-prompted and text-independent. Most
speaker verification applications use text-dependent input, which involves selection and
enrollment of one or more voice passwords. Text-prompted input is used whenever there is
concern about impostors. The various technologies used to process and store voiceprints include
hidden Markov models, pattern matching algorithms, neural networks, matrix representation and
decision trees.

Iris Recognition: This recognition method uses the iris of the eye which is the colored area that
surrounds the pupil. Iris patterns are thought to be unique. The iris patterns are obtained through a
video-based image acquisition system. Iris scanning devices have been used in personal
authentication applications for several years. Systems based on iris recognition have substantially
decreased in price and this trend is expected to continue. The technology works well in both
verification and identification modes.

Hand and Finger Geometry: To achieve personal authentication, a system may measure either
physical characteristics of the fingers or the hands. These include length, width, thickness and
surface area of the hand. One interesting characteristic is that some systems require a small
biometric sample. It can frequently be found in physical access control in commercial and
residential applications, in time and attendance systems and in general personal authentication
applications.
Signature Verification: This technology uses the dynamic analysis of a signature to authenticate a
person. The technology is based on measuring speed, pressure and angle used by the person when
a signature is produced. One focus for this technology has been e-business applications and other
applications where a signature is an accepted method of personal authentication.

2. IRIS RECOGNITION

2.1 INTRODUCTION
Iris recognition systems, in particular, are gaining interest because the iris’s rich texture offers a
strong biometric clue for recognizing individuals. Located just behind the cornea and in front of
the lens, the iris uses the dilator and sphincter muscles that govern pupil size to control the amount
of light that enters the eye. Near-infrared (NIR) images of the iris’s anterior surface exhibit complex
patterns that computer systems can use to recognize individuals. Because NIR lighting can
penetrate the iris’s surface, it can reveal the intricate texture details that are present even in dark-
colored irises. The iris’s textural complexity and its variation across eyes have led scientists to
postulate that the iris is unique across individuals. Further, the iris is the only internal organ readily
visible from the outside. Thus, unlike fingerprints or palm prints, environmental effects cannot
easily alter its pattern. An iris recognition system uses pattern matching to compare two iris images
and generate a match score that reflects their degree of similarity or dissimilarity.
2.2 Anatomy of the Human Iris

The iris is a thin circular diaphragm, which lies between the cornea and the lens of
the human eye. The iris is perforated close to its centre by a circular aperture known as the pupil.
The function of the iris is to control the amount of light entering through the pupil, and this is done
by the sphincter and the dilator muscles, which adjust the size of the pupil. The average diameter
of the iris is 12 mm, and the pupil size can vary from 10% to 80% of the iris diameter.

The iris consists of a number of layers; the lowest is the epithelium layer, which
contains dense pigmentation cells. The stromal layer lies above the epithelium layer and contains
blood vessels, pigment cells and the two iris muscles. The density of stromal pigmentation
determines the colour of the iris. The externally visible surface of the multi-layered iris contains
two zones, which often differ in colour: an outer ciliary zone and an inner pupillary zone,
divided by the collarette, which appears as a zigzag pattern.
Formation of the iris begins during the third month of embryonic life.

The unique pattern on the surface of the iris is formed during the first year of
life, and pigmentation of the stroma takes place over the first few years. Formation of the unique
patterns of the iris is random and not related to any genetic factors. The only characteristic that is
dependent on genetics is the pigmentation of the iris, which determines its colour. Due to the
epigenetic nature of iris patterns, the two eyes of an individual contain completely independent
iris patterns, and identical twins possess uncorrelated iris patterns.
2.3 STAGES INVOLVED IN IRIS DETECTION

It includes three main stages:

2.3.1) Image Acquisition and Segmentation

2.3.2) Image Normalization

2.3.3) Feature Encoding and Matching

2.3.1 IMAGE ACQUISITION AND SEGMENTATION

IMAGE ACQUISITION

One of the major challenges of automated iris recognition is to capture a high-quality image of the
iris while remaining non-invasive to the human operator.
Concerns for the image acquisition rig include:
• obtaining images with sufficient resolution and sharpness
• good contrast in the interior iris pattern under proper illumination
• an iris well centred in the image without unduly constraining the operator
• artifacts eliminated as much as possible

SEGMENTATION

The first stage of iris recognition is to isolate the actual iris region in a digital eye image. The iris
region can be approximated by two circles, one for the iris/sclera boundary and another, interior
to the first, for the iris/pupil boundary. The eyelids and eyelashes normally occlude the upper and
lower parts of the iris region. Also, specular reflections can occur within the iris region corrupting
the iris pattern. A technique is required to isolate and exclude these artifacts as well as to locate
the circular iris region.

This can be done by using the following techniques:

• Hough Transform
• Daugman's Integro-Differential Operator

Hough Transform:

The Hough transform is a standard computer vision algorithm that can be used to determine the
parameters of simple geometric objects, such as lines and circles, present in an image. The circular
Hough transform can be employed to deduce the radius and centre coordinates of the pupil and
iris regions. Firstly, an edge map is generated by calculating the first derivatives of intensity values
in an eye image and then thresholding the result. From the edge map, votes are cast in Hough
space for the parameters of circles passing through each edge point. These parameters are the
centre coordinates xc and yc, and the radius r, which are able to define any
circle according to the equation:

(x − xc)² + (y − yc)² = r²


A maximum point in the Hough space will correspond to the radius and centre coordinates of the
circle best defined by the edge points. Wildes et al. make use of the parabolic Hough transform to
detect the eyelids, approximating the upper and lower eyelids with parabolic arcs.
In performing the preceding edge detection step, Wildes et al. bias the derivatives in the horizontal
direction for detecting the eyelids, and in the vertical direction for detecting the outer circular
boundary of the iris, this is illustrated in Figure shown below. The motivation for this is that the
eyelids are usually horizontally aligned, and also the eyelid edge map will corrupt the circular iris
boundary edge map if using all gradient data.

Taking only the vertical gradients for locating the iris boundary will reduce influence of the eyelids
when performing circular Hough transform, and not all of the edge pixels defining the circle are
required for successful localization. Not only does this make circle localization more accurate, it
also makes it more efficient, since there are fewer edge points to cast votes in the Hough space.

Figure: a) an eye image b) corresponding edge map c) edge map with only horizontal gradients d) edge map with
only vertical gradients.
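As a rough illustration of this step, the sketch below uses OpenCV's circular Hough transform (cv2.HoughCircles) to estimate the pupil and iris circles. The file name, radius ranges and accumulator parameters are assumptions made for the example, not the settings used by Wildes et al.

# Sketch: locating pupil and iris boundaries with the circular Hough transform.
# "eye.png" and the radius ranges below are illustrative assumptions.
import cv2
import numpy as np

eye = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)
blurred = cv2.medianBlur(eye, 5)              # suppress noise before edge detection

# HoughCircles internally computes gradients (Canny) and accumulates votes
# in (xc, yc, r) space; the strongest peak gives the best-fitting circle.
pupil = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=eye.shape[0],
                         param1=100, param2=30, minRadius=20, maxRadius=70)
iris = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=eye.shape[0],
                        param1=100, param2=30, minRadius=80, maxRadius=150)

if pupil is not None and iris is not None:
    px, py, pr = np.round(pupil[0, 0]).astype(int)
    ix, iy, ir = np.round(iris[0, 0]).astype(int)
    print("pupil centre", (px, py), "radius", pr)
    print("iris centre", (ix, iy), "radius", ir)

In practice the pupil and iris searches are restricted to different radius ranges, exactly as above, so that the two concentric circles are not confused with each other.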

Daugman’s Integro-differential Operator

Daugman makes use of an integro-differential operator for locating the circular iris and pupil
regions, and also the arcs of the upper and lower eyelids. The integro-differential operator is
defined as:

max(r, x0, y0) | Gσ(r) * ∂/∂r ∮(r, x0, y0) I(x,y)/(2πr) ds |

where I(x,y) is the eye image, r is the radius to search for, Gσ(r) is a Gaussian smoothing function,
and s is the contour of the circle given by r, x0, y0. The operator searches for the circular path
where there is maximum change in pixel values, by varying the radius and centre x and y position
of the circular contour. The operator is applied iteratively with the amount of smoothing
progressively reduced in order to attain precise localization.
The operator serves to find both the pupillary boundary and the outer (limbus) boundary of the
iris, although the initial search for the limbus also incorporates evidence of an interior pupil to
improve its robustness since the limbic boundary itself usually has extremely soft contrast when
long wavelength NIR illumination is used. Once the coarse-to-fine iterative searches for both these
boundaries have reached single-pixel precision, then a similar approach to detecting curvilinear
edges is used to localize both the upper and lower eyelid boundaries.
The path of contour integration is changed from circular to arcuate, with spline parameters fitted
by statistical estimation methods to model each eyelid boundary. Images with less than 50% of the
iris visible between the fitted eyelid splines are deemed inadequate, e.g., in blink. The result of all
these localization operations is the isolation of iris tissue from other image regions, by the graphical
overlay on the eye.

Figure: Isolation of the iris from the rest of the image. The white graphical overlays signify detected iris
boundaries resulting from the segmentation process.
Figure: Stages of segmentation. Top left) original eye image; Top right) two circles overlaid for iris and pupil
boundaries, and two lines for top and bottom eyelids; Bottom left) horizontal lines drawn for each eyelid from
the lowest/highest point of the fitted line; Bottom right) probable eyelid and specular reflection areas
isolated (black areas).
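A simplified sketch of the integro-differential search is given below. It evaluates the blurred radial derivative of the circular contour integral over a candidate grid of centres and radii, assuming NumPy and SciPy are available, and it omits Daugman's coarse-to-fine refinement and the arcuate eyelid fitting.

# Sketch of Daugman's integro-differential operator: find the (x0, y0, r)
# maximizing the Gaussian-smoothed radial derivative of the normalized
# contour integral of image intensity. A simplified illustration only.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def circle_integral(img, x0, y0, r, n_points=64):
    """Mean intensity along the circle of radius r centred at (x0, y0)."""
    theta = np.linspace(0, 2 * np.pi, n_points, endpoint=False)
    xs = np.clip((x0 + r * np.cos(theta)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip((y0 + r * np.sin(theta)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs].mean()

def integro_differential(img, centres, radii, sigma=2.0):
    """Return the (x0, y0, r) giving the largest |Gσ(r) * d/dr of the integral|."""
    radii = np.asarray(radii, dtype=float)
    best, best_val = None, -np.inf
    for (x0, y0) in centres:          # candidate centres, e.g. a coarse grid
        integrals = np.array([circle_integral(img, x0, y0, r) for r in radii])
        derivative = np.gradient(integrals, radii)       # d/dr of the integral
        smoothed = gaussian_filter1d(derivative, sigma)  # Gσ(r) smoothing
        idx = int(np.argmax(np.abs(smoothed)))
        if abs(smoothed[idx]) > best_val:
            best_val, best = abs(smoothed[idx]), (x0, y0, radii[idx])
    return best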

2.3.2 Image Normalization

Once the iris region is successfully segmented from an eye image, the next stage is to transform
the iris region so that it has fixed dimensions in order to allow comparisons. The dimensional
inconsistencies between eye images are mainly due to the stretching of the iris caused by pupil
dilation from varying levels of illumination. Other sources of inconsistency include varying imaging
distance, rotation of the camera, head tilt, and rotation of the eye within the eye socket. The
normalization process will produce iris regions, which have the same constant dimensions, so that
two photographs of the same iris under different conditions will have characteristic features at the
same spatial location.

This is done using the following technique:

Daugman’s Rubber Sheet Model


The homogeneous rubber sheet model assigns, to each point in the iris, regardless of iris size, a pair
of dimensionless real coordinates (r, θ), where r lies in the unit interval [0, 1] and θ is the angle in
[0, 2π]. The remapping or normalization of the iris image I(x,y) from raw Cartesian coordinates (x,y) to the
doubly dimensionless and non-concentric coordinate system (r, θ) can be represented as:

I(x(r, θ), y(r, θ)) → I(r, θ)

where I(x,y) is the original iris region in Cartesian coordinates, (xp(θ), yp(θ)) are the coordinates of the
pupil boundary and (xs(θ), ys(θ)) are the coordinates of the iris boundary along the θ direction, and the
remapped coordinates are determined by:
x(r, θ) = (1 − r)·xp(θ) + r·xs(θ)
y(r, θ) = (1 − r)·yp(θ) + r·ys(θ)

The iris region is modelled as a flexible rubber sheet anchored at the iris boundary with the pupil
centre as the reference point.

Illustration of the normalization process for two images of the same iris taken under varying conditions

Normalization of two eye images of the same iris is shown in the figure. The pupil is smaller in the
bottom image; however, the normalization process is able to rescale the iris region so that it has
constant dimensions.
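A minimal sketch of the rubber sheet remapping is shown below. It assumes the pupil and iris circles have already been found by the segmentation stage, and the output resolution values are illustrative choices, not prescribed constants.

# Sketch of Daugman's rubber sheet normalization: remap the annular iris
# region to a fixed-size rectangular block in (r, theta) coordinates.
import numpy as np

def rubber_sheet(img, pupil, iris, radial_res=64, angular_res=256):
    """pupil and iris are (xc, yc, radius) circles from the segmentation stage."""
    (xp, yp, rp), (xi, yi, ri) = pupil, iris
    out = np.zeros((radial_res, angular_res), dtype=img.dtype)
    for j in range(angular_res):
        theta = 2 * np.pi * j / angular_res
        # boundary points along direction theta on the pupil and iris circles
        x_p, y_p = xp + rp * np.cos(theta), yp + rp * np.sin(theta)
        x_s, y_s = xi + ri * np.cos(theta), yi + ri * np.sin(theta)
        for i in range(radial_res):
            r = i / (radial_res - 1)                 # r in [0, 1]
            x = (1 - r) * x_p + r * x_s              # x(r, theta)
            y = (1 - r) * y_p + r * y_s              # y(r, theta)
            yy = int(np.clip(round(y), 0, img.shape[0] - 1))
            xx = int(np.clip(round(x), 0, img.shape[1] - 1))
            out[i, j] = img[yy, xx]                  # sample the original image
    return out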
2.3.3 Feature Encoding

In order to provide accurate recognition of individuals, the most discriminating information present
in an iris pattern must be extracted. Only the significant features of the iris must be encoded so
that comparisons between templates can be made. Most iris recognition systems make use of a
band pass decomposition of the iris image to create a biometric template. The template that is
generated in the feature encoding process will also need a corresponding matching metric, which
gives a measure of similarity between two iris templates.

Each isolated iris pattern is then demodulated to extract its phase information using quadrature
2D Gabor wavelets. This encoding process amounts to a patch-wise phase quantization of the iris
pattern, by identifying in which quadrant of the complex plane each resultant phasor lies when a
given area of the iris is projected onto complex-valued 2D Gabor wavelets:

h{Re,Im} = sgn{Re,Im} ∬ρ,φ I(ρ,φ) e^(−iω(θ0−φ)) e^(−(r0−ρ)²/α²) e^(−(θ0−φ)²/β²) ρ dρ dφ

where h{Re,Im} can be regarded as a complex-valued bit whose real and imaginary parts are either
1 or 0 depending on the sign of the 2D integral; I(ρ,φ) is the raw iris image in a dimensionless
polar coordinate system that is size- and translation-invariant, and which also corrects for pupil
dilation as explained in the previous section; α and β are the multi-scale 2D wavelet size parameters,
spanning an 8-fold range from 0.15 mm to 1.2 mm on the iris; ω is the wavelet frequency, spanning three
octaves in inverse proportion to β; and (r0, θ0) represent the polar coordinates of each region of the
iris for which the phasor coordinates h{Re,Im} are computed.
Only phase information is used for recognizing irises because amplitude information is not very
discriminating, and it depends upon extraneous factors such as imaging contrast, illumination, and
camera gain. The phase bit settings code the sequence of projection quadrants, as shown in the
figure. The extraction of phase has the further advantage that phase angles are assigned regardless
of how poor the image contrast may be.
Figure: The phase demodulation process used to encode iris patterns. Local regions of an iris are projected
onto quadrature 2D Gabor wavelets, generating complex-valued projection coefficients whose real
and imaginary parts specify the coordinates of a phasor in the complex plane. The angle of each
phasor is quantized to one of the four quadrants, setting two bits of phase information. This
process is repeated all across the iris with many wavelet sizes, frequencies, and orientations, to
extract 2048 bits.
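The sketch below illustrates the idea of phase quantization on a normalized iris image using a single 1D complex Gabor kernel applied along each angular row. The kernel parameters are illustrative assumptions; a full encoder would use multiple wavelet sizes, frequencies and orientations, and would also produce a noise mask for occluded regions.

# Sketch of phase encoding: filter the normalized iris with a quadrature
# (complex) Gabor kernel and keep only the signs of the real and imaginary
# responses, giving two phase bits per sample.
import numpy as np

def gabor_1d(length=31, wavelength=12.0, sigma=4.0):
    """Complex 1D Gabor kernel (Gaussian envelope times complex exponential)."""
    x = np.arange(length) - length // 2
    return np.exp(-x**2 / (2 * sigma**2)) * np.exp(1j * 2 * np.pi * x / wavelength)

def encode(normalized_iris, kernel=None):
    """Return a flattened bit vector (the iris code) for a normalized iris image."""
    if kernel is None:
        kernel = gabor_1d()
    rows, cols = normalized_iris.shape
    code = np.zeros((rows, cols, 2), dtype=np.uint8)
    for i in range(rows):
        # convolve each angular row (DC removed) with the complex Gabor kernel
        row = normalized_iris[i].astype(float)
        response = np.convolve(row - row.mean(), kernel, mode="same")
        code[i, :, 0] = (response.real > 0).astype(np.uint8)   # Re phase bit
        code[i, :, 1] = (response.imag > 0).astype(np.uint8)   # Im phase bit
    return code.reshape(-1)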

Matching
For matching, a test of statistical independence is required, which helps to compare the phase
codes for two different eyes. The test of statistical independence is implemented by the simple
Boolean Exclusive-OR operator (XOR) applied to the 2048-bit phase vectors that encode any two iris
templates, masked by both of their corresponding mask bit vectors to prevent non-iris artifacts
from influencing the iris comparison. The XOR operator detects disagreement between any
corresponding pair of bits, while the AND operator ensures that the compared bits are not corrupted
by eyelashes, etc. The norms (|| ||) of the resultant bit vector and of the ANDed mask vector are
computed to determine a fractional Hamming distance.

The Hamming distance is the measure of dissimilarity between any two irises:

HD = ||(code A XOR code B) AND (mask A AND mask B)|| / ||(mask A AND mask B)||

where {code A, code B} are the phase code bit vectors and {mask A, mask B} are the mask bit vectors.
We can see that the numerator is the number of differences between the mutually non-bad
bits of code A and code B, and that the denominator is the number of mutually non-bad bits.
If the HD result is 0, it is a perfect match.

The Hamming distance was chosen as a metric for recognition, since bit-wise
comparisons were necessary. The Hamming distance algorithm employed also incorporates noise
masking, so that only significant bits are used in calculating the Hamming distance between two
iris templates. Now, when taking the Hamming distance, only those bits in the iris pattern that
correspond to '0' bits in the noise masks of both iris patterns will be used in the calculation. In order to
account for rotational inconsistencies, when the Hamming distance of two templates is calculated,
one template is shifted left and right bit-wise and a number of Hamming distance values are
calculated from successive shifts. This bit-wise shifting in the horizontal direction corresponds to
rotation of the original iris region by an angle given by the angular resolution used.
If an angular resolution of 180 pixels is used, each shift will correspond to a rotation of 2 degrees
in the iris region. This method is suggested by Daugman, and corrects for misalignments in the
normalized iris pattern caused by rotational differences during imaging. From the calculated
Hamming distance values, only the lowest is taken, since this corresponds to the best match
between two templates. The number of bits moved during each shift is given by two times the
number of filters used, since each filter will generate two bits of information from one pixel of the
normalized region. The actual number of shifts required to normalize rotational inconsistencies
will be determined by the maximum angle difference between two images of the same eye, and
one shift is defined as one shift to the left, followed by one shift to the right. The shifting process
for one shift is illustrated in Figure below.
Fig: An illustration of the shifting process. One shift is defined as one shift left and one shift right of a
reference template. In this example one filter is used to encode the templates, so only two bits are moved
during a shift. The lowest Hamming distance, in this case zero, is then used since this corresponds to the
best match between the two templates.
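The masked, shift-compensated Hamming distance described above can be sketched as follows. The bit layout (two bits per angular sample) and the maximum shift are assumptions made for the example.

# Sketch of the matching step: masked fractional Hamming distance between
# two iris codes, with bit-wise shifts of one template to compensate for
# rotational differences between the two captures.
import numpy as np

def hamming_distance(code_a, mask_a, code_b, mask_b):
    usable = np.logical_and(mask_a, mask_b)          # mutually non-bad bits
    if usable.sum() == 0:
        return 1.0
    disagree = np.logical_xor(code_a, code_b) & usable
    return disagree.sum() / usable.sum()

def best_hamming_distance(code_a, mask_a, code_b, mask_b,
                          max_shift=8, bits_per_shift=2):
    """Shift template B left/right bit-wise and keep the lowest distance."""
    best = 1.0
    for s in range(-max_shift, max_shift + 1):
        shift = s * bits_per_shift
        hd = hamming_distance(code_a, mask_a,
                              np.roll(code_b, shift), np.roll(mask_b, shift))
        best = min(best, hd)
    return best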

2.4 Biometric System Performance


The following are used as performance metrics for biometric systems: [4]

• false accept rate or false match rate (FAR or FMR): the probability that the system incorrectly
matches the input pattern to a non-matching template in the database. It measures the percent
of invalid inputs which are incorrectly accepted.
• false reject rate or false non-match rate (FRR or FNMR): the probability that the system fails
to detect a match between the input pattern and a matching template in the database. It
measures the percent of valid inputs which are incorrectly rejected.
• equal error rate or crossover error rate (EER or CER): the rate at which both accept and reject
errors are equal. The value of the EER can be easily obtained from the ROC curve. The EER is a
quick way to compare the accuracy of devices with different ROC curves. In general, the device
with the lowest EER is the most accurate.
• failure to enroll rate (FTE or FER): the rate at which attempts to create a template from
an input are unsuccessful. This is most commonly caused by low-quality inputs.
• failure to capture rate (FTC): within automatic systems, the probability that the system
fails to detect a biometric input when presented correctly.
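Given sets of genuine (same-eye) and impostor (different-eye) match scores, these rates can be estimated as in the sketch below, which assumes lower scores mean better matches (as with the Hamming distance above).

# Sketch: estimating FAR, FRR and the EER from genuine and impostor
# match-score distributions (lower score = better match).
import numpy as np

def far_frr(genuine_scores, impostor_scores, threshold):
    far = np.mean(np.asarray(impostor_scores) <= threshold)  # false accepts
    frr = np.mean(np.asarray(genuine_scores) > threshold)    # false rejects
    return far, frr

def equal_error_rate(genuine_scores, impostor_scores, steps=1000):
    """Scan thresholds and return the operating point where FAR ≈ FRR."""
    best_t, best_gap = 0.0, 1.0
    for t in np.linspace(0.0, 1.0, steps):
        far, frr = far_frr(genuine_scores, impostor_scores, t)
        if abs(far - frr) < best_gap:
            best_gap, best_t = abs(far - frr), t
    far, frr = far_frr(genuine_scores, impostor_scores, best_t)
    return (far + frr) / 2, best_t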
2.5 Decision Environment

The performance of any biometric identification scheme is characterized by its "Decision
Environment". This is a graph superimposing the two fundamental histograms of similarity that the
test generates: one when comparing biometric measurements from the SAME person (different
times, environments, or conditions), and the other when comparing measurements from
DIFFERENT persons. When the biometric template of a presenting person is compared to a
previously enrolled database of templates to determine the person's identity, a criterion threshold
(which may be adaptive) is applied to each similarity score. Because this determines whether any
two templates are deemed to be "same" or "different", the two fundamental distributions should
ideally be well separated, as any overlap between them causes decision errors.

3. ADVANTAGES AND DISADVANTAGES

A critical feature of this coding approach is the achievement of commensurability among iris
codes, by mapping all irises into a representation having universal format and constant length,
regardless of the apparent amount of iris detail. In the absence of commensurability among the
codes, one would be faced with the inevitable problem of comparing long codes with short codes,
showing partial agreement and partial disagreement in their lists of features.
Advantages

• The iris is an internal organ that is well protected against damage by a highly transparent and
sensitive membrane. This gives it an advantage over fingerprints.
• Its flat, geometrical configuration, with two complementary muscles controlling the diameter
of the pupil, makes the iris shape more predictable.
• An iris scan is similar to taking a photograph and can be performed from about 10 cm to a
few meters away.
• Encoding and decision-making are tractable.
• Genetic independence: no two eyes are the same.

DISADVANTAGES

• The accuracy of iris scanners can be affected by changes in lighting.
• The iris can be obscured by eyelashes, lenses and reflections.
• The iris deforms non-elastically as the pupil changes size.
• Iris scanners are significantly more expensive than some other forms of biometrics.
• As with other photographic biometric technologies, iris recognition is susceptible to poor image
quality, with associated failure-to-enroll rates.
• As with other identification infrastructure (national resident databases, ID cards, etc.), civil rights
activists have voiced concerns that iris-recognition technology might help governments to track
individuals beyond their will.

APPLICATIONS
Iris-based identification and verification technology has gained acceptance in a number of different
areas. Application of iris recognition technology can be limited only by imagination. The
important applications include the following:

• Used in ATMs for more secure transactions.
• Used in airports for security purposes.
• Computer login: the iris as a living password.
• Credit-card authentication.
• Secure financial transactions (e-commerce, banking).
• "Biometric-key cryptography" for encrypting/decrypting messages.
• Driving licenses and other personal certificates.
• Entitlements and benefits authentication.
• Forensics, birth certificates, and tracking missing or wanted persons.

CONCLUSIONS
There are many mature biometric systems available now. Proper design and implementation of a
biometric system can indeed increase overall security. There are numerous conditions that
must be taken into account when designing a secure biometric system. First, it is necessary to realize
that biometrics are not secrets. This implies that care should be taken, and it is not secure to generate
cryptographic keys directly from them. Second, it is necessary to trust the input device and make the
communication link secure. Third, the input device needs to be verified.
The Iridian process is designed for rapid exhaustive search of very large databases, a distinctive capability
required for authentication today. The extremely low probabilities of getting a false match enable
the iris recognition algorithms to search through extremely large databases, even of a national or
planetary scale. Iris technology's superiority has already allowed it to make significant inroads
into identification and security venues which had been dominated by other biometrics. Iris-based
biometric technology has always been an exceptionally accurate one, and it may soon grow much
more prominent.

REFERENCES
1. J. G. Daugman, “High confidence visual recognition of persons by a test of statistical
independence,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no.
11, pp. 1148–1161, 1993.

2. J. G. Daugman, "How iris recognition works," IEEE Transactions on Circuits and Systems for
Video Technology, vol. 14, no. 1, pp. 21-30, 2004.

3. Amir Azizi and Hamid Reza Pourreza, "Efficient Iris Recognition Through Improvement
of Feature Extraction and Subset Selection," (IJCSIS) International Journal of Computer Science
and Information Security, vol. 2, no. 1, June 2009.

4. www.wikipedia.com

5. Parvathi Ambalakat, "Security of Biometric Authentication Systems".

6. John Daugman, The Computer Laboratory, University of Cambridge, Cambridge CB3 0FD, UK,
"The importance of being random: statistical principles of iris recognition".

7. Li Huixian, Pang Liaojun, "A Novel Biometric-based Authentication Scheme
with Privacy Protection," 2009 Fifth International Conference on Information Assurance and
Security.

8. www.scribd.com/doc/50033821

9. Somnath Dey and Debasis Samanta, "Improved Feature Processing for Iris Biometric
Authentication System," International Journal of Electrical and Electronics Engineering, 4:2, 2010.
