Combining Bayesian Conditioning with
Distributed Neural Networks
Vallesi Germano, Dragoni Aldo Franco, Montesanto Anna
Univ. Politecnica delle Marche
Via Brecce Bianche 1, 60131 Ancona
0390712204390 / 0390712204449
g.vallesi@univpm.it, a.f.dragoni@univpm.it, a.montesanto@univpm.it
ABSTRACT
There are many methods to perform iris biometric identification, but all of them share a problem: the presence of noise in the image of the eye (eyelids, eyelashes, etc.). To remove it, many authors apply appropriate preprocessing to the image, but unfortunately this causes a loss of information. Our work aims at correctly recognizing the subject even in the presence of high rates of noise. The basic idea is to partition the image of the iris into 8 non-interleaved segments of the same size. Each segment is given to a neural network (an LVQ network), which generates prototypes with a high resistance to noise. Notwithstanding this, the 8 LVQ nets may still disagree in identifying the subject. In this paper we apply a method developed by the “belief revision” community to identify conflicts and rearrange the degrees of reliability of each expert (the LVQ nets) through a Bayesian algorithm. This estimated ranking of reliability is used to take the final decision. Our work has produced an interesting 91.67 % rate of positive identification on the Test set.
Categories and Subject Descriptors
I.2.6 [Artificial Intelligence]: Learning – connectionism and
neural nets.
General Terms
Reliability.
Keywords
Biometry Identification, Iris Detection, Hybrid Systems, Artificial
Neural Networks, LVQ, Bayesian Conditioning.
1.INTRODUCTION
Biometric automated personal identification has recently received considerable attention, with increasing emphasis on access control. Among biometric technologies, iris recognition is distinguished by its high reliability and is currently a subject of great interest in academia and industry [1], [2].

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
Conference’04, Month 1–2, 2004, City, State, Country.
Copyright 2004 ACM 1-58113-000-0/00/0004…$5.00.
Irises are particularly advantageous for use in biometric recognition systems, since they have been shown to be especially stable throughout a person’s life: the patterns in the human iris remain constant from the age of one year. The procedure is very quick and non-invasive, requiring only that a photograph be taken. The iris and retina have been shown to have higher degrees of distinctiveness than hand or finger geometry. The actual pattern within the iris is not determined by genetics and is so random that an individual’s left and right irises are as different from each other as from the irises of another person. Even monozygotic twins have completely different iris patterns. One iris contains more data than a person’s fingerprint, face, and hand combined. This data-richness means that it is possible to obtain an accurate pattern match even if the eye is partially obscured by eyelashes or eyelids.
Much work has been done on coding the human iris. According to the iris features utilized, these works can be grouped into three main categories: zero-crossing representation [3]; phase-based methods [4], [5]; texture analysis [2], [6]. In general, a typical iris recognition system involves four steps: iris imaging, iris detection, iris image quality assessment and iris recognition. Our paper addresses the last two steps, working directly on the image of the iris with a pattern recognition technique obtained from the union of neural networks (LVQ) [7] and Bayesian conditioning [8], [9], [10].
The iris database used for the training of the neural networks and for the test is the CASIA database [11].
2.CLASSIC APPROACHES TO IRIS
RECOGNITION
The French ophthalmologist Alphonse Bertillon was the first to propose the use of the iris pattern as a basis for personal identification [12], but most work has been done in the last decade. Daugman [1], [4] used multiscale quadrature wavelets to extract the texture phase structure information of the iris and generate a 2048-bit iris code, and he compared pairs of iris representations by computing their Hamming distance. He showed that for identification it is enough to have a Hamming distance lower than 0.34 with one of the iris templates in the database. Wildes et al. [2], [13] represented the iris pattern using a four-level Laplacian pyramid, and the quality of matching was determined by the normalized correlation between the acquired iris image and the stored template. Boles and Boashash [14] used zero-crossings of a 1D wavelet at various resolution levels to distinguish the texture of the iris. Sanchez-Reillo and Sanchez-Avila [15] provided a partial implementation of Daugman’s algorithm. Their other work, developing the method of Boles and Boashash by using different distance measures (such as Hamming and Euclidean distances) for matching, was reported in [3]. Lim et al. [6] used the 2D Haar wavelet, quantized the 4th-level high-frequency information to form an 87-bit code as a feature vector, and applied an LVQ neural network for classification. Tisse et al. [5] constructed the analytic image (a combination of the original image and its Hilbert transform) to demodulate the iris texture. Ma et al. [16], [17] adopted a well-known texture analysis method to capture both global and local details in the iris. Bae et al. [18] projected the iris signal onto a bank of basis vectors derived by independent component analysis and quantized the resulting projection coefficients as features. Nam et al. [19] exploited scale-space filtering to extract unique features based on the direction of concavity from an iris image. Ma et al. [20] represented the iris using points of sharp variation: they constructed a one-dimensional intensity signal and used a particular class of wavelets, with the vector of position sequences of local sharp variation points as features.
3.LVQ NEURAL NETWORKS
Learning vector quantization (LVQ) was originally introduced by Linde et al. [21] and Gray [22] as a tool for image data compression. It was later adapted by Kohonen [7] for pattern recognition as a special case of the SOM: LVQ is a supervised neural network that uses class information to move the weight vectors slightly, so as to improve the quality of the classifier decision regions. Learning is performed in a supervised, decision-controlled teaching process. The method is basically a nearest-neighbour method, since the smallest distance between the unknown vector and a set of reference vectors is sought.
In our system we used LVQ1; it consists of two layers, input and output. The dimension of the input layer is the same as that of the input vector (the iris images). Each node in the output layer represents one output class. The activation of each output node depends on the Euclidean distance between the input vector and the node’s input weight vector.
During training the weight vector is adjusted according to the output class and the target class (the desired target). Let wc be the input weight vector of the output class node and x the input vector of the target class. The weights are adjusted according to the following equations:
wc(n + 1) = wc(n) + α(n) s(n) [x(n) − wc(n)]   (1)

with 0 < α(n) ≤ 1, and

s(n) = +1 if the classification is correct, −1 if the classification is wrong   (2)

where α(n) is the learning rate at the nth epoch. It is desirable that it decreases monotonically with the number of iterations n.
The input weights of other nodes remain unchanged. Using this
algorithm, the input weight vector will get closer to the input
vector as time progresses.
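The update rule of Eqs. (1)-(2) can be sketched in a few lines of Python (a minimal illustration with our own function names, not the code used in the experiments):

```python
import numpy as np

def lvq1_update(w_c, x, alpha, correct):
    """One LVQ1 step (Eqs. 1-2): move the winning prototype w_c toward
    the input x if the classification is correct, away from it otherwise."""
    s = 1.0 if correct else -1.0
    return w_c + alpha * s * (x - w_c)

# Toy usage: a prototype pulled toward a correctly classified input.
w = np.array([0.0, 0.0])
x = np.array([1.0, 1.0])
w = lvq1_update(w, x, alpha=0.5, correct=True)   # -> [0.5, 0.5]
```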
The LVQ algorithms are sensitive to the starting point, i.e. to the initial values of the prototype vectors, which can affect both the speed of convergence and the final recognition error. Given the high speed of the k-means algorithm, it is advisable to run k-means at the beginning of the LVQ training and to assign the resulting cluster centres as the initial prototype vectors. This initialization approach can be extremely valuable for databases with a huge number of samples or with a slow rate of convergence.
The performance of LVQ algorithms depends on the size of the network, the database, the training algorithm and the initial point.
4.BELIEF REVISION MECHANISM
In this scenario we have a collective activity of a set of interacting agents, the LVQ neural networks, in which each component contributes its local beliefs, i.e. its outputs.
Belief revision is an emergent discipline of Artificial Intelligence that studies how new information changes a previously held knowledge base. The ability to revise opinions and beliefs is imperative for intelligent systems exchanging information in a dynamic world.
Let S = {S1, … , Sn} be the set of the information sources, and T =
{<S1 ; R1>, … , <Sn ; Rn>} the “reliability set”, where Ri (a real in
[0,1]) is the degree of reliability of Si, interpreted as the a-priori
probability that Si is reliable.
After all the sources have given their information, it is possible to estimate their a-posteriori degrees of reliability from the cross-examination of their outputs. Dragoni et al. [8], [9], [10] adopted Bayesian conditioning in a decision aid for judicial proceedings to help assess witness deception.
Since our sources are independent, the probability that exactly the sources belonging to a subset Φ of S are reliable is:

R(Φ) = ∏_{Si∈Φ} Ri · ∏_{Si∉Φ} (1 − Ri)   (3)

This combined reliability can be calculated for any Φ, and it holds that:

∑_{Φ∈2^S} R(Φ) = 1   (4)
If the sources belonging to a certain Φ give incompatible information, then R(Φ) must be set at zero. So what we do is:
• finding all the minimal subsets of contradictory sources;
• finding all their supersets;
• summing up into RContradictory the reliability of all these sets;
• putting at zero all their reliabilities;
• dividing the reliability of each non-contradictory set of sources by 1 − RContradictory.
The last step assures that constraint (4) is still satisfied; it is well known as “Bayesian conditioning”. The new degree of reliability of each Φ is called “revised reliability” and we label it NR(Φ). The revised reliability NRi of each source Si is defined as the sum of the revised reliabilities of every Φ containing Si. An important feature of this way of recalculating the sources’ reliability is that if Si is involved in contradictions then NRi ≤ Ri, otherwise NRi = Ri.
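The conditioning steps above can be sketched as follows, assuming a brute-force enumeration of all 2^n subsets of sources (function and variable names are ours, not from the original system):

```python
from itertools import combinations

def revise_reliability(R, contradictory_minimal_sets):
    """Bayesian conditioning over source subsets (Eqs. 3-4).
    R: list of a-priori reliabilities R_i.
    contradictory_minimal_sets: minimal sets of mutually contradicting
    source indices; every superset of one of them gets reliability zero."""
    n = len(R)
    subsets = [set(phi) for k in range(n + 1)
               for phi in combinations(range(n), k)]

    def mass(phi):                      # Eq. (3)
        m = 1.0
        for i in range(n):
            m *= R[i] if i in phi else (1.0 - R[i])
        return m

    bad = lambda phi: any(ms <= phi for ms in contradictory_minimal_sets)
    r_contr = sum(mass(phi) for phi in subsets if bad(phi))
    # Zero out contradictory subsets, renormalize the rest (conditioning).
    nr_sets = {frozenset(phi): (0.0 if bad(phi)
                                else mass(phi) / (1.0 - r_contr))
               for phi in subsets}
    # Revised reliability of each source: sum over subsets containing it.
    return [sum(v for phi, v in nr_sets.items() if i in phi)
            for i in range(n)]

# Two equally reliable sources that contradict each other when both are
# assumed reliable: each one drops from 0.8 to 4/9 ≈ 0.444.
NR = revise_reliability([0.8, 0.8], [{0, 1}])
```

Note that, as stated above, sources not involved in any contradiction keep their a-priori reliability unchanged.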
5.IMPLEMENTATION
This work is an iris recognition system based on neural networks, but unlike previous works [6] the inputs of the networks are the images of the irises and not their transformations (2D Haar wavelet transform, 2D Gabor filter, etc.).
We used the CASIA database [11], from which we randomly took 12 subjects (out of the 411 in the database) to form the core group on which to work.
Daugman [1], [4] suggested a standard Cartesian-to-polar transform that remaps each pixel in the iris area into a pair of polar coordinates (r, θ), where r and θ lie in the intervals [0, 1] and [0, 2π] respectively. The remapping of the iris image I(x, y) from raw coordinates (x, y) to the dimensionless non-concentric polar coordinate system (r, θ) can be represented as:

I(x(r, θ), y(r, θ)) → I(r, θ)   (5)

where x(r, θ) and y(r, θ) are defined as linear combinations of both the set of pupillary boundary points (xp(θ), yp(θ)) and the set of limbus boundary points along the outer perimeter of the iris (xs(θ), ys(θ)):

x(r, θ) = (1 − r) xp(θ) + r xs(θ)   (6)

y(r, θ) = (1 − r) yp(θ) + r ys(θ)   (7)

The normalized iris image can be displayed as a rectangular image, with the radial coordinate (r) on the vertical axis and the angular coordinate (θ) on the horizontal axis. In this representation the pupillary boundary is at the top of the image and the limbic boundary at the bottom. Our system works with a ribbon of 512×64 pixels (512 pixels along θ and 64 pixels along r), as in Figure 2.
To assess which parts of this image are most significant, the 512×64 ribbon is cut into 8 equal parts of 64×64 pixels. Each of the 8 networks is associated with one of these parts and is trained to become an expert on that part. The networks will therefore probably reach different degrees of expertise, since the different parts of the ribbon have different levels of noise; Bayesian conditioning will be applied precisely to estimate their final degrees of expertise.
The scheme of the work proposed in this paper consists of three levels:
A. Iris recognition and cut
B. Training of the neural networks LVQi
C. Bayesian conditioning
A detailed description of these steps is given below.
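Eqs. (5)-(7) amount to the following “rubber-sheet” remapping (a simplified sketch with nearest-neighbour sampling; the circle parameters are assumed to come from the segmentation step, and the function names are ours):

```python
import numpy as np

def normalize_iris(img, pupil, iris, out_h=64, out_w=512):
    """Daugman-style polar remap (Eqs. 5-7) onto a 512x64 ribbon.
    pupil and iris are (x0, y0, r0) circles found by segmentation."""
    xp, yp, rp = pupil
    xs, ys, rs = iris
    ribbon = np.zeros((out_h, out_w), dtype=img.dtype)
    for i in range(out_h):                      # r in [0, 1]
        r = i / (out_h - 1)
        for j in range(out_w):                  # theta in [0, 2*pi)
            t = 2 * np.pi * j / out_w
            # Boundary points on each circle, combined by Eqs. (6)-(7).
            x = (1 - r) * (xp + rp * np.cos(t)) + r * (xs + rs * np.cos(t))
            y = (1 - r) * (yp + rp * np.sin(t)) + r * (ys + rs * np.sin(t))
            ribbon[i, j] = img[int(round(y)) % img.shape[0],
                               int(round(x)) % img.shape[1]]
    return ribbon
```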
5.1Iris Recognition
The first preprocessing step is to determine the iris edges: the inner edge (with the pupil) and the outer edge (with the sclera), Figure 1(c). Both the inner boundary and the outer boundary of a typical iris can approximately be taken as circles, but these two circles are usually not concentric.
Figure 2. Iris Polar Transformation
This image is now cut into 8 squares (64×64 pixels), as shown in Figure 3, and these new images are used to train the respective 8 LVQ neural networks.
Figure 1. Edge Detection
The Canny method [23] was applied to these images for edge detection, and the edge image was then thresholded. This operation may produce broken points, spurious edges, and edges of various thicknesses in the edge image. The image was cleaned using some morphological operations: random dots were removed, small edge lines deleted, and broken lines connected via the “close” procedure of binary morphology. Figure 1(b) is the edge image after these procedures. There is a clear circle in the edge image that represents the outer edge of the pupil (the inner boundary of the iris); the edges above and below the circle are the edges of the eyelids and eyelashes. The Hough circle transform [24] was then applied to find the best circle and to estimate the circle parameters (centre (x0, y0) and radius r0) for the pupillary and iris boundaries, as depicted in Figure 1(c).
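The Hough circle transform used here can be illustrated with a brute-force voting scheme (a didactic sketch; the paper does not specify the actual implementation or its parameters):

```python
import numpy as np

def hough_circle(edge_points, radii, shape):
    """Brute-force Hough circle transform: every edge pixel votes for the
    centres of all circles of the candidate radii passing through it;
    the best (x0, y0, r0) is the accumulator maximum."""
    acc = np.zeros((len(radii), shape[0], shape[1]), dtype=np.int32)
    thetas = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    for (y, x) in edge_points:
        for k, r in enumerate(radii):
            ys = np.round(y - r * np.sin(thetas)).astype(int)
            xs = np.round(x - r * np.cos(thetas)).astype(int)
            ok = (ys >= 0) & (ys < shape[0]) & (xs >= 0) & (xs < shape[1])
            acc[k, ys[ok], xs[ok]] += 1
    k, y0, x0 = np.unravel_index(np.argmax(acc), acc.shape)
    return x0, y0, radii[k]
```

In practice the edge points would come from the thresholded Canny image described above.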
Once the iris region is successfully segmented, the next stage is to find a transformation that projects the iris region onto a fixed two-dimensional area, so as to prepare it for the comparison process. The normalization process projects the iris region onto a ribbon of constant dimensions, so that two images of the same iris taken under different conditions have their characteristic features at the same spatial locations.
Figure 3. Iris Ribbon Cut
5.2Neural Network Training
LVQ is a supervised neural network that uses class information to move the weight vectors slightly, so as to improve the quality of the classifier decision regions.
The input layer has as many neurons as the input iris image has pixels (64×64) in the training pattern. Experimental evidence shows that it is sufficient to fix the definition of each class to the first neighbourhood, so we have nine nodes for each class: one centroid and eight neighbours. In conclusion, the output layer is made of 108 nodes.
The Training set is composed of 12 images (left eye, taken from the CASIA database), one for each subject to be classified.
The learning phase, based on Eqs. 1 and 2, evolves for a maximum of 150,000 epochs, using a learning rate calculated with the following equation:

α(t) = α0 e^(−λt)   (8)

where α(t) decreases monotonically with the number of iterations t (α0 = 0.35 and λ = 0.0000009, values obtained after a series of tests to optimize the networks).
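Under this reading of Eq. (8), the learning-rate schedule is a simple exponential decay (the constants are our reconstruction of the garbled symbols in the original):

```python
import math

def alpha(t, a0=0.35, lam=9e-7):
    """Eq. (8): exponentially decaying learning rate, alpha(t) = a0 * e^(-lam*t),
    with a0 = 0.35 and lam = 0.0000009 as reported for the trained networks."""
    return a0 * math.exp(-lam * t)

# The rate decreases monotonically over the 150,000 training epochs:
# alpha(0) = 0.35, alpha(150_000) ≈ 0.306.
```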
To test our work, we took 5 different images of each subject; with these 60 images we built the Test set, drawing 100 random subjects from this set of images.
Table 2. Bayesian Conditioning results

Neural Network   Reliability Lay1   Reliability Lay1-2   Reliability Lay1-2-3
LVQ1             76.28 %            88.45 %              89.93 %
LVQ2             71.35 %            77.32 %              85.42 %
LVQ3             73.08 %            79.35 %              86.63 %
LVQ4             78.01 %            87.32 %              88.27 %
LVQ5             66.35 %            77.17 %              82.71 %
LVQ6             53.28 %            62.58 %              72.78 %
LVQ7             66.36 %            75.54 %              81.36 %
LVQ8             66.65 %            77.49 %              83.57 %
The performance obtained on the training set is 100% true identification for each network, while the performance obtained on the test set is shown in Table 1.
5.3Bayesian Conditioning
In this ANN model, each node is associated (by Euclidean distance) with a subject of the Training set. During the test, each node of each network has a distance associated with the input. As the response of a network we take the 3 nodes closest to the input, and from these nodes we build 3 layers (Lay1, Lay1-2, Lay1-2-3). The LVQ networks do not always agree in their responses: in some cases one or more of them recognizes one subject instead of another (presence of noise). To overcome these situations of disagreement between the networks we introduced Bayesian conditioning [8], [9], [10], which is used to find maximally consistent subsets of the statements produced by the LVQ networks, eliminating all information with low credibility and selecting only the statements made by the networks with greater reliability (Eq. 3, 4).
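The construction of the three layers from a network’s response can be sketched as follows (a simplified illustration that keeps one distance per subject, rather than the nine nodes per class used above; names are ours):

```python
def layered_response(distances, k=3):
    """For one network, take the k output nodes closest to the input and
    build the cumulative layers Lay1, Lay1-2, Lay1-2-3 of candidate subjects.
    distances: {subject: Euclidean distance of its node from the input}."""
    ranked = sorted(distances, key=distances.get)[:k]
    return [set(ranked[:i + 1]) for i in range(len(ranked))]

layers = layered_response({'s1': 0.2, 's2': 0.9, 's3': 0.4, 's4': 1.3})
# layers[0] == {'s1'}; layers[1] == {'s1', 's3'}; layers[2] == {'s1', 's3', 's2'}
```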
Table 1. Test set performance of LVQ networks

Neural Network   True Identification   False Identification
LVQ1             88.3 %                11.7 %
LVQ4             86.7 %                13.3 %
LVQ3             83.3 %                16.7 %
LVQ2             81.7 %                18.3 %
LVQ8             81.7 %                18.3 %
LVQ5             80 %                  20 %
LVQ6             76.7 %                23.3 %
LVQ7             76.7 %                23.3 %
Average value    81.89 %               18.11 %
Initially all the networks have the same reliability; at every conflict, the networks that fall into the minority lose credibility with respect to the others, so that with every new image the reliabilities of the networks are updated.
The results obtained on the Test set by the application of Bayesian conditioning are shown in Table 2.
6.EXPERIMENTAL RESULTS
For each random subject of our Test set (100 random subjects drawn from a set of 60 images), all the LVQ networks produce their statements (3 nodes). Once the reliability of each network has been calculated (as shown in Table 2), the identity of the subject is established through the Inclusion-based method [8].
To evaluate the results obtained in our work, we applied Daugman’s algorithm to the same Test set. Note, however, that this means we compared our work against a re-implementation of Daugman’s algorithm as described in his earliest publications; thus the algorithm used may not be exactly the same as Daugman’s and may not give the same performance on the same dataset.
6.1Inclusion Based
The Inclusion-based algorithm sorts and selects the elements of the set of conflicts generated by the networks, B = {B1, …, Bn}. The Inclusion-based method always eliminates the least credible one among conflicting pieces of knowledge.
Let B′ = B′1 ∪ … ∪ B′n and B′′ = B′′1 ∪ … ∪ B′′n be two consistent subsets of B, where B′i = B′ ∩ Bi and B′′i = B′′ ∩ Bi; then B′′ is preferred to B′ iff there exists a stratum i such that B′i ⊂ B′′i and, for any j < i, B′j = B′′j.
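Under this reading of the ordering, the comparison of two consistent subsets can be sketched as follows (an illustration with hypothetical names, not the authors’ implementation):

```python
def inclusion_preferred(b1_strata, b2_strata):
    """Inclusion-based preference between two consistent subsets of B,
    given as lists of per-stratum sets (most credible stratum first):
    b1 is preferred iff, at the first stratum where they differ, it keeps
    a strict superset of what b2 keeps."""
    for s1, s2 in zip(b1_strata, b2_strata):
        if s1 != s2:
            return s1 > s2          # strict superset at the first difference
    return False

# b1 keeps both statements of the most credible stratum, b2 keeps only one,
# so b1 is preferred regardless of the later strata:
assert inclusion_preferred([{'a', 'b'}, {'c'}], [{'a'}, {'c', 'd'}])
```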
6.2Daugman’s Approach
Given an image of the eye, Daugman’s work approximated the pupillary and limbic boundaries of the eye as circles. Thus a boundary can be described by three parameters: the radius r and the coordinates of the centre of the circle, x0 and y0. He proposed an integro-differential operator for detecting the iris boundary by searching this parameter space [1].
The next step, describing the features of the iris in a way that facilitates comparison between irises, is the introduction of the polar transform [1], [4]. After this transformation, Daugman uses
convolution with 2-dimensional Gabor filters to extract the texture
from the normalized iris image. In his system, the filters are
“multiplied by the raw image pixel data and integrated over their
domain of support to generate coefficients which describe, extract,
and encode image texture information.” [25].
To match the texture of an image against the stored representation
of other irises, Daugman chose to quantize each filter’s phase
response into a pair of bits in the texture representation. Each
complex coefficient was transformed into a two-bit code: the first
bit was equal to 1 if the real part of the coefficient was positive,
and the second bit was equal to 1 if the imaginary part of the
coefficient was positive. Thus after analyzing the texture of the
image using the Gabor filters, the information from the iris image
was codified in a 256 byte (2048 bit) binary code (Iris code).
To compare the iris codes, Daugman uses a metric called the Hamming distance, which measures the fraction of bits for which two iris codes disagree. The minimum computed normalized Hamming distance is assumed to correspond to the correct alignment of the two images.
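The matching step can be sketched as follows (the number of trial shifts is illustrative; Daugman’s exact alignment protocol is not reproduced here):

```python
import numpy as np

def hamming_distance(code_a, code_b):
    """Fraction of disagreeing bits between two 2048-bit iris codes."""
    return np.count_nonzero(code_a != code_b) / code_a.size

def match(code_a, code_b, n_shifts=8):
    """Compare the codes under several rotations (bit shifts) and keep the
    minimum distance, taken as the correct alignment of the two images."""
    return min(hamming_distance(np.roll(code_a, s), code_b)
               for s in range(-n_shifts, n_shifts + 1))

# Two codes differing in 512 of 2048 bits:
a = np.zeros(2048, dtype=bool)
b = a.copy(); b[:512] = True
hamming_distance(a, b)   # -> 0.25
```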
6.3Matching the two methods
Table 3 shows the results obtained from the application of Bayesian conditioning with the Inclusion-based method and from Daugman’s algorithm.
Table 3. Results

Method            Lay1      Lay1-2    Lay1-2-3
Inclusion Based   86.67 %   88.33 %   91.67 %
Daugman           93.33 %

The results contained in Table 3 show how the use of multiple layers (Lay1, Lay1-2, Lay1-2-3) allows better recognition results to be achieved.
7.CONCLUSION
From Table 1 we can see that the neural networks are unable to identify 100% of their subjects, because of the presence of noise. In particular, some networks are more subject to noise than others, so the neural networks alone are not able to recognize the subject, even though they have an average recognition rate of 81.89%.
This paper shows that, although the LVQ networks which constitute our group of experts are not always unanimous in recognizing the subject, the application of a Bayesian conditioning algorithm to this set of neural networks obtains interesting results in the identification of subjects.
We then compared the rankings of the neural networks on the test set, for which the correctness of each response is known (Table 1), with the reliabilities of the networks calculated through Bayesian conditioning (Table 2). In the latter case the reliability values are totally disconnected from any knowledge of the subject under review (unknown to the Bayesian conditioning) and are obtained exclusively from the processing of conflicts. The results of this comparison are shown in Table 4, in which the neural networks are ordered by their percentages, with the highest rates at the top and the lowest at the bottom. From this table we can see that, although the percentages differ, the final order of the networks is very similar, showing that the networks less susceptible to noise are also the more reliable ones.

Table 4. Comparison results

Neural Network   Reliability Lay1   Reliability Lay1-2   Reliability Lay1-2-3
LVQ1 (88.3 %)    LVQ4 (78.01 %)     LVQ1 (88.45 %)       LVQ1 (89.93 %)
LVQ4 (86.7 %)    LVQ1 (76.28 %)     LVQ4 (87.32 %)       LVQ4 (88.27 %)
LVQ3 (83.3 %)    LVQ3 (73.08 %)     LVQ3 (79.35 %)       LVQ3 (86.63 %)
LVQ2 (81.7 %)    LVQ2 (71.35 %)     LVQ8 (77.49 %)       LVQ2 (85.42 %)
LVQ8 (81.7 %)    LVQ8 (66.65 %)     LVQ2 (77.32 %)       LVQ8 (83.57 %)
LVQ5 (80.0 %)    LVQ7 (66.36 %)     LVQ5 (77.17 %)       LVQ5 (82.71 %)
LVQ6 (76.7 %)    LVQ5 (66.35 %)     LVQ7 (75.54 %)       LVQ7 (81.36 %)
LVQ7 (76.7 %)    LVQ6 (53.28 %)     LVQ6 (62.58 %)       LVQ6 (72.78 %)

Future developments of this work will be: the optimization of the parameters used by the neural networks (LVQ) for training, a study of other types of neural networks to be used as experts to improve the results, and the implementation of a system for recognizing the face of a person.
8.REFERENCES
[1] Daugman, J.G. High confidence visual recognition of
persons by a test of statistical independence, IEEE
Transactions on Pattern Analysis and Machine
Intelligence, vol. 15, no. 11, 1993, pp. 1148–1161.
[2] Wildes, R.P. Asmuth, J. C. Green, G. L. et al., A
machinevision system for iris recognition, Machine
Vision and Applications, vol. 9, no. 1, 1996, pp. 1–8.
[3] Sanchez-Avila, C. and Sanchez-Reillo, R. Iris-based
biometric recognition using dyadic wavelet transform,
IEEE Aerospace and Electronic Systems Magazine,
vol. 17, no. 10, 2002, pp. 3–6.
[4] Daugman, J.G. Demodulation by complex-valued
wavelets for stochastic pattern recognition,
International Journal of Wavelets, Multiresolution,
and Information Processing, vol. 1, no. 1, 2003, pp.
1–17.
[5] Tisse, C. Martin, L. Torres, L. and Robert, M. Person
identification technique using human iris recognition,
in Proceedings of the 15th International Conference
on Vision Interface (VI ’02), Calgary, Canada, May
2002, pp. 294–299.
[6] Lim, S. Lee, K. Byeon, O. and Kim, T. Efficient iris
recognition through improvement of feature vector
and classifier, ETRI Journal, vol. 23, no. 2, 2001, pp.
61–70.
[7] Kohonen, T. Learning vector quantization, in Self-Organizing Maps, Springer Series in Information Sciences. Berlin, Heidelberg, New York: Springer-Verlag, 1995, 3rd ed.
[8] Dragoni, A.F. Belief revision: from theory to practice,
The Knowledge Engineering Review, 2001, pp.
147-179.
[9] Dragoni, A.F. Giorgini, P. and Nissan, E. Belief
revision as applied within a descriptive model of jury
deliberations, Information and Communications
Technology Law, 2001, pp. 53-65.
[10] Dragoni, A.F. Animali, S. Maximal Consistency and
Theory of Evidence in the Police Inquiry Domain, in
CYbernetics and Systems: An International Journal,
vol. 34, n. 6-7, Taylor & Francis, 2003, pp. 419-465.
[11] CASIA Iris image database, institute of automation
(IA). Beijing, China: Chinese Academy of Sciences.
http://www.cbsr.ia.ac.cn/IrisDatabase.htm.
[12] Bertillon, A. La couleur de l’Iris, Rev. Of Science, vol.
36, no.3, France, 1885, pp. 65-73.
[13] Wildes, R.P. Iris recognition: an emerging biometric
technology, Proceedings of the IEEE, vol. 85, no. 9,
1997, pp. 1348–1363.
[14] Boles, W. and Boashash, B. A human identification
technique using images of the iris and wavelet
transform, IEEE Transactions on Signal Processing,
vol. 46, no. 4, 1998, pp. 1085–1088.
[15] Sanchez-Reillo, R. and Sanchez-Avila, C. Iris
recognition with low template size, in Proceedings of
the 3rd International Conference on Audio- and
Video-Based Biometric Person Authentication
(AVBPA ’01), Halmstad, Sweden, June 2001, pp. 324–
329.
[16] Ma, L. Wang, Y. and Tan, T. Iris recognition using
circular symmetric filters, in Proceedings of the 16th
International Conference on Pattern Recognition, vol.
2, Quebec City, Quebec, Canada, August 2002, pp.
414–417.
[17] Ma, L. Tan, T. Wang, Y. and Zhang, D. Personal
identification based on iris texture analysis, IEEE
Transactions on Pattern Analysis and Machine
Intelligence, vol. 25, no. 12, 2003, pp. 1519– 1533.
[18] Bae, K. Noh, S.I. and Kim, J. Iris feature extraction using independent component analysis, in Proceedings of the 4th International Conference on Audio- and Video-Based Biometric Person Authentication (AVBPA ’03), Guildford, UK, June 2003, pp. 838–844.
[19] Nam, K.W. Yoon, K.L. Bark, J.S. and Yang, W.S. A feature extraction method for binary iris code construction, in Proceedings of the 2nd International Conference on Information Technology for Application (ICITA ’04), Harbin, China, January 2004.
[20] Ma, L. Tan, T. Wang, Y. and Zhang, D. Efficient iris recognition by characterizing key local variations, IEEE Transactions on Image Processing, vol. 13, no. 6, 2004, pp. 739–750.
[21] Linde, Y. Buzo, A. and Gray, R.M. An algorithm for vector quantizer design, IEEE Transactions on Communications, vol. COM-28, 1980, pp. 84–95.
[22] Gray, R.M. Vector quantization, IEEE ASSP Magazine, vol. 1, 1984, pp. 4–29.
[23] Canny, J. A computational approach to edge detection, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-8, 1986, pp. 679–714.
[24] Hough, P.V.C. Methods and means to recognize complex patterns, U.S. Patent 3,069,654, 1962.
[25] Daugman, J. Biometric personal identification system based on iris analysis, U.S. Patent 5,291,560, March 1994.