Person Identification Using Full-Body Motion and Anthropometric Biometrics from Kinect Videos
Brent C. Munsell¹, Andrew Temlyakov², Chengzheng Qu², and Song Wang²

¹ Claflin University, Orangeburg, SC 29115
² University of South Carolina, Columbia, SC 29208
bmunsell@claflin.edu, {temlyaka,quc,songwang}@cec.sc.edu
Abstract. For person identification, motion and anthropometric biometrics are known to be less sensitive to photometric differences and more robust to obstructions such as glasses, hair, and hats. Existing gait-based methods depend on the accurate identification and acquisition of the gait cycle. This typically requires the subject to repeatedly perform a single action in a costly motion-capture facility, or 2D videos with simple backgrounds where the person can be easily segmented and tracked. These manufactured requirements limit the use of gait-based biometrics in real scenarios that may contain a variety of actions with varying levels of complexity. We propose a new person identification method that uses motion and anthropometric biometrics acquired from an inexpensive Kinect RGBD sensor. Different from previous gait-based methods, we use all the body joints found by the Kinect SDK to analyze the motion patterns and anthropometric features over the entire track sequence. We show the proposed method can identify people that perform different actions (e.g. walk and run) with varying levels of complexity. When compared to a state-of-the-art gait-based method that uses depth images produced by the Kinect sensor, the proposed method demonstrates better person identification performance.
1 Introduction
The development of accurate and efficient person identification methods is a major area of research in the computer vision, biometric, surveillance, and security communities. In general, person identification is typically achieved by measuring and analyzing biological features, or biometrics, where a biometric is some distinguishing characteristic used for recognition. For example, high impact research has been conducted that identifies people with similar facial, fingerprint, or iris biometrics [1–6] in 2D images or videos. To date, person identification systems that incorporate one, or more [7, 8], of these biometrics tend to dominate the community. However, the person being identified is usually required to physically touch the sensor, or cooperate with the sensor when acquiring data. Also, in real imagery the accurate identification and location of these biometrics can
be sensitive to photometric differences and obstructions (e.g., glasses, hair, hats), which may severely degrade recognition performance.

To overcome these limitations, person identification methods that use a lower extremity (i.e. below the hips) gait biometric have been proposed that attempt to identify people with similar lower extremity gait kinematics [9–11], stride and cadence [12, 13], and mechanics [14, 15]. Even though these gait-based methods are less restrictive and more robust to obstructions, they do require the accurate detection of the motion region and the coherent segmentation of the object (i.e. person boundary or silhouette) over a specified time sequence to isolate the gait cycle. In [16] the isolated gait cycle is used to construct normalized (i.e. temporally averaged) energy volumes, in [17] the isolated gait cycle is used to align the motion samples, and in [18] gait-specific features are normalized using the isolated gait cycle. In real video footage that may contain a variety of actions with varying levels of complexity, isolating the gait cycle can be computationally expensive and error prone. If the gait cycle is not properly detected and isolated, gait alignment and normalization errors are likely to be introduced, which may result in very poor recognition performance.
We present a novel person identification method that uses full-body (upper and lower extremity) motion and anthropometric biometrics derived from Kinect videos. The Kinect sensor was chosen because it is an inexpensive, easy-to-use, and accurate 3D motion sensor that is not sensitive to photometric differences. The major contribution is three-fold: 1. We introduce a new motion biometric that examines the coordinated motion of the entire body when a person performs a basic action. 2. The derived motion biometric examines the periodic motion of the tracked joints over the entire track sequence, making the proposed motion biometric more robust than those derived by gait-based methods that are sensitive to gait cycle isolation errors. 3. We introduce an integrated anthropometric biometric that boosts biometric authentication if the motion biometric is unable to distinguish two people with similar full-body motion patterns. Unlike existing methods that may use anthropometric data to improve object tracking [18] or pose estimation [17] prior to biometric authentication, the proposed motion and anthropometric biometrics are combined to form a unified person identity classifier.
In the experiments, challenging scenarios are performed that study the subtle differences in motion among 10 different people when they perform 2 basic actions (walking and running). In total we collect 100 short Kinect videos¹ (40 videos for training and 60 for testing), resulting in an average ROC Equal Error Rate of 13% and an average Cumulative Match Curve Rank-1 identification rate of 90%. Using the same Kinect data set, the proposed method is compared to the Gait Energy Volume (GEV) [16], a state-of-the-art lower extremity gait-based method, and the experiments show our method outperforms GEV. The remainder of this paper is organized as follows: in Section 2 the proposed method is described in detail, in Section 3 experiments are performed that evaluate person identification performance, and in Section 4 a brief conclusion is given.

¹ Roughly 30 sec of footage at 30 fps.
2 Proposed Method
In this section we develop new classification methods that attempt to identify the basic action and identity of unknown persons in Kinect videos. Conceptually, the proposed method is accomplished using the two-stage classification system illustrated in Fig. 1(b).

Fig. 1. (a) The 20 skeletal joints found using the Kinect SDK: 1 = Hip Center, 2 = Spine, 3 = Shoulder Center, 4 = Head, 5 = Shoulder Right, 6 = Elbow Right, 7 = Wrist Right, 8 = Hand Right, 9 = Shoulder Left, 10 = Elbow Left, 11 = Wrist Left, 12 = Hand Left, 13 = Hip Right, 14 = Knee Right, 15 = Ankle Right, 16 = Foot Right, 17 = Hip Left, 18 = Knee Left, 19 = Ankle Left, 20 = Foot Left. (b) Two-stage classification system where the first stage recognizes the action and the second stage recognizes the identity of an unknown person in a Kinect video.

In the first stage, a test Kinect video with an unknown action is input into a trained action classifier that is capable of recognizing two basic actions: walking and running. To accomplish this, a training set of Kinect videos is collected that captures subjects, of different genders, executing the two basic actions. For each frame, in each training video, the normalized 3D locations of the 20 skeletal joints, illustrated in Fig. 1(a), are projected into a high-dimensional space, and hyperplanes that best separate the two actions are found by a Support Vector Machine (SVM). Given a test Kinect video, the learned hyperplanes allow us to classify each frame in the video, and then the unknown action is recognized using a majority vote algorithm.
In the second stage, the test Kinect video is input into an identity classifier matched to the recognized action. For each basic action, we train n human identity classifiers by considering motion patterns and human anthropometric measures for n different people. In particular, the motion biometric is trained using Kinect videos that describe the radial, azimuth, and elevation motion patterns of the 20 skeletal joints performing the same action multiple times. Likewise, the anthropometric biometric is trained using the same Kinect videos, however this biometric is a statistical model that describes the proportions between the 20 skeletal joints. Finally, n identity costs $c_1, c_2, \ldots, c_n$ are calculated and the unknown person in the test Kinect video is recognized by finding the cost with the smallest value.
2.1 Action Classification
Let $V^w = \{V_i^w;\, i = 1, 2, \ldots, m\}$ be a set of walking and $V^r = \{V_i^r;\, i = 1, 2, \ldots, m\}$ be a set of running training Kinect videos, where $V_i^w = (F_{i1}^w, F_{i2}^w, \ldots, F_{in}^w)$ is an ordered sequence of image frames that capture various people performing normal walk actions. Using each frame in $V_i^w$, a $60 \times n$ dimension skeletal matrix $S_i^w = [s_{i1} \cdots s_{ik} \cdots s_{in}]$ is constructed, where $s_{ik}$ is a column vector that defines the 3D locations of the 20 skeletal joints in the $k$th frame². This is repeated for each of the $m$ videos in $V^r$, and the resulting skeletal matrices are concatenated to form one matrix $S = [S_1^w\, S_2^w \cdots S_m^w\, S_1^r\, S_2^r \cdots S_m^r]$ with dimension $60 \times 2nm$. The combined skeletal matrix $S$ is decomposed into a set of matrices using singular value decomposition, $S = U \Sigma D^T$, where $D$ is a $2nm \times 2nm$ dimension matrix of right singular vectors, $\Sigma$ is a $60 \times 2nm$ dimension diagonal matrix of singular values, and $U$ is a $60 \times 60$ dimension matrix of left singular vectors. Since most of the singular values are very small or zero, only the 10 largest singular values are considered. Therefore the reduced space is now represented by $\tilde{\Sigma}$ that has dimension $10 \times 10$, $\tilde{D}$ that has dimension $2nm \times 10$, and $\tilde{U}$ that has dimension $60 \times 10$. The reduced dimension row vectors in $\tilde{D}$ are then used to train a multi-class SVM, and because the data is not linearly separable a non-linear Gaussian Radial Basis Function kernel learns the hyperplanes used in classification.

Given a test Kinect video $V$, the reduced dimension matrices $\tilde{\Sigma}$ and $\tilde{U}$ are used to insert each frame $\{F_k\}_{k=1}^n$ into the space spanned by the row vectors in $\tilde{D}$ using $\tilde{d}_k = s_k^T \tilde{U} \tilde{\Sigma}^{-1}$, where vector $s_k$ contains the origin-translated 3D locations of the 20 skeletal joints in frame $F_k$. Once each frame in $V$ is inserted into the high dimensional space, it is labeled as a walk or run action by the trained SVM. The action is then recognized using a majority vote algorithm by determining the action that appears most often among all $n$ frames.

² Translation differences are removed by picking joint-2 as the origin $(0, 0, 0)$ and translating the remaining 19 joints relative to this joint, e.g. $s_{ik} = ((x_{1k} - x_{2k}), (y_{1k} - y_{2k}), (z_{1k} - z_{2k}), \ldots, (x_{20k} - x_{2k}), (y_{20k} - y_{2k}), (z_{20k} - z_{2k}))^T$.
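To make the training and projection steps concrete, the following sketch shows one way they could be implemented with NumPy and scikit-learn; the function names, array layout, and 0/1 label encoding are illustrative assumptions, not the authors' code.

```python
import numpy as np
from sklearn.svm import SVC

def reduce_and_train(S, labels, k=10):
    """Sketch of the action-classifier training step (assumed implementation).

    S      : 60 x (2*n*m) matrix; each column holds the origin-translated 3D
             locations of the 20 skeletal joints in one frame.
    labels : per-frame action labels (0 = walk, 1 = run), length 2*n*m.
    k      : number of singular values kept (10 in the paper).
    """
    # S = U * Sigma * D^T; numpy returns the largest singular values first.
    U, sigma, Dt = np.linalg.svd(S, full_matrices=False)
    U_k = U[:, :k]                           # 60 x k
    sigma_inv_k = np.diag(1.0 / sigma[:k])   # k x k
    D_k = Dt[:k, :].T                        # (2*n*m) x k reduced row vectors

    # RBF-kernel SVM learns hyperplanes separating walk from run frames.
    svm = SVC(kernel='rbf')
    svm.fit(D_k, labels)
    return svm, U_k, sigma_inv_k

def classify_action(svm, U_k, sigma_inv_k, S_test):
    """Project each test frame via d_k = s_k^T * U * Sigma^{-1}, label it with
    the trained SVM, then majority-vote over the per-frame labels."""
    D_test = S_test.T @ U_k @ sigma_inv_k
    frame_labels = svm.predict(D_test)
    return np.bincount(frame_labels.astype(int)).argmax()
```

The majority vote in classify_action mirrors the per-frame labeling described above.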
2.2 Identity Classification
Given a test Kinect video of an unknown person $X$ performing a recognized action, the identity cost
$$\min_{P} \{\, \Delta_M(X, P) \cdot \Delta_A(X, P) \,\}$$
is calculated for each person $P$ in the training data set, where $\Delta_M(X, P)$ is the motion difference and $\Delta_A(X, P)$ is the anthropometric difference. The identity of person $X$ is recognized when the person with the least cost is found.
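Read as code, the recognition step is a minimum search over the trained persons; a minimal sketch, assuming the product combination above and the motion_difference and anthropometric_difference helpers sketched later in this section:

```python
def identify(x_hists, x_model, people):
    """Return the name of the training person P minimizing the identity cost.

    x_hists : (h_r, h_theta, h_phi) motion histograms for the unknown person X
    x_model : (p_mean, D_cov) anthropometric Gaussian for X
    people  : iterable of (name, hists, model) tuples for trained persons P
    """
    def cost(entry):
        name, p_hists, p_model = entry
        # Combined identity cost: motion difference times anthropometric difference.
        return (motion_difference(x_hists, p_hists)
                * anthropometric_difference(x_model[0], x_model[1],
                                            p_model[0], p_model[1]))
    return min(people, key=cost)[0]
```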
The motion biometric is trained as follows: Let $V^w = (F_1^w, F_2^w, \ldots, F_n^w)$ be a training Kinect video that captures a person performing the walk action. As in Section 2.1, $S^w = [s_1 \cdots s_k \cdots s_n]$ is constructed using each frame in the video, however column vector $s_k$ now defines the radius $r$, azimuth $\theta$, and elevation $\phi$ values of the 20 skeletal joints in the $k$th frame³. $S^w$ is then separated into three different $20 \times n$ dimension matrices, namely $M_r$ the radius matrix, $M_\phi$ the elevation matrix, and $M_\theta$ the azimuth matrix. Specifically, row vector $r_1 = (r_{11}, r_{12}, \ldots, r_{1n})$ defines the radial motion, $\theta_1 = (\theta_{11}, \theta_{12}, \ldots, \theta_{1n})$ defines the angular azimuth motion, and $\phi_1 = (\phi_{11}, \phi_{12}, \ldots, \phi_{1n})$ defines the angular elevation motion of joint-1 when the person executes the walk action.

³ $s_k = (r_{1k}, \theta_{1k}, \phi_{1k}, \ldots, r_{20k}, \theta_{20k}, \phi_{20k})^T$.
For each motion matrix, a motion histogram is constructed using Algorithm 1. For example, given $M_r$ and $k_f$, in line 3 an $N/2$ dimension vector $h = (h_1, h_2, \ldots, h_{N/2})$ is created, where $F_s$ is the sampling frequency and $h_1$ corresponds to $F_s/N = 0.5$ Hz. Likewise, $h_2 = 1$ Hz, $h_3 = 1.5$ Hz, and so forth. In line 5 an $N$-point Discrete Fourier Transform (DFT) is performed using the radial values in row $i$, and then in line 6 a $k_f$ dimension vector $b$ of bin values is found for the top $k_f$ frequencies that have the largest magnitude (sorted in descending order). In lines 7-9, for each frequency bin the corresponding histogram bin is incremented by one, and then on line 11 the radial motion histogram is normalized by $n$, the total number of rows in the motion matrix.
Algorithm 1. Histogram(M, k_f)
1: N ← 2048
2: F_s ← N/2
3: h ← zeros(F_s)
4: while i ≤ n do
5:   Y ← |DFT(M(i, :), N)|, i = i + 1
6:   b ← sort(Y, k_f)
7:   while j ≤ k_f do
8:     h[b(j)] ← h[b(j)] + 1, j = j + 1
9:   end while
10: end while
11: h ← h/n
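A NumPy rendering of Algorithm 1 might look as follows; the FFT call and 0-based indexing are assumptions, and, following line 11, the histogram is normalized by the number of rows in the motion matrix.

```python
import numpy as np

def motion_histogram(M, k_f, N=2048):
    """Sketch of Algorithm 1 (assumed implementation).

    M   : motion matrix (one row per joint, one column per frame),
          e.g. the radius matrix M_r.
    k_f : number of top-magnitude frequencies counted per row.
    """
    Fs = N // 2
    h = np.zeros(Fs)                             # histogram over frequency bins
    rows = M.shape[0]
    for i in range(rows):
        Y = np.abs(np.fft.fft(M[i, :], n=N))     # N-point DFT magnitudes
        b = np.argsort(Y[:Fs])[::-1][:k_f]       # top k_f bins, descending magnitude
        for j in b:
            h[j] += 1                            # increment matching histogram bin
    return h / rows                              # normalize by number of rows
```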
For each person in the training data set 6 motion histograms are computed using Algorithm 1 (i.e. 3 histograms for each action). Given $X$, an unknown person not in the training data set, and $P$, a known person in the training data set, the motion difference is calculated using
$$\Delta_M(X, P) = 1 - \frac{R(h_r^x, h_r^p) + R(h_\theta^x, h_\theta^p) + R(h_\phi^x, h_\phi^p)}{3}$$
where $R(\cdot)$ is the correlation coefficient, $(h_r^x, h_\theta^x, h_\phi^x)$ are the motion histograms for $X$, and $(h_r^p, h_\theta^p, h_\phi^p)$ are the motion histograms for $P$. The motion difference has a value in $[0, 1]$, where a value of 0 indicates the two people have identical motion patterns for the recognized action.
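Since $R(\cdot)$ is a correlation coefficient, the motion difference can be computed directly from the six histograms; a minimal sketch assuming NumPy's corrcoef as the correlation measure:

```python
import numpy as np

def motion_difference(hx, hp):
    """hx, hp: (h_r, h_theta, h_phi) motion histogram triples for X and P."""
    # One minus the mean correlation coefficient over the three histogram pairs.
    R = [np.corrcoef(a, b)[0, 1] for a, b in zip(hx, hp)]
    return 1.0 - sum(R) / 3.0
```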
The anthropometric biometric is trained as follows: Let $V^w = (F_1^w, F_2^w, \ldots, F_n^w)$ be a training Kinect video that captures a person performing the walk action. For each frame in the video a $20 \times 20$ dimension joint proportion matrix is constructed using the $(x, y, z)$ locations of the 20 skeletal joints. Specifically, a joint proportion is calculated by $p_{ab} = d(a, b)/d_{total}$, where $d(a, b)$ is the Euclidean distance between joints $a$ and $b$, and $d_{total} = d(1, 4) + d(3, 8) + d(3, 12) + d(1, 16) + d(1, 20)$ is the total skeletal distance⁴.

⁴ For example, $p_{18} = (\,d(1, 2) + d(2, 3) + d(3, 5) + d(5, 6) + d(6, 7) + d(7, 8)\,)/d_{total}$.
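A per-frame sketch of the proportion matrix follows. Footnote 4 indicates that $d(a, b)$ between non-adjacent joints is the sum of Euclidean segment lengths along the skeleton path, so the sketch below computes path distances over the tree read off Fig. 1(a); the edge list and helper names are assumptions.

```python
import numpy as np
from collections import deque

# Skeleton edges using the paper's 1-based joint numbering from Fig. 1(a).
EDGES = [(1, 2), (2, 3), (3, 4), (3, 5), (5, 6), (6, 7), (7, 8),
         (3, 9), (9, 10), (10, 11), (11, 12),
         (1, 13), (13, 14), (14, 15), (15, 16),
         (1, 17), (17, 18), (18, 19), (19, 20)]

def skeleton_distances(joints):
    """joints: 20 x 3 array of (x, y, z) locations for one frame.
    Returns a 20 x 20 matrix of path distances along the skeleton tree, so
    d(a, b) sums the Euclidean lengths of the segments joining a to b."""
    adj = {i: [] for i in range(1, 21)}
    for a, b in EDGES:
        w = np.linalg.norm(joints[a - 1] - joints[b - 1])
        adj[a].append((b, w))
        adj[b].append((a, w))
    d = np.zeros((20, 20))
    for src in range(1, 21):
        dist = {src: 0.0}
        q = deque([src])
        while q:                      # BFS suffices because the skeleton is a tree
            u = q.popleft()
            for v, w in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + w
                    q.append(v)
        d[src - 1] = [dist[v] for v in range(1, 21)]
    return d

def proportion_matrix(joints):
    """20 x 20 joint proportion matrix with p_ab = d(a, b) / d_total."""
    d = skeleton_distances(joints)
    # d_total = d(1,4) + d(3,8) + d(3,12) + d(1,16) + d(1,20), 0-based indices.
    d_total = d[0, 3] + d[2, 7] + d[2, 11] + d[0, 15] + d[0, 19]
    return d / d_total
```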
The resulting joint proportion matrices are concatenated to form one matrix $J = [J_1^w\, J_2^w \cdots J_n^w]$ that has dimension $20 \times 20n$. A statistical model $N(\bar{p}, D)$ is constructed using the proportions in $J$, where $\bar{p}$ is a 20 dimension vector that describes the mean proportions for all 20 joints, and $D$ is a $20 \times 20$ covariance matrix that describes the proportion variation for all 20 joints. For each person in the training data set 2 anthropometric statistical models are constructed (i.e. one model for each action). Given $X$, an unknown person not in the training data set, and $P$, a known person in the training data set, the anthropometric difference is calculated using the well known KL-distance measure
$$\Delta_A(X, P) = \frac{1}{2}\left( \log\frac{|D_p|}{|D_x|} + \mathrm{Tr}(D_p^{-1} D_x) + (\bar{p}_x - \bar{p}_p)^T D_p^{-1} (\bar{p}_x - \bar{p}_p) - d \right)$$
where $N(\bar{p}_x, D_x)$ is the anthropometric statistical model for unknown person $X$, $N(\bar{p}_p, D_p)$ is the learned anthropometric statistical model for person $P$, and $d = 20$ is the dimension of the covariance matrix. In general, a small anthropometric difference value indicates the two people have very similar joint proportions for the recognized action.
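This is the standard KL divergence between two Gaussians, so it transcribes almost directly; a minimal sketch with assumed variable names:

```python
import numpy as np

def anthropometric_difference(p_x, D_x, p_p, D_p):
    """KL-distance between N(p_x, D_x) for the unknown person X and the
    learned model N(p_p, D_p) for person P; d is the covariance dimension."""
    d = len(p_x)
    D_p_inv = np.linalg.inv(D_p)
    diff = p_x - p_p
    return 0.5 * (np.log(np.linalg.det(D_p) / np.linalg.det(D_x))
                  + np.trace(D_p_inv @ D_x)
                  + diff @ D_p_inv @ diff
                  - d)
```

In the identity classifier this value is multiplied with the motion difference to form the cost that is minimized over all trained persons.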
3 Experiments
In this section experiments are performed that evaluate the proposed system's ability to correctly classify unknown people and their actions in Kinect videos. The performance of the proposed method is compared to the Gait Energy Volume (GEV) method [16]. In general, GEV is the 3D extension of the 2D Gait Energy Image (GEI) [19]. Using the depth images, the tracked human silhouettes are segmented and the segmentation results are used to isolate each gait cycle in the video sequence. For each isolated gait cycle the results are aligned and averaged to form the GEV. Principal Component Analysis (PCA) and Multiple Discriminant Analysis (MDA) are used to find a reduced dimension feature vector that well describes the GEV. This unknown feature vector is compared to known feature vectors using a distance based measurement to recognize the identity of the tracked person in the Kinect video. In these experiments we manually identified the gait cycle and then used the recommended settings to perform PCA and MDA dimensionality reduction.
In Section 3.1 we describe the Kinect data sets used to train the action and identity classifiers, and in Section 3.2 we describe the Kinect data sets used to test the accuracy of the system and for performance comparison. Both data sets were collected using a Kinect sensor mounted on a movable cart that faced the person performing the action. During data collection the distance between the apparatus and the subject was roughly 1.5 to 3 meters. Lastly, we evaluate the
performance of the action and person identity classifiers in Section 3.3 using the well known Receiver Operating Characteristic (ROC) curve and the Cumulative Match Curve (CMC) [20]. The ROC is also used to evaluate the sensitivity of the method when: 1. only one biometric is used for identity classification, and 2. the number of frequencies $k_f$ (see Section 2.2) used to construct the $(r, \theta, \phi)$ motion histograms is changed over a range of values. (Note: this is the only free parameter in the proposed person identification system.)
3.1 Training Data
The training data set included 10 people, 6 males and 4 females, where each person executed each of the 2 basic actions 2 times. That is, each person has 4 Kinect videos: 2 walking and 2 running. In total, the training data set has 40 videos. Example 3D skeletons found by the Kinect sensor that illustrate the 2 basic actions are shown in Fig. 2. The activity classifier was trained using all 40 videos. For each walk identity classifier the motion and anthropometric biometrics were trained using the two walk collections for that person. Likewise, for each run identity classifier the motion and anthropometric biometrics were trained using the two run collections for that person. For the 6 male subjects the age range was between 25 and 40 years old, and the height range was roughly between 1.73 and 1.8 meters. For the 4 female subjects the age range was between 25 and 37 years old, and the height range was roughly between 1.55 and 1.6 meters.
Fig. 2. Example training data. Top row: example skeletons of a person performing a normal running action. Bottom row: example skeletons of a person performing a normal walking action.
3.2 Test Data
Using the same 10 people in the training data set, each person in the test data set executed each of the 2 basic actions 3 times, i.e. each person has 6 Kinect videos: 3 walking and 3 running. In total, the test data set has 60 videos. To make the test data set challenging, each person was asked to perform additional actions: First Collection (least challenging), wear a backpack that contained 20 lbs of books; Second Collection (moderately challenging), wear the same 20 lb backpack and carry an object in the right hand; and Third Collection (most challenging), perform the slow moving "S" motion shown in Fig. 3. In general, these collections simulate real scenarios that may be found in public gathering areas such as airports, train stations, or shopping malls.
Fig. 3. Example Third Collection testing data. Top row: example skeletons of a person performing the "S" running action. Bottom row: example skeletons of a person performing the "S" walking action.
3.3 Results
Action classification performance using majority vote was 100% for both actions, where action classification performance per video ranges from 50.30% to 100% with the average being 93.47%. The ROC curves in Fig. 4(a) show the Verification Rate (VR) and Equal Error Rate (EER) performance for our method and GEV. This figure also shows the CMC Rank-1 through 6 performance for our method and GEV. For both actions the EER and CMC Rank-1 performance of our method is better than GEV. For the walk and run actions our method shows an 11% and 4% EER increase in performance respectively. For the walk action our method is 90% accurate by Rank-3, whereas GEV is still hovering around 88% by Rank-6, and for the run action the Rank-1 performance of our method is 90% while GEV does not achieve 90% until Rank-3. The ROC curves in this figure also show the motion biometric is not overly sensitive to $k_f$, the number of frequencies used to construct the motion histograms. In fact, the ROC curves show roughly the same performance when $k_f$ is three or five times greater than 10. This suggests the joint motion patterns may be adequately described by the top 10 frequencies with the largest magnitude.
Fig. 4. For both actions. (a) Top row: VR and EER performance comparison between our method (k_f = 10, 30, 50) and GEV [16]. Bottom row: CMC Rank-1 through 6 performance comparison between our method (k_f = 10) and GEV. (b) Top row: VR and EER performance using only the motion biometric (k_f = 10). Bottom row: VR and EER performance using only the anthropometric biometric (k_f = 10).
Figure 4(b) shows the VR and EER performance when only the motion or only the anthropometric biometric is used by the identity classifier. As seen in these ROC curves, person identification is more accurate when both biometrics are used by the identity classifier. For the walk action the anthropometric EER performance is slightly better than the motion biometric, which suggests the anthropometric biometric guides the motion biometric. However, for the run action the discriminative power of the motion biometric is high, requiring less help from the anthropometric biometric.
Since the computational complexities of the SVD and SVM algorithms are $O(pq^2 + p^2q + q^3)$ [21] and $O(q^2)$ [22] respectively, the computational complexity of the action classifier is $O(q^3)$ where $q = 2mn$. The space complexity of the action classifier is $O(q^2)$, i.e. the size of the right singular vector matrix $D$. An analysis of Algorithm 1 shows the computational complexity of the identity classifier is $O(nN\log N)$, and the space complexity of the identity classifier is $O(n)$, i.e. the column dimension of the radius, azimuth, and elevation matrices. On a 2.4 GHz Intel Core 2 Quad CPU, the total time needed to train the action classifier was 32 min, and the time needed to train one identity classifier was 30 ms.
4 Conclusion
In conclusion, a novel person identification method that uses full-body motion and anthropometric biometrics derived from Kinect videos was presented. Different from traditional gait-based methods that attempt to isolate and examine the gait cycle in the video sequence, our method considers the entire track sequence and examines the periodic motion of the upper and lower extremity joints found by the Kinect SDK that have the largest contribution to the action being performed. Challenging test data sets were constructed that have a variety of basic actions with varying levels of complexity. Experiments showed that the proposed method has an average ROC EER of 13% and an average CMC Rank-1 identification rate of 90%. Performance comparisons were conducted using a gait-based method that uses depth images produced by the Kinect sensor, and the results showed our method to have better performance. Experiments were also conducted to assess the individual sensitivities of the two biometrics, and the results suggest both biometrics are needed for person identification. We also showed the motion biometric is not overly sensitive to the number of frequencies used to build the motion histograms.
References
1. Turk, M.A., Pentland, A.P.: Face recognition using eigenfaces. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–591 (1991)
2. Jain, A., Hong, L., Bolle, R.: Online fingerprint verification. IEEE Transactions on Pattern Analysis and Machine Intelligence 19, 302–314 (1997)
3. Ma, L., Tan, T., Wang, Y., Zhang, D.: Personal identification based on iris texture analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence 25, 1519–1533 (2003)
4. Ross, A., Dass, S., Jain, A.: Fingerprint warping using ridge curve correspondences. IEEE Transactions on Pattern Analysis and Machine Intelligence 28, 19–30 (2006)
5. Lu, X., Jain, A.: Deformation modeling for robust 3D face matching. IEEE Transactions on Pattern Analysis and Machine Intelligence 30, 1346–1357 (2008)
6. Pillai, J., Patel, V., Chellappa, R., Ratha, N.: Secure and robust iris recognition using random projections and sparse representations. IEEE Transactions on Pattern Analysis and Machine Intelligence 33, 1877–1893 (2011)
7. Hong, L., Jain, A.: Integrating faces and fingerprints for personal identification. IEEE Transactions on Pattern Analysis and Machine Intelligence 20, 1295–1307 (1998)
8. Chang, K., Bowyer, K., Sarkar, S., Victor, B.: Comparison and combination of ear and face images in appearance-based biometrics. IEEE Transactions on Pattern Analysis and Machine Intelligence 25, 1160–1165 (2003)
9. Murase, H., Sakai, R.: Moving object recognition in eigenspace representation: gait analysis and lip reading. Pattern Recognition Letters 17, 155–162 (1996)
10. Cutler, R., Davis, L.S.: Robust real-time periodic motion detection, analysis, and applications. IEEE Transactions on Pattern Analysis and Machine Intelligence 22, 781–796 (2000)
11. Boyd, J.E., Little, J.J.: Biometric Gait Recognition. In: Tistarelli, M., Bigun, J., Grosso, E. (eds.) Advanced Studies in Biometrics. LNCS, vol. 3161, pp. 19–42. Springer, Heidelberg (2005)
12. Gafurov, D., Helkala, K., Søndrol, T.: Biometric gait authentication using accelerometer sensor. JCP 1, 51–59 (2006)
13. Abdelkader, C.B., Davis, L., Cutler, R.: Stride and cadence as a biometric in automatic person identification and verification. In: IEEE International Conference on Automatic Face and Gesture Recognition, pp. 372–377 (2002)
14. Campbell, L., Bobick, A.: Recognition of human body motion using phase space constraints. In: International Conference on Computer Vision, pp. 624–630 (1995)
15. Little, J., Boyd, J.E.: Recognizing people by their gait: The shape of motion. Videre 1, 1–32 (1996)
16. Sivapalan, S., Chen, D., Denman, S., Sridharan, S., Fookes, C.B.: Gait energy volumes and frontal gait recognition using depth images. In: International Joint Conference on Biometrics (2011)
17. Gu, J., Ding, X., Wang, S., Wu, Y.: Action and gait recognition from recovered 3-D human joints. IEEE Transactions on Systems, Man, and Cybernetics, Part B 40, 1021–1033 (2010)
18. Green, R.D., Guan, L.: Quantifying and recognizing human movement patterns from monocular video images - part II: Applications to biometrics. IEEE Transactions on Circuits and Systems for Video Technology 14, 179–190 (2003)
19. Han, J., Bhanu, B.: Individual recognition using gait energy image. IEEE Transactions on Pattern Analysis and Machine Intelligence 28, 316–322 (2006)
20. Bolle, R.M., Connell, J.H., Pankanti, S., Ratha, N.K., Senior, A.W.: The relation between the ROC curve and the CMC. In: Proceedings of the Fourth IEEE Workshop on Automatic Identification Advanced Technologies, pp. 15–20 (2005)
21. Brand, M.: Incremental Singular Value Decomposition of Uncertain Data with Missing Values. In: Heyden, A., Sparr, G., Nielsen, M., Johansen, P. (eds.) ECCV 2002, Part I. LNCS, vol. 2350, pp. 707–720. Springer, Heidelberg (2002)
22. Fan, R.E., Chen, P.H., Lin, C.J.: Working set selection using second order information for training support vector machines. Journal of Machine Learning Research 6, 1889–1918 (2005)