A Real-Time In-Air Signature Biometric Technique Using A Mobile Device Embedding An Accelerometer
J. Guerra Casanova, C. Sánchez Ávila, A. de Santos Sierra, G. Bailador del Pozo, and V. Jara Vera
Centro de Domótica Integral (CeDInt-UPM), Universidad Politécnica de Madrid
Campus de Montegancedo, 28223 Pozuelo de Alarcón, Madrid
{jguerra,csa,alberto,gbailador,vjara}@cedint.upm.es
Abstract. In this article an in-air signature biometric technique is proposed. Users authenticate themselves by performing a 3-D gesture of their own invention while holding a mobile device embedding an accelerometer. All the operations involved in the process are carried out inside the mobile device, so no additional devices or connections are needed. In this work, 34 different users have invented and repeated a 3-D gesture according to the proposed biometric technique, and three forgers have attempted to falsify each of the original gestures. From all these in-air signatures, an Equal Error Rate of 2.5% has been obtained by fusing the information of the gesture accelerations of each axis X-Y-Z at decision level. The authentication process takes less than two seconds, measured directly on a mobile device, so it can be considered real-time.
Key words: Biometrics, gesture recognition, accelerometer, mobile devices, dynamic time warping, fuzzy logic.
1 Introduction
Nowadays most mobile devices provide access to the Internet, where some operations may require authentication. Looking up the balance of a bank account, buying a product in an online shop or gaining access to a secure site are some actions that may be performed from a mobile phone and may require authentication. In this mobile context, biometrics promises to rise again as a method to ensure identities. Some works adapting classical biometric techniques to a mobile scenario have already been developed, based on iris recognition [1], face [2], voice recognition [3] or multimodal approaches [4].
In this article, a new mobile biometric technique is proposed. This technique is based on performing a 3-D gesture while holding a mobile device embedding an accelerometer [5]. This gesture is considered an in-air biometric signature, with information in the X-Y-Z axes. The technique may be regarded as a combination of behavioral and physical biometrics, since the repetition of a gesture in space depends not only on the shape and manner of performing the in-air signature but also on physical characteristics of the person (length of the arm, ability to turn the wrist or size of the hand holding the device). The proposed 3-D signature technique is similar to traditional handwritten signature verification [6], but adapted to a mobile environment.
In this proposal, feature extraction is performed directly within the mobile device, without any additional hardware. Moreover, the whole authentication process is intended to run inside the device, executing all the algorithms involved without any external device or server. Therefore, and thanks to the increasing processing power of mobile devices, this biometric technique can achieve an important requirement: real-time operation.
This article is organized as follows. Section 2 describes the method of analysis of the gesture signals involved in this study. Section 3 details the in-air gesture biometric database created to support the experiments of the article. Section 4 explains the experimental work carried out, as well as the time and Equal Error Rate results obtained. Finally, Section 5 presents the conclusions of this work and future lines.
2 Analysis of gesture signals

In this work, an algorithm based on Dynamic Time Warping [7] has been developed to compare pairs of signals and decide whether a sample is genuine or not. For that purpose, the algorithm searches for the best possible alignment between two signals in order to correct small variations in the performance of the gesture.
A score matrix is calculated over each pair of points of the two sequences [8], and the path in this matrix that maximizes the accumulated score is then obtained. Any vertical or horizontal movement along this path implies inserting a zero value into one of the sequences to correct small deviations. The algorithm includes a fuzzy function in the score equation [9] representing to what extent a user is able to repeat a gesture. The score equation is shown in Equation 1:
$$ s_{i,j} = \max \left\{ \begin{array}{l} s_{i,j-1} + h \\ s_{i-1,j-1} + \delta \\ s_{i-1,j} + h \end{array} \right. \qquad (1) $$
where h is a constant, known as the gap penalty in the literature [10], whose value is chosen to maximize the overall performance, and δ is a fuzzy decision function that represents a Gaussian distribution:

$$ \delta(x) = e^{-\frac{(x-\mu)^2}{2\sigma^2}} \qquad (2) $$

where μ and x are the values of the previous points from which the score of the new point (i, j) is calculated. Finally, σ is a constant stating to what extent two values are considered similar.
Even if a user performs the same gesture holding the mobile device in the same way, there will always be small variations in the speed and manner in which he/she performs the 3-D signature. The algorithm aligns a pair of signals, correcting those small deviations (without compensating for large differences) by inserting zero values and interpolating them so as to maximize the overall score function. As a result of this alignment, the length of the signals is doubled.
Once the optimal alignment of the signals is accomplished, the Euclidean distance between the aligned signals is calculated in order to measure their differences. Consequently, a single numerical value is obtained at the end of the analysis process; the lower this value, the more similar the analyzed signals, and vice versa.
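The following sketch is a minimal illustration, not the authors' implementation, of how such an alignment-and-distance analysis could be coded in Python/NumPy. The function and parameter names (fuzzy_score, axis_distance, h, sigma), the border initialisation of the score matrix, the default gap penalty, the zero-padding to 2L and the omission of the interpolation step mentioned above are all assumptions.

```python
import numpy as np


def fuzzy_score(x, mu, sigma=1.0):
    """Gaussian fuzzy membership of Eq. 2: close samples score near 1."""
    return np.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))


def axis_distance(a, b, h=-0.5, sigma=1.0):
    """Compare two 1-D acceleration signals of the same axis.

    Fills the score matrix of Eq. 1, traces back the best-scoring path,
    inserts zeros on horizontal/vertical moves, pads both aligned signals
    to length 2L and returns the sum of squared differences (lower means
    more similar).  Sketch only; values of h and sigma are assumptions.
    """
    n, m = len(a), len(b)
    s = np.zeros((n + 1, m + 1))
    s[1:, 0] = h * np.arange(1, n + 1)          # assumed border penalties
    s[0, 1:] = h * np.arange(1, m + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s[i, j] = max(s[i, j - 1] + h,                                   # horizontal gap
                          s[i - 1, j - 1] + fuzzy_score(a[i - 1], b[j - 1], sigma),
                          s[i - 1, j] + h)                                   # vertical gap

    # Trace back from (n, m): diagonal moves keep both samples,
    # horizontal/vertical moves insert a zero in the gapped signal.
    aligned_a, aligned_b = [], []
    i, j = n, m
    while i > 0 and j > 0:
        if s[i, j] == s[i - 1, j - 1] + fuzzy_score(a[i - 1], b[j - 1], sigma):
            aligned_a.append(a[i - 1]); aligned_b.append(b[j - 1])
            i, j = i - 1, j - 1
        elif s[i, j] == s[i - 1, j] + h:
            aligned_a.append(a[i - 1]); aligned_b.append(0.0)
            i -= 1
        else:
            aligned_a.append(0.0); aligned_b.append(b[j - 1])
            j -= 1
    while i > 0:
        aligned_a.append(a[i - 1]); aligned_b.append(0.0); i -= 1
    while j > 0:
        aligned_a.append(0.0); aligned_b.append(b[j - 1]); j -= 1

    aligned_a = np.array(aligned_a[::-1])
    aligned_b = np.array(aligned_b[::-1])

    # Pad with zeros so every comparison uses 2L points (L = longer signal).
    target = 2 * max(n, m)
    aligned_a = np.pad(aligned_a, (0, target - len(aligned_a)))
    aligned_b = np.pad(aligned_b, (0, target - len(aligned_b)))

    return float(np.sum((aligned_a - aligned_b) ** 2))
```

In the experiments described below, a comparison of this kind is applied to each acceleration axis of the gesture separately.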
3 Database description
In a first acquisition session, 34 volunteers invented a 3-D gesture and repeated it seven times while holding the mobile device; the performances were recorded on video and the participants answered a short questionnaire about the technique. From those answers, it can be inferred that users had little difficulty in inventing and repeating a 3-D gesture with a mobile device. As the biometric data are acquired in a non-intrusive manner, users rated the collection effort of the technique as very low [12]. Besides, users felt secure and comfortable while their biometric characteristics were being extracted, so acceptability concerns also received a low score.
Moreover, volunteers were asked to compare the confidence they place in the proposed in-air gesture biometric technique with respect to iris, face, handwritten signature, hand and fingerprint recognition. On average, participants rated the confidence of the in-air gesture signature above handwritten signature, close to face and hand recognition, and well below iris and fingerprint.
A second session was then carried out by studying the videos recorded in the first one. In this session, three different people tried to forge each of the 34 original in-air biometric signatures, and each forger attempted to repeat each gesture seven times.
As a result of both sessions, 238 samples of genuine gestures (34 users × 7 repetitions) and 714 forgeries (34 users × 3 forgers × 7 attempts) have been obtained. An evaluation of the error rates of the technique has been carried out over all the samples of this database; the experiments and results obtained are described in Section 4.
4 Experimental results
Three original samples of each gesture, chosen randomly, have been used as the 3-D biometric signature template; the remaining four original samples represent genuine verification attempts that should be accepted, while all impostor samples represent false attempts that should be rejected. Summarizing, the Equal Error Rate (EER) [13] has been calculated in this article from 136 genuine access samples (34 users, 4 accessing samples each) and 714 impostor access samples (34 users, 3 forgers, 7 samples each).
The technique can only be assessed as powerful if, besides a good EER, the signal analysis takes a reasonable time, so that it can be considered real-time. In this respect, the reader should notice that the longer the signals, the longer the algorithm takes to execute; furthermore, this growth in time is not linear but quadratic, since the score matrix contains one entry per pair of points. On the other hand, if the number of executions of the algorithm grows while the length of the signals remains constant, the total time to complete the whole process increases linearly.
Each 3-D signature carries information about the accelerations along each axis while the gesture is performed. Three different biometric fusion strategies have been tested: fusion at decision level, fusion at matching-score level and fusion at feature-extraction level [14]. In this article, only the first strategy is explained, since it has yielded the best results. Fusing information at decision level means executing the alignment algorithm separately (possibly in parallel) on the signal of each axis and then computing a single comparison metric value from all of them. The resulting comparison metric value for two gestures A and B is calculated by Equation 3:
$$ d_{A,B} = \frac{d^{x}_{A,B} + d^{y}_{A,B} + d^{z}_{A,B}}{3} \qquad (3) $$
where $d^{x}_{A,B}$, $d^{y}_{A,B}$ and $d^{z}_{A,B}$ are the values obtained by aligning the signals of each axis x, y and z separately and calculating their Euclidean distance by Equation 4:
$$ d^{e}_{A,B} = \sum_{i=1}^{2L} \left( A'_{e,i} - B'_{e,i} \right)^{2} \qquad (4) $$
where A and B are the two gestures of length L being analyzed, and $A'_{e}$ and $B'_{e}$ are the signals of A and B corresponding to axis e after alignment. Since the length of these aligned signals is 2L, the value $d^{e}_{A,B}$ for each axis e is obtained by accumulating the point-wise differences over the whole length of the aligned signals.
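As an illustration of Equations 3 and 4, the short sketch below fuses the three per-axis distances at decision level; it assumes gestures are stored as (L, 3) NumPy arrays of X-Y-Z accelerations and that a per-axis comparison function such as the hypothetical axis_distance sketched earlier is supplied.

```python
import numpy as np


def fused_distance(gesture_a, gesture_b, axis_distance):
    """Decision-level fusion (Eq. 3): align each axis independently and
    average the three per-axis distances (Eq. 4) into one comparison
    metric.  gesture_a and gesture_b are (L, 3) arrays; axis_distance is
    any function comparing two 1-D acceleration signals."""
    per_axis = [axis_distance(gesture_a[:, e], gesture_b[:, e]) for e in range(3)]
    return float(np.mean(per_axis))  # lower value = more similar gestures
```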
According to the proposed fusion scenario, the algorithm is executed three times, once for each axis signal separately, and the information is fused at decision level by averaging the results of the three per-axis processes. Under these conditions, an Equal Error Rate of 2.5% has been obtained (Figure 1). This value corresponds to the intersection of the False Acceptance Rate (FAR) curve, obtained when forgers tried to fool the system, and the False Rejection Rate (FRR) curve, obtained from the rejection errors when genuine users tried to access the system performing their own signature.
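As a generic illustration (not the authors' evaluation code), the EER can be estimated from the 136 genuine and 714 impostor comparison-metric values by sweeping an acceptance threshold, as in the sketch below; samples are accepted when their metric falls below the threshold.

```python
import numpy as np


def equal_error_rate(genuine, impostor):
    """Estimate the EER from comparison-metric values: as the acceptance
    threshold grows, FAR (accepted forgeries) rises and FRR (rejected
    genuine attempts) falls; the EER is read where the curves cross."""
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    best_gap, best_eer = np.inf, None
    for t in np.sort(np.concatenate([genuine, impostor])):
        frr = np.mean(genuine > t)    # genuine attempts rejected at threshold t
        far = np.mean(impostor <= t)  # forgeries accepted at threshold t
        if abs(far - frr) < best_gap:
            best_gap, best_eer = abs(far - frr), (far + frr) / 2.0
    return best_eer
```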
Let TE be the execution time of the alignment algorithm, which is the most time-consuming step in an authentication operation. The time consumed in this experiment for each comparison of two gesture samples is then equivalent to three executions of the algorithm on two signals of length L, i.e. 3TE(L). This time has been measured directly on a mobile device (iPhone 3G) and computed as an average over repeated executions, resulting in 1.51 seconds.
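A rough timing sketch of the 3TE(L) measurement is shown below; the signal length, number of runs and use of random signals are placeholder assumptions, and figures such as the reported 1.51 s can only be reproduced on the actual device.

```python
import time

import numpy as np


def average_comparison_time(axis_distance, length=500, runs=20):
    """Measure the average wall-clock time of one full comparison
    (three per-axis alignments, i.e. 3*TE(L)) on synthetic signals."""
    a, b = np.random.randn(length, 3), np.random.randn(length, 3)
    start = time.perf_counter()
    for _ in range(runs):
        for axis in range(3):                 # one alignment per axis
            axis_distance(a[:, axis], b[:, axis])
    return (time.perf_counter() - start) / runs
```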
Fig. 1. FRR (x, y, z) and FAR (x, y, z) error curves as a function of the comparison metric; the curves intersect at EER = 2.5%.
5 Conclusions

In future work, the axis of accelerations that carries the most distinctive information may be evaluated, in order to reduce the length or the parts of the signals required to obtain low Equal Error Rates, so that the time consumed would decrease.
References
1. Cho, D.H., Park, K.R., Rhee, D.W., Kim, Y., Yang, J.: Pupil and iris localization for iris recognition in mobile phones. Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing, International Conference on & Self-Assembling Wireless Networks, International Workshop on (2006) 197-201
2. Tao, Q., Veldhuis, R.: Biometric authentication for a mobile personal device. Mobile and Ubiquitous Systems, Annual International Conference on (2006) 1-3
3. Shabeer, H.A., Suganthi, P.: Mobile phones security using biometrics. Computational Intelligence and Multimedia Applications, International Conference on 4 (2007) 270-274
4. Manabe, H., Yamakawa, Y., Sasamoto, T., Sasaki, R.: Security evaluation of biometrics authentications for cellular phones. Intelligent Information Hiding and Multimedia Signal Processing, International Conference on (2009) 34-39
5. Matsuo, K., Okumura, F., Hashimoto, M., Sakazawa, S., Hatori, Y.: Arm swing identification method with template update for long term stability. In: ICB. (2007) 211-221
6. Jain, A.K., Griess, F.D., Connell, S.D.: On-line signature verification. Pattern Recognition 35(12) (2002) 2963-2972
7. Sakoe, H., Chiba, S.: Dynamic programming algorithm optimization for spoken word recognition. IEEE Transactions on Acoustics, Speech and Signal Processing 26(1) (1978) 43-49
8. Durbin, R., Eddy, S., Krogh, A., Mitchison, G.: Biological sequence analysis. 11th
edn. Cambridge University Press (2006)
9. de Santos Sierra, A., Avila, C., Vera, V.: A fuzzy DNA-based algorithm for identification and authentication in an iris detection system. In: Security Technology, 2008. ICCST 2008. 42nd Annual IEEE International Carnahan Conference on. (Oct. 2008) 226-232
10. Miller, W.: An introduction to bioinformatics algorithms, Neil C. Jones and Pavel A. Pevzner. Journal of the American Statistical Association 101 (June 2006) 855
11. Verplaetse, C.: Inertial proprioceptive devices: self-motion-sensing toys and tools. IBM Syst. J. 35(3-4) (1996) 639-650
12. Jain, A., Hong, L., Pankanti, S.: Biometric identification. Commun. ACM 43(2) (2000) 90-98
13. Jain, A.K., Flynn, P., Ross, A.A.: Handbook of Biometrics. Springer-Verlag New
York, Inc., Secaucus, NJ, USA (2007)
14. Ross, A., Jain, A.: Information fusion in biometrics. Pattern Recognition Letters 24(13) (September 2003) 2115-2125