Face Recognition using PCA and Wavelet Method

Maske Rupali Amrutrao, Pawan R. Sharma
Asst. Professor, SRESCOE Kopergaon, Dist. A. Nagar, MH, India
M.Tech II year, RJPV University, Bhopal, India
ABSTRACT
This paper proposes a new face recognition method that combines the wavelet transform with PCA. The method has two steps: a wavelet transform first maps the face images into a more discriminative space, and principal component analysis (PCA) is then applied. The proposed method produces a significant improvement, including a substantial reduction in error rate and in the processing time required to obtain the PCA orthonormal basis, across related poses and variations. The system improves the robustness of face recognition by using two well-known statistical modelling methods to represent a face image: the Gabor wavelet transform and Principal Component Analysis, which together extract the discriminating features from the face. Preprocessing of the human face image with Gabor wavelets eliminates, to some extent, the variations due to pose, lighting and facial features; PCA then extracts low-dimensional, discriminating feature vectors, which are used for classification. The classification stage uses a nearest neighbour classifier. The proposed system uses the ORL face database with 100 frontal images corresponding to 10 different subjects under variable illumination and facial expressions.

Keywords: Face recognition, Gabor wavelet transform, Principal Component Analysis.

1. INTRODUCTION
The face recognition problem can be formulated as follows: given an input face image and a database of face images of known individuals, how can we verify or determine the identity of the person in the input image? Biometric-based techniques have emerged as the most promising option for recognizing individuals in recent years: instead of authenticating people and granting them access to physical and virtual domains based on passwords, PINs, smart cards, plastic cards, tokens, keys and so forth, these methods examine an individual's physiological and/or behavioural characteristics in order to determine and/or ascertain his identity.

Face recognition has been a very popular research topic in recent years. The first attempts began in the 1960s with a semi-automated system that used features such as the eyes, ears, nose and mouth; distances and ratios were computed from these marks to a common reference point and compared to reference data. In the early 1970s Goldstein, Harmon and Lesk created a system of 21 subjective markers such as hair color and lip thickness. Later, Kohonen demonstrated that a simple neural network could perform face recognition for aligned and normalized face images. The network he employed computed a face description by approximating the eigenvectors of the face image's autocorrelation matrix; these eigenvectors are now known as 'eigenfaces'. Kohonen's system was not a practical success, however, because of the need for precise alignment and normalization [1]. Turk and Pentland (1991) then demonstrated that when the coding is performed using eigenfaces, the residual error can be used both to detect faces in cluttered natural imagery and to determine the precise location and scale of faces in an image.
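Before turning to the details, the two-step pipeline proposed above (wavelet-based preprocessing, PCA feature extraction, nearest neighbour classification) can be sketched as follows. This is only a minimal illustration under stated assumptions, not the authors' implementation: the wavelet_preprocess helper, the number of PCA components and the train/test split are placeholders.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def wavelet_preprocess(images):
    # Hypothetical stand-in for the Gabor/wavelet stage described in the paper;
    # here it only flattens each face image into a row vector.
    return images.reshape(len(images), -1)

def train_and_test(train_imgs, train_ids, test_imgs, test_ids, n_components=50):
    # Step 1: wavelet preprocessing maps faces to a more discriminative space.
    X_train = wavelet_preprocess(train_imgs)
    X_test = wavelet_preprocess(test_imgs)
    # Step 2: PCA reduces the dimensionality to a low-dimensional feature vector.
    pca = PCA(n_components=n_components).fit(X_train)
    F_train = pca.transform(X_train)
    F_test = pca.transform(X_test)
    # Step 3: nearest neighbour classification in the PCA feature space.
    clf = KNeighborsClassifier(n_neighbors=1).fit(F_train, train_ids)
    pred = clf.predict(F_test)
    return np.mean(pred == test_ids)  # recognition rate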
Figure 1: Block diagram of face recognition.

An overview of the face recognition process is illustrated in Fig. 1. In the figure, the gallery is the set of known individuals, while the images used to test the algorithm are called probes. A probe is either a new image of an individual in the gallery or an image of an individual not in the gallery. To compute performance, one needs both a gallery and a probe set. The probes are presented to the algorithm, which returns the best match between each probe and the images in the gallery; the estimated identity of a probe is this best match. While there are many databases in use currently, the choice of an appropriate database should be made based on the task at hand [2]. Some commonly used face data sets are: the Color FERET Database, Yale Face Database, PIE Database, FIA Video Database, CBCL Face Recognition Database, Expression Image Database, Mugshot Identification Database, Indian Face Database, and the Face Recognition Data of the University of Essex, UK.

2. RELATED WORK
Face recognition is the process of recognizing a person on the basis of the face alone. It is a well-known fact that each person has a unique face, which enables us to recognize them, and face recognition has proved to be among the most successful biometric methods. Unlike other forms of identification such as passwords or keys, a person's face cannot be stolen, forgotten or lost, so face recognition provides stronger authentication [10]. Face recognition scenarios can be classified into two types: (i) face verification (or authentication) and (ii) face identification (or recognition). Face verification ("Am I who I say I am?") is a one-to-one match that compares a query face image against the template face image whose identity is being claimed.
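As a concrete illustration of the gallery/probe evaluation just described, the sketch below finds the best gallery match for each probe and reports the resulting rank-1 recognition rate. The choice of feature vectors and of the Euclidean distance is an assumption made only for the example.

import numpy as np

def rank1_recognition_rate(gallery, gallery_ids, probes, probe_ids):
    # gallery: (G, d) feature vectors of the known individuals
    # probes:  (P, d) feature vectors of the query images
    # Euclidean distance from every probe to every gallery image.
    dists = np.linalg.norm(probes[:, None, :] - gallery[None, :, :], axis=2)
    best = dists.argmin(axis=1)                 # index of the best gallery match
    predicted = np.asarray(gallery_ids)[best]   # estimated identity of each probe
    return np.mean(predicted == np.asarray(probe_ids))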
Figure 2: Face recognition system classifications.

Face recognition is one of the few biometric methods that possess the merits of both high accuracy and low intrusiveness: it has the accuracy of a physiological approach without being intrusive. For this reason, since the early 1970s (Kelly, 1970), face recognition has drawn the attention of researchers in fields ranging from security, psychology and image processing to computer vision. Numerous algorithms have been proposed for face recognition; for detailed surveys see Chellappa (1995) and Zhang (1997) [18].

3. PCA
PCA, commonly referred to as the use of eigenfaces, is the technique pioneered by Kirby and Sirovich in 1988. With PCA, the probe and gallery images must be the same size and must first be normalized to line up the eyes and mouth of the subjects within the images. The PCA approach reduces the dimension of the data by means of data compression and reveals the most effective low-dimensional structure of facial patterns. This reduction in dimensions removes information that is not useful and precisely decomposes the face structure into orthogonal (uncorrelated) components known as eigenfaces. Each face image may be represented as a weighted sum (feature vector) of the eigenfaces, which is stored in a 1-D array. A probe image is compared against a gallery image by measuring the distance between their respective feature vectors. The PCA approach typically requires the full frontal face to be presented each time; otherwise performance is poor. The primary advantage of this technique is that it can reduce the data needed to identify an individual to about 1/1000th of the data originally presented.

The eigenface algorithm uses Principal Component Analysis (PCA) for dimensionality reduction to find the vectors that best account for the distribution of face images within the entire image space [14]. These vectors define a subspace of face images called the face space. All faces in the training set are projected onto the face space to find a set of weights that describes the contribution of each vector. To identify a test image, the test image is projected onto the face space to obtain its corresponding set of weights; by comparing these weights with the sets of weights of the faces in the training set, the face in the test image can be identified. The key procedure in PCA is based on the Karhunen-Loève transformation [18]. If the image elements are considered to be random variables, the image may be seen as a sample of a stochastic process. The PCA basis vectors are defined as the eigenvectors of the total scatter matrix $S_T$,

$$S_T = \sum_{i=1}^{m} (\Gamma_i - \Psi)(\Gamma_i - \Psi)^T,$$

where $\Gamma_i$ is the $i$-th training image written as a column vector and $\Psi$ is the mean face.
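The next subsection lists the step-by-step procedure; as a companion, the following is a minimal NumPy sketch of the eigenface computation and matching it describes (training images as rows, the small m x m covariance trick, projection onto the face space, and nearest-neighbour identification). The array shapes and the number of retained eigenfaces m' are illustrative assumptions, not the authors' code.

import numpy as np

def train_eigenfaces(images, m_prime):
    # images: (m, h, w) training faces; flatten each face to a row vector (step 1).
    m = len(images)
    A = images.reshape(m, -1).astype(float)
    psi = A.mean(axis=0)                       # mean face (step 2)
    Phi = A - psi                              # mean-subtracted faces (step 3)
    # Small m x m matrix Phi Phi^T instead of the huge p x p covariance (step 4).
    C = Phi @ Phi.T
    vals, vecs = np.linalg.eigh(C)             # eigenvalues/eigenvectors (step 5)
    order = np.argsort(vals)[::-1][:m_prime]   # keep the m' largest (steps 6-7)
    # Map the small eigenvectors back to image space to obtain the eigenfaces.
    U = Phi.T @ vecs[:, order]
    U /= np.linalg.norm(U, axis=0)             # normalise each eigenface
    W_train = Phi @ U                          # weight vectors of the training faces (steps 8-9)
    return psi, U, W_train

def identify(test_image, psi, U, W_train, train_ids):
    # Project the test face onto the face space and compare weights (step 10).
    w = (test_image.reshape(-1).astype(float) - psi) @ U
    nearest = np.linalg.norm(W_train - w, axis=1).argmin()
    return train_ids[nearest]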
A: Mathematics of PCA
A 2-D facial image can be represented as a 1-D vector by concatenating each row (or column) into a single column (or row) vector.

1) Assume the training set of images is $\Gamma_1, \Gamma_2, \Gamma_3, \dots, \Gamma_m$, each image of size $x \times y$. Convert each image into a vector and stack them into a full-size matrix of size $m \times p$, where $m$ is the number of training images and $p = x \times y$ is the number of pixels per image.

2) Find the mean face:
$$\Psi = \frac{1}{m} \sum_{i=1}^{m} \Gamma_i$$

3) Calculate the mean-subtracted faces:
$$\Phi_i = \Gamma_i - \Psi, \quad i = 1, 2, \dots, m,$$
and form the mean-subtracted matrix $A = [\Phi_1, \Phi_2, \dots, \Phi_m]$ of size $m \times p$.

4) By means of a matrix transformation, the problem is reduced to the smaller matrix
$$C_{m \times m} = A_{m \times p} \, A^T_{p \times m},$$
where $C$ is the covariance matrix and $T$ denotes the transpose.

5) Find the eigenvectors $V_{m \times m}$ and eigenvalues $\lambda_m$ of the matrix $C$ and order the eigenvectors by decreasing eigenvalue.

6) Apply the eigenvector matrix $V_{m \times m}$ to the mean-subtracted matrix; the resulting vectors are linear combinations of the training-set images and form the eigenfaces.

7) Instead of using all $m$ eigenfaces, only $m' < m$ are kept, where $m'$ is the number of images provided for training for each individual, or the total number of classes used for training.

8) Based on the eigenfaces, each image has its face (weight) vector
$$\omega_k = u_k^T (\Gamma - \Psi), \quad k = 1, 2, \dots, m'.$$

9) The weights form a feature vector. These feature vectors are taken as the representational basis for the face images, with the dimension reduced from $p$ to $m'$.

10) The reduced data are taken as the input to the next stage for discriminating features.

Figure 3: Eigenvectors corresponding to the 7 largest eigenvalues.

4. GABOR WAVELET
Gabor filters were introduced in image processing because of their biological relevance and computational properties. The Gabor atom (or function) was proposed by the Hungarian-born electrical engineer Dennis Gabor in 1946.
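The paper does not list the Gabor filter parameters it uses, so the sketch below is only a minimal illustration of convolving a face image with a small Gabor filter bank over several orientations; the kernel size, wavelength, sigma, aspect ratio and number of orientations are assumptions made for the example.

import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(size=31, wavelength=10.0, sigma=4.0, theta=0.0, gamma=0.5, psi=0.0):
    # Real-valued Gabor kernel: a Gaussian envelope modulating a cosine carrier.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    x_r = x * np.cos(theta) + y * np.sin(theta)
    y_r = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_r ** 2 + (gamma * y_r) ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * x_r / wavelength + psi)
    return envelope * carrier

def gabor_features(image, n_orientations=4):
    # Convolve the face with one kernel per orientation and stack the responses.
    responses = [convolve(image.astype(float), gabor_kernel(theta=t))
                 for t in np.linspace(0, np.pi, n_orientations, endpoint=False)]
    return np.stack(responses)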
The LL band is a coarser approximation of the original image, the LH and HL bands record the changes of the image along the horizontal and vertical directions, and the HH band contains the high-frequency components of the image. This is the first-level decomposition; further decomposition can be conducted on the LL subband. The wavelets Daub(4), Daub(6) and Daub(8) do not produce significantly different face recognition results, so the Daubechies wavelet D4 is adopted for image decomposition in our system.

5. RESULT
The system is tested with the ORL face database, and its effectiveness is shown in the results. The input image is convolved with Gabor filters at different scales and orientations, and feature vectors are then formed from the convolved images using the PCA method; PCA is used to reduce the high dimensionality of these feature vectors. The recognition rate is higher with features extracted by the PCA-based Gabor method than with simple PCA. The extracted features are used directly with the above feature extraction methods, classification is done using the eigenvalues, and matching uses the Euclidean distance measure. The recognition rate is obtained with a fixed number of PCA features, and the performance comparison with the Euclidean Distance Measure classifier (ED) is shown.
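A minimal sketch of the subband decomposition described above, using PyWavelets. Note that the mapping of the paper's "Daubechies D4" onto a PyWavelets wavelet name is an assumption (naming conventions differ); 'db4' is used here purely for illustration.

import pywt

def two_level_decomposition(image, wavelet='db4'):
    # First-level 2-D DWT: one approximation subband and three detail subbands.
    LL, (detail_h, detail_v, detail_d) = pywt.dwt2(image.astype(float), wavelet)
    # Further decomposition is conducted on the LL subband only.
    LL2, (detail_h2, detail_v2, detail_d2) = pywt.dwt2(LL, wavelet)
    return LL2  # coarse approximation typically fed to the PCA stage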
REFERENCES
[1] Juwei Lu, Konstantinos N. Plataniotis, and Anastasios N. Venetsanopoulos, "Face Recognition Using LDA-Based Algorithms," IEEE Transactions on Neural Networks, vol. 14, no. 1, January 2003.
[2] Volker Blanz and Thomas Vetter, "A Morphable Model for the Synthesis of 3D Faces."
[3] Xiao-Ming Bai, Bao-Cai Yin, Qin Shi, and Yan-Feng Sun, "Face Recognition Using Extended Fisherface with 3D Morphable Model," Proceedings of the Fourth International Conference on Machine Learning and Cybernetics, Guangzhou, 18-21 August 2005.
[4] Xiaoguang Lu, "Image Analysis for Face Recognition," Dept. of Computer Science & Engineering, Michigan State University, East Lansing.
[5] Sanun Srisuk, Kongnat Ratanarangsank, Werasak Kurutach, and Sahatsawat Waraklang, "Face Recognition Using a New Texture Representation of Face Images."
[6] Spector, A. Z., 1989. "Achieving Application Requirements." In Distributed Systems, S. Mullende.