
ADAMA SCIENCE AND TECHNOLOGY UNIVERSITY

SCHOOL OF ELECTRICAL ENGINEERING AND COMPUTING

DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING

TITLE: FACE RECOGNITION BASED ID CARD SYSTEM

ADVISOR NAME: JAVID (PHD)

PREPARED BY:

1. FALMIWAQ TEMESGEN………….. A/UR14459/10

2. AMANUEL HABTAMU……………. A/UR14821/10

3. FISEHA KEBEDE……………………… A/UR14170/10

4. ASHANAFI ALEMAYEHU………… A/UR14398/10

5. YOSEF WAKGARI…………………A/UR14383/10

February 2022

Approval Sheet

This Project has been submitted for examination with my approval as a university advisor.
_______________________ __________________

Project Advisor Signature

Approval by Board of Examiners

_______________________ _______________________

Chair Person Signature

_______________________ _______________________

Project Advisor Signature

_______________________ _______________________

Project Examiner 1 Signature

_______________________ _______________________

Project Examiner 2 Signature

_______________________ _______________________

Project Examiner 3 Signature


Declaration
This project is our original work and has not been presented for a project in this or any
other university, and all sources of material used for the project work have been fully
acknowledged.

Name of students Signature Date


Falmiwaq Temesgen ____________ ____________

Amanuel Habtamu ____________ ____________

Ashenafi Alemayew ____________ ____________

Fiseha Kebede ____________ ____________

Yosef Wakgari ____________ ____________

Date of submission: 04/02/2022 E.C

This project has been submitted for examination with my approval as a university
advisor.

Advisor: Signature Date

Dr. Javid ____________ ____________


Table of Contents
Declaration ..................................................................................................................................................... i
List of Figures .............................................................................................................................................. v
Acknowledgement ....................................................................................................................................... vi
List of Acronyms ........................................................................................................................................ vii
Abstract ...................................................................................................................................................... viii
Chapter One .................................................................................................................................................. 1
1. Introduction ........................................................................................................................................... 1
1.1 Background ................................................................................................................................... 1
1.2 Statement of the problem .............................................................................................................. 2
1.3 Objective ............................................................................................................................................. 3
1.3.1 General Objective ........................................................................................................................ 3
1.3.2 Specific Objectives ...................................................................................................................... 3
1.4 Significance of the project ................................................................................................................. 3
1.5 Scope of the project ............................................................................................................................ 3
1.6 Limitation of the project ..................................................................................................................... 3
Chapter two ................................................................................................................................................... 4
2. Literature Review...................................................................................................................................... 4
Chapter three ................................................................................................................................................. 6
3. System Architecture and Methodology..................................................................................................... 6
3.1 Methodology ....................................................................................................................................... 6
3.2.2 Face Detection ............................................................................................................................. 7
3.2.3 Existing database........................................................................................................................ 15
3.2.4 Image Preprocessing .................................................................................................................. 15
3.2.5 Feature Extraction and Selection ............................................................................................... 17
3.2.6 Training ...................................................................................................................................... 18
3.2.7 Face recognition ......................................................................................................................... 18
3.2.8 Output ........................................................................................................................................ 18
Chapter Four ............................................................................................................................................... 19
4. Result and Discussion ............................................................................................................................. 19
4.1 Graphical User Interface (GUI) ........................................................................................................ 19
4.2 Test result of face recognition based meal card system .................................................................... 21


Chapter Five ................................................................................................................................................ 22


5. Conclusion and Recommendation .......................................................................................................... 22
5.1 Conclusion ........................................................................................................................................ 22
5.2 Recommendation .............................................................................................................................. 22
Reference .................................................................................................................................................... 23
Appendix ..................................................................................................................................................... 24
1 Main code................................................................................................................................................. 24
2 code for face detection ........................................................................................................................... 28
3 Code for eigen face detection................................................................................................................... 29


List of Figures

Figure 1 Block diagram of face recognition based meal card system ............................................ 7
Figure 2 Viola and Jones extended feature set ............................................................................... 8
Figure 3 Haar-like features ........................................................................................................... 11
Figure 4 Five Haar- like patterns ................................................................................................. 11
Figure 5 Eigenfaces of Falmii,Ashu,Fiseha,Amanuel,Yosef and Gutama ................................... 16
Figure 6 Menu GUI...................................................................................................................... 20
Figure 7 Selected face ................................................................................................................... 20
Figure 8 detected face ................................................................................................................... 20
Figure 9 Recognized Face ............................................................................................................. 21


Acknowledgement

First of all, we would like to express our heartfelt gratitude to GOD for helping us
throughout this work. We extend our sincere thanks to our advisor, Dr. Javid of Electronics and
Communication Engineering, for the guidance and facilities provided for our semester project.
We also extend our sincere thanks to all other faculty members of the Electrical and Computer
Engineering Department and to our friends for their support and encouragement. Finally, we
would like to express our gratitude to our families for their wholehearted support during our
studies at the university; without their encouragement and support, both financial and moral, we
would not have come this far. We thank them for the tolerance and understanding shown during
our project.


List of Acronyms
ANN……………………….……………………..ARTIFICIAL NEURAL NETWORK

DCT………………………………………..…….DISCRETE COSINE TRANSFORM

GUI…………………………………………………GRAPHICAL USER INTERFACE

ID NO………………………………………………...IDENTIFICATION NUMBER

MATLAB…...………….…...………………………………..MATRIX LABORATORY

PC...…………………………………...……………………….PERSONAL COMPUTER

PCA..………………………………………..PRINCIPAL COMPONENT ANALYSIS

RGB….……………………………………………………………RED GREEN BLUE

SVM……….……………………………………..…..SUPPORT VECTOR MACHINE


Abstract
The face is one of the easiest ways to distinguish the identity of an individual. Face
recognition is a personal identification system that uses personal characteristics of a person to
determine that person's identity. In our university, even though the cafeteria system (a meal card
based on barcode reading) is digitalized and computerized, it is not the latest technology: it
requires a physical ID card at all times. If the ID card system were computerized and based on
face recognition, it would prevent unregistered students from entering the cafeteria, and it could
also prevent unregistered students from entering the campus if a device that recognizes people's
faces and checks them against the campus database were installed at the gate. The university
would also no longer have to spend money printing ID cards every year, so the cost can be
minimized. The aim of the project is to design a computerized ID card system based on the
Eigenfaces face recognition algorithm, implemented in MATLAB code, to recognize students and
classify them as authorized or unauthorized. The human face recognition procedure basically
consists of two phases: face detection, a process that takes place very rapidly in humans except
when the object is located only a short distance away, followed by recognition of the face as a
particular individual. The system then checks whether the tested image, i.e. the person captured
by the webcam, is authorized or not. If he or she is recognized, the matching image from the
training dataset is displayed; if not, the system reports that the image does not exist. The platform
used for this work is MATLAB.


Chapter One

1. Introduction

Face recognition is a pattern recognition task usually performed on human faces. It is the
task of identifying an already detected object as a known or unknown face. The problem of
face recognition is often confused with the problem of face detection; face recognition, on the
other hand, decides whether the "face" belongs to someone known or unknown, using a
database of faces to validate this input face. It has become a very interesting area in recent
years, mainly due to increasing security demands and its potential commercial and law
enforcement applications. Advances in computing capability over the years now enable such
recognition to be performed automatically.

1.1 Background
Humans are very good at recognizing faces and complex patterns. Even the passage of
time does not greatly affect these capabilities, and it would therefore help if computers
became as robust as humans in face recognition. A face recognition system can help in many ways;
it can be used for both verification and identification. Today, recognition technology is applied to
a wide variety of problems such as passport fraud, human-computer interaction and support for law
enforcement. This has motivated researchers to develop computational models to identify faces.
Face recognition is typically used in security systems; besides that, it is also used in
human-computer interaction. In this project the Eigenfaces method is used for training and
testing faces. Face recognition has received significant attention, to the point that dedicated
face recognition conferences have emerged. Eigenfaces are a set of eigenvectors used in
the computer vision problem of human face recognition. A set of eigenfaces can be generated by
performing a mathematical process called principal component analysis (PCA) on a large set of
images depicting different human faces. The key procedure in PCA is based on the Karhunen-Loève
transformation.


If the image elements are considered to be random variables, the image may be seen as a
sample of a stochastic process. The focus of the project is to find the accuracy of the
Eigenfaces method in face recognition. The Eigenfaces approach is an adequate method
for face recognition because of its simplicity, speed and learning capability. The scheme is
based on an information theory approach that decomposes face images into a small set of
characteristic feature images called eigenfaces, which may be thought of as the principal
components of the initial training set of face images. Here we try to replace the ID card with
a computerized system, using MATLAB to recognize students' faces and classify them as
authorized or unauthorized. We take images of different students with a webcam. When a
student enters the cafeteria, the student's image is captured by the webcam as he or she faces
it. The computer takes the image from the webcam as input and checks whether it contains a
face; if no face is detected, the program terminates. This project is intended to design a
computerized ID card system, using face recognition implemented in MATLAB code, to
recognize and classify students as authorized or unauthorized.

1.2 Statement of the problem


Face recognition has recently received growing attention and interest from the scientific
community as well as from the general public. The interest from the general public is mostly due
to the demand for useful security systems. In our university, the ID card system has not been
digitalized with the latest technology that can recognize a face without requiring a physical card.
The system the university currently uses has a drawback: it requires a physical card with a
barcode at all times. The university also spends money printing ID cards every year, and this
project will minimize that cost. In this regard, an ID card system based on face recognition plays
an indispensable role in securing the university cafeteria entrance, computerizing the system and
saving the yearly expense of printing ID cards. Hence, this project is intended to solve these
problems. This work secures our university cafeteria in a computerized way by using face
recognition instead of an ID card. The project can also be applied for other security purposes.


1.3 Objective
1.3.1 General Objective
The main objective of our project is to implement a secure, cost-effective, modern cafeteria ID
card system using face recognition in a computerized way, instead of the manual ID card system.

1.3.2 Specific Objectives


Specifically, the objectives of our project are to:

 Minimize ID card printing cost.
 Prevent unauthorized persons from entering the café.
 Introduce new (digital) technologies to our university.

1.4 Significance of the project


We design and implement a face recognition based ID card system so that unregistered students
cannot enter the café or other restricted areas; entry is allowed only to the students registered on
the system. It can also be used at the university gate for entrance control. In this way the system
is simplified, and the cost of ID cards is reduced. The system can be used in practice for security
purposes at any entrance gate that needs to be secured.

1.5 Scope of the project


 Focus on changing our university's ID card system to one based on face recognition using
MATLAB.
 Implement the face recognition based ID card system through software programming and
GUI simulation in MATLAB.
 Use three main steps to recognize the faces:
i) Constructing a face database of known face images (for training).
ii) Taking an unknown face image as input (for testing).
iii) Producing the recognition result as output.

1.6 Limitation of the project


If a student has already been recognized once at the cafeteria and tries to enter again, the system
cannot restrict the student from entering twice, and the output cannot yet be sent to a door
open/close or alarm system. This causes a time delay when entering the cafeteria.


Chapter two

2. Literature Review

Presently there are several methods for face recognition. The most intuitive way to carry out face
recognition is to look at the major features of the face and compare them to the same features on
other faces. Some of the earliest studies on face recognition were done by Bledsoe [7], who was the
first to attempt semi-automated face recognition with a hybrid human-computer system that
classified faces on the basis of fiducial marks entered on photographs by hand. Parameters for
the classification were normalized distances and ratios among points such as eye corners, mouth
corners, nose tip and chin point. Later work at Bell Labs developed a vector of up to 21 features
and recognized faces using standard pattern classification techniques.

Face recognition presents a challenging problem in the field of image analysis and computer
vision, and as such has received a great deal of attention over the last few years because of its
many applications in various domains. Face recognition techniques can be broadly divided into
three categories based on the face data acquisition methodology: methods that operate on
intensity images; those that deal with video sequences; and those that require other sensory data
such as 3D information or infra-red imagery [6].

In the language of information theory, the objective is to extract the relevant information in a
face image, encode it as efficiently as possible, and compare one face encoding with a database
of models encoded in the same way [1]. In mathematical terms, the objective is to find the
principal components of the distribution of faces, or the eigenvectors of the covariance matrix of
the set of face images. These eigenvectors can be thought of as a set of features which together
characterize the variation between face images. Each image location contributes more or less to
each eigenvector, so that we can display the eigenvector as a sort of ghostly face called an
eigenface [1].

Fischler and Elschlager [6] attempted to measure similar features automatically. They described a
linear embedding algorithm that used local feature template matching and a global measure of fit
to find and measure facial features. This template matching approach has been continued and


improved by the recent work of Yuille and Cohen [4]. Their strategy is based on deformable
templates, which are parameterized models of the face and its features in which the parameter
values are determined by interactions with the face image. Connectionist approaches to face
identification seek to capture the configurational nature of the task.

Kohonen and Lehtiö [5] describe an associative network with a simple learning algorithm that
can recognize face images and recall a face image from an incomplete or noisy version input to
the network. Fleming and Cottrell [6] extend these ideas using nonlinear units, training the
system by back propagation.

Others have approached automated face recognition by characterizing a face by a set of
geometric parameters and performing pattern recognition based on those parameters. Kanade's [3]
face identification system was the first in which all steps of the recognition process were
automated, using a top-down control strategy directed by a generic model of expected feature
characteristics. His system calculated a set of facial parameters from a single face image and
used a pattern classification technique to match the face to a known set, a purely statistical
approach depending primarily on local histogram analysis and absolute gray-scale values. More
recent work by Burt [1] uses a smart sensing approach based on multi-resolution template matching.
This coarse-to-fine strategy uses a special-purpose computer built to calculate multi-resolution
pyramid images quickly, and has been demonstrated identifying people in near real time.
Presently there are several methods for face recognition, such as the discrete cosine transform
(DCT), principal component analysis (PCA) and artificial neural networks (ANN). In our
project we use Eigenfaces for detection and recognition of the face.

Finally, we implement a computerized ID card system using MATLAB software. This
implementation minimizes the university's cost for ID cards, simplifies the system and secures
the cafeteria better than the manual ID card system.


Chapter three

3. System Architecture and Methodology


3.1 Methodology
The methodology we use in this project comprises six major phases. The first two phases
cover the collection of images and the recording of the corresponding student information,
respectively. The third phase covers the capture of a new image by webcam for recognition. The
last two phases cover the recognition of the image and the display of its associated information.
The major phases and the activities performed in each of them are depicted in the figure below.

Figure 1: Workflow of the project

3.2 System Architecture


The system architecture of the face recognition based ID card system can be divided into seven
major modules: image acquisition, face detection, existing database, image preprocessing, feature
extraction and selection, training, and face recognition. These are illustrated in the simple block
diagram below.


Figure 1 Block diagram of face recognition based meal card system

3.2.1 Image Acquisition


We get the test image of the student from a webcam to check whether the student is authorized
to enter. The webcam is placed at the entrance of the student cafeteria, so every student
must show his or her face directly to the webcam to get authentication.
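
For illustration, a minimal acquisition step along the lines of the appendix code could look like the sketch below; the device name 'winvideo' and the video format are assumptions that depend on the installed webcam driver:

% Grab a single RGB test frame from the webcam (Image Acquisition Toolbox).
vid = videoinput('winvideo', 1, 'YUY2_320x240');   % device/format are assumptions
set(vid, 'ReturnedColorSpace', 'rgb');             % deliver RGB frames
start(vid);
testImg = getdata(vid, 1);                         % one captured frame
stop(vid); delete(vid);
figure, imshow(testImg), title('Captured test image');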

3.2.2 Face Detection


Face detection is a fundamental task for applications such as face tracking, red-eye removal, face
recognition and facial expression recognition. The main function of this step is to determine (1)
whether human faces appear in a given image, and (2) where these faces are located. The
expected outputs of this step are patches containing each face in the input image. In order to
make the subsequent face recognition system more robust and easier to design, face alignment is
performed to normalize the scales and orientations of these patches. Besides serving as
preprocessing for face recognition, face detection can be used for region-of-interest detection,
retargeting, video and image classification, etc.

To build flexible systems which can be executed on mobile products, like handheld PCs and
mobile phones, efficient and robust face detection algorithms are required. Most existing face detection


algorithms treat face detection as a binary (two-class) classification problem. Even though it
looks like a simple classification problem, it is very complex to build a good face classifier.
Therefore, learning-based approaches, such as neural network-based methods or support vector
machine (SVM) methods, have been proposed to find good classifiers. Most of the proposed
algorithms use pixel values as features; however, pixel values are very sensitive to illumination
conditions and noise.

Papageorgiou et al. used a new kind of feature, called Haar-like features. These features encode
differences in average intensities between two rectangular regions, and they are able to capture
texture without depending on absolute intensities. Viola and Jones later proposed an
efficient scheme for evaluating these features, called the integral image, and they also
introduced an efficient scheme for constructing a strong classifier by cascading a small number
of distinctive features selected using AdaBoost. The result is more robust and computationally
efficient. Based on Viola and Jones' work, many improvements and extensions have been
proposed; there are mainly two approaches to enhancing their scheme.

The first approach is an enhancement of the boosting algorithm. Boosting is one of the most
important recent developments in classification methodology and, therefore, many variants of
AdaBoost, such as Real AdaBoost, LogitBoost, Gentle AdaBoost and KLBoosting, have been
proposed. The second approach is an enhancement of the features used. Based on the original
Haar-like features, (a), Viola and Jones extended the feature set as shown in the figure below.

Figure 2 Viola and Jones extended feature set


Patterns (b), (c) and (d), at different sizes, are used to extract features. Lienhart et al. introduced an
efficient scheme for calculating 45° rotated features, and Mita and Kaneko introduced a new
scheme that makes Haar-like features more discriminative. Although Haar-like features perform
well at extracting texture, and the cascade architecture and integral image representation make
them computationally efficient, the approach is still not always feasible on mobile products.


T. Ojala et al. proposed a new rotation-invariant and computationally lighter feature set. It should
be noted that the basic LBP features have performed very well in various applications, including
texture classification and segmentation, image retrieval and surface inspection. In our system, we
use the Viola-Jones face detection algorithm to extract only the face region of the captured image
and then pass this detected face to the face recognition algorithm; to do this, the face detection
and recognition algorithms have to be integrated.

3.2.2.1 Viola-Jones face detection method

Face detection is a part of face identification: a computer technology that determines the
locations and sizes of human faces in a digital image. It detects faces and ignores anything else,
such as buildings, trees and bodies. Face detection can be regarded as a more general case of face
localization; in face localization, the task is to find the locations and sizes of a known number of
faces (usually one). In face detection, the face is processed and matched bitwise with the underlying
face image in the database. When we look at a person's face, we can get information such as
expression, gender, age and ethnicity. Face detection is useful in many applications such as
surveillance systems, human-machine interaction, biometrics and gender classification. For human
beings face detection is an easy task, but it is quite a tough task for a computer. In
our project we use the Viola-Jones method to detect the face of the student. The face detection
step reduces the computation, i.e. the load on the recognition system.

A digital image is made up of a finite number of elements, each of which has a particular location
and value. These elements are known as pixels (picture elements), and they are what the detector
works on to find the face. Face detection methods can be broadly classified into two
categories: appearance-based approaches and feature-based approaches. In the appearance-based
approach, the whole image is used as input to the face detector. In the feature-based approach, face
detection is based on features extracted from the image, such as skin color or edges, sometimes
combined with knowledge of the face geometry. The appearance-based approach used in this
project identifies the face in an image using the Viola-Jones Haar cascade classifier.
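
As a minimal sketch of this appearance-based detection step, the pretrained cascade detector from the Computer Vision Toolbox (the same object used in the appendix code) can be applied to a test image; the file name below is only illustrative:

% Viola-Jones Haar cascade face detection on a still image.
detector = vision.CascadeObjectDetector;        % default model: frontal faces
I = imread('student.jpg');                      % hypothetical test image
bboxes = step(detector, I);                     % one [x y width height] row per face
figure, imshow(I), hold on
for k = 1:size(bboxes, 1)
    rectangle('Position', bboxes(k,:), 'EdgeColor', 'r', 'LineWidth', 2);
end
title('Face Detection');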

3.2.2.2 Viola-Jones algorithm


A face detector has to tell whether an image of arbitrary size contains a human face and,
if so, where it is. One natural framework for considering this problem is that of binary
classification, in which a classifier is constructed to minimize the misclassification risk.
Since no objective distribution can describe the actual prior probability for a given
image to contain a face, the algorithm must minimize both the false negative and false
positive rates in order to achieve acceptable performance.

This task requires an accurate numerical description of what sets human faces apart from
other objects. It turns out that these characteristics can be extracted with a remarkable
committee learning algorithm called AdaBoost, which relies on a committee of weak
classifiers to form a strong one through a voting mechanism. A classifier is weak if, in
general, it cannot meet a predefined classification target in terms of error. An operational
algorithm must also work within a reasonable computational budget. Techniques such as
the integral image and the attentional cascade make the Viola-Jones algorithm highly efficient:
fed with a real-time image sequence generated from a standard webcam, it performs well
on a standard PC. In this project, MATLAB code is used to implement the Viola-Jones
algorithm, which was originally proposed by Paul Viola and Michael Jones.

To study the algorithm in detail, we start with the image features used for the classification
task.

I) Features and Integral Image

The Viola-Jones algorithm uses Haar-like features, that is, a scalar product between the
image and some Haar-like templates. More precisely, let I and P denote an image and a
pattern, both of the same size N x N (see the Haar-like patterns figure below). The feature
associated with pattern P of image I is defined by

∑_{1≤i≤N} ∑_{1≤j≤N} I(i,j) 1_{P(i,j) is white} − ∑_{1≤i≤N} ∑_{1≤j≤N} I(i,j) 1_{P(i,j) is black} ………..(1)

To compensate for the effect of different lighting conditions, all the images should be mean
and variance normalized beforehand. Images with a variance lower than one, having little
information of interest in the first place, are left out of consideration.



Figure 3 Haar-like features



Here as well as below, the background of a template like (b) is painted gray to highlight
the pattern's support. Only those pixels marked in black or white are used when the
corresponding feature is calculated.

Figure 4 Five Haar-like patterns

The size and position of a pattern's support can vary provided its black and white
rectangles have the same dimension, border each other and keep their relative positions.


Thanks to this constraint, the number of features one can draw from an image is
somewhat manageable: a 24X24 image, for instance, has 43200, 27600, 43200, 27600 and 20736
features of category (a), (b), (c), (d) and (e) respectively, hence 162336
features in all.
The derived features are assumed to hold all the information needed to characterize a
face. Since faces are by and large regular by nature, the use of Haar-like patterns seems
justified. There is, however, another crucial element which lets this set of features take
precedence: the integral image, which allows them to be calculated at very low
computational cost. Instead of summing up all the pixels inside a rectangular window
each time, this technique mirrors the use of cumulative distribution functions. The integral
image II of I is defined by

II(n1,n2) = ∑_{1≤i≤n1} ∑_{1≤j≤n2} I(i,j), with II(n1,0) = II(0,n2) = 0 ………..(2)

so that

∑_{N1≤i≤N2} ∑_{N3≤j≤N4} I(i,j) = II(N2,N4) + II(N1−1,N3−1) − II(N1−1,N4) − II(N2,N3−1) ………..(3)

Equation (3) holds for all N1 ≤ N2 and N3 ≤ N4. As a result, computing an image's
rectangular local sum requires at most four elementary operations given its integral image.
Moreover, obtaining the integral image itself can be done in linear time: setting
N1 = N2 and N3 = N4 in equation (3) gives

I(N1,N3) = II(N1,N3) − II(N1,N3−1) − II(N1−1,N3) + II(N1−1,N3−1)…………..(4)
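
A minimal MATLAB sketch of these relations, building the integral image with cumulative sums and evaluating a rectangular sum with the four look-ups of equation (3); the image and the rectangle bounds are illustrative:

% Integral image with a zero row/column prepended so that II(r+1,c+1) = sum over I(1:r,1:c).
I  = double(rand(24));                                   % hypothetical 24x24 grayscale patch
II = padarray(cumsum(cumsum(I, 1), 2), [1 1], 0, 'pre');
% Sum of I over rows N1..N2 and columns N3..N4 from four look-ups (equation (3)).
N1 = 5; N2 = 12; N3 = 7; N4 = 20;                        % example rectangle
rectSum = II(N2+1,N4+1) + II(N1,N3) - II(N1,N4+1) - II(N2+1,N3);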

II) Feature Selection with Adaboost

How to make sense of these features is the focus of AdaBoost. In AdaBoost terminology, a classifier
maps an observation to a label valued in a finite set. For face detection, it takes the form f: ℝ^d ↦
{−1, 1}, where 1 means that there is a face and −1 the contrary, and d is the number of Haar-like
features extracted from an image. Given the probabilistic weights w_i ∈ ℝ_+ assigned to a training
set made up of n observation-label pairs (x_i, y_i), AdaBoost aims to iteratively drive down an
upper bound of the empirical loss


∑_{i=1}^{n} w_i 1_{y_i ≠ f(x_i)}……..…………………………..(5)

Remarkably, the decision rule constructed by AdaBoost remains reasonably simple, so that it is
not prone to overfitting, which means that the empirically learned rule often generalizes well.
Despite its groundbreaking success, it ought to be said that AdaBoost does not learn what a face
should look like all by itself; it is humans, rather than the algorithm, who perform the
labeling and the first round of feature selection, as described in the previous section.

The building block of the Viola-Jones face detector is a decision stump, or a depth-one decision
tree, parameterized by a feature f ∈ {1, …, d}, a threshold t ∈ ℝ and a toggle τ ∈ {−1, 1}. Given an
observation x ∈ ℝ^d, a decision stump h predicts its label using the following rule:

h(x) = (1_{π_f x ≥ t} − 1_{π_f x < t}) τ = (1_{π_f x ≥ t} − 1_{π_f x < t}) 1_{τ=1} + (1_{π_f x < t} − 1_{π_f x ≥ t}) 1_{τ=−1}……..(6)

where π_f x is the feature vector's f-th coordinate. By adjusting individual example weights,
AdaBoost makes more effort to learn harder examples and adds more decision stumps in the
process. Intuitively, in the final voting, a stump h_t with lower empirical loss is rewarded with a
bigger say (a higher α_t), and a T-member committee (vote-based classifier) assigns a label to an
example according to

f_T(·) = sign[ ∑_{t=1}^{T} α_t h_t(·) ] …………………………………….(7)

How should the training examples be weighted? AdaBoost reduces the false positive and false
negative rates simultaneously as more and more stumps are added to the committee. For notational
simplicity, we denote the empirical loss by

∑_{i=1}^{n} w_i^(1) 1_{y_i ∑_{t=1}^{T} α_t h_t(x_i) ≤ 0} =: ℙ(f_T(X) ≠ Y), …………………(8)

where (X, Y) is a random couple distributed according to the probability ℙ defined by the
weights w_i^(1), 1 ≤ i ≤ n, set when the training starts. As the empirical loss goes to zero with T, so
do both the false positive rate ℙ(f_T(X) = 1 | Y = −1) and the false negative rate ℙ(f_T(X) = −1 | Y = 1), owing to

ℙ(f_T(X) ≠ Y) = ℙ(Y = 1) ℙ(f_T(X) = −1 | Y = 1) + ℙ(Y = −1) ℙ(f_T(X) = 1 | Y = −1) …(9)

Thus the detection rate must tend to 1, since


ℙ(f_T(X) = 1 | Y = 1) = 1 − ℙ(f_T(X) = −1 | Y = 1) ………….……………………..(10)

Thus the size T of the trained committee depends on the targeted false positive and false negative
rates. In addition, let us mention that, given n− negative and n+ positive examples in a training
pool, it is customary to give a negative (resp. positive) example an initial weight equal to 0.5/n−
(resp. 0.5/n+) so that AdaBoost does not favor either category at the beginning.
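
A toy MATLAB sketch of a decision stump (equation (6)) and of the weighted committee vote (equation (7)); the stump parameters and the feature vector below are purely illustrative values, not learned ones:

% Illustrative parameters for T = 3 stumps (in practice learned by AdaBoost).
features   = [12 45 7];          % which feature each stump examines
thresholds = [0.3 -1.2 0.8];     % decision thresholds t
toggles    = [1 -1 1];           % toggles (orientation of each stump)
alphas     = [0.9 0.6 0.4];      % voting weights earned during boosting

x = randn(1, 100);               % a hypothetical 100-dimensional feature vector
votes = zeros(1, numel(alphas));
for t = 1:numel(alphas)
    % Decision stump, equation (6): +toggle if the selected feature exceeds the threshold.
    if x(features(t)) >= thresholds(t)
        votes(t) = toggles(t);
    else
        votes(t) = -toggles(t);
    end
end
label = sign(alphas * votes');   % committee vote, equation (7): +1 face, -1 non-face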

III) Attentional Cascade

In theory, AdaBoost can produce a single committee of decision stumps that generalizes well.
However, to achieve that, an enormous negative training set is needed at the outset to gather all
possible negative patterns. In addition, a single committee implies that all the windows inside an
image have to go through the same lengthy decision process. There has to be another, more cost-
efficient way. The prior probability for a face to appear in an image bears little relevance to the
classifier construction presented so far, because it requires both the empirical false negative and
false positive rates to approach zero. However, our own experience tells us that in an image, a
rather limited number of sub-windows deserve more attention than others. This is true even for
face-intensive group photos. Hence the idea of a multi-layer attentional cascade, which embodies a
principle akin to that of Shannon coding: the algorithm should deploy more resources to work on
those windows more likely to contain a face while spending as little effort as possible on the rest.
Each layer in the attentional cascade is expected to meet a training target expressed in false
positive and false negative rates: among the negative examples declared positive by all of its
preceding layers, layer l ought to recognize at least a targeted fraction as negative, and meanwhile
try not to sacrifice its performance on the positives: the detection rate should be maintained
above 1 − βl.

It should be kept in mind that Adaboost by itself does not favor either error rate: it aims to reduce
both simultaneously rather than one at the expense of the other.
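
A sketch of how one candidate window would pass through such a cascade at test time; the per-stage score functions below are toy stand-ins for the boosted stage classifiers, not trained ones:

% Toy attentional cascade: a window is declared a face only if every stage accepts it.
stageScore = {@(w) mean(w(:)) - 0.2, @(w) max(w(:)) - 0.6, @(w) std(w(:)) - 0.1};  % stand-ins
window = rand(24);                       % hypothetical 24x24 candidate window
isFace = true;
for l = 1:numel(stageScore)
    if stageScore{l}(window) < 0         % rejected by stage l
        isFace = false;                  % stop: no effort spent on later stages
        break
    end
end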

IV) Dataset and Experiments

The data sets consist of face and non-face images that were loaded into MATLAB for training, so
that human faces can be detected with high efficiency. The actual cascade training was carried out
on our personal laptop. We first downloaded from the internet many different images without human faces, which were used


for training. For validation, we take the image live (in real time) from the webcam. The training
process only needs to run once; after the code has been run, we can detect any human face at any time.

3.2.3 Existing database

This database holds images of the different students' faces. It is used to train the system to
recognize students' faces using Eigenfaces. In our system this database stores the entire
training image dataset.

3.2.4 Image Preprocessing

The input image from the camera must be preprocessed. In our project, to minimize other effects
we first detect the face and work on the face region only; if no face is found, the recognition
system does not proceed. After the detection step we obtain a 320x240-pixel image, but all the
training images are 250x250 pixels, so the image is resized from 320x240 to 250x250. The other
preprocessing step is converting the RGB image to a grayscale image, because without this
conversion the image cannot be saved in the MATLAB workspace in .mat format.
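
A minimal sketch of these two preprocessing steps; the input and output file names are illustrative:

% Convert the detected face patch to grayscale and resize it to the training size.
img = imread('detected_face.jpg');       % hypothetical crop returned by the detector
if size(img, 3) == 3
    img = rgb2gray(img);                 % grayscale is needed before saving as .mat
end
img = imresize(img, [250 250]);          % match the 250x250 training images
save('testface.mat', 'img');             % store in the MATLAB .mat format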

Eigenfaces

As a general view, this algorithm extracts the relevant information from an image and encodes it as
efficiently as possible. For this purpose, a collection of images of the same person is evaluated
in order to capture the variation. Mathematically, the algorithm calculates the eigenvectors of the
covariance matrix of the set of face images. Each image in the set contributes to the
eigenvectors, and these vectors characterize the variations between the images. When these
eigenvectors are represented as images, we call them eigenfaces. Every face can be represented as
a linear combination of the eigenfaces; however, we can keep only the eigenfaces with the largest
eigenvalues, which makes the method more efficient.

The basic idea of the algorithm is to develop a system that compares not the images themselves
but the feature weights described above. The algorithm can be reduced to the following simple steps:

1. Acquire a database of face images and calculate the eigenface space with all of them; this is
necessary for the subsequent recognitions.
2. When a new image is found, calculate its set of weights.
3. Determine if the image is a face; to do so, check whether it is close enough to the face space.


4. Finally, determine whether the image corresponds to a known face in the database or not.

Figure 5 Eigenfaces of Falmii,Ashu,Fiseha,Amanuel,Yosef and Gutama


The transformation of a face from image space (I) to face space (f) involves just a matrix
multiplication. If the average face image is A and U contains the (previously calculated)
eigenfaces,

f = U * (I - A)

This is done for all the face images in the face database (the database of known faces) and for the
image (the face of the subject) which must be recognized. There are four possible results when
projecting a face into face space:

1. Projected image is a face and is transformed near a face in the face database.

2. Projected image is a face and is not transformed near a face in the face database.

3. Projected image is not a face and is transformed near a face in the face database.

4. Projected image is not a face and is not transformed near a face in the face database.


Since PCA is a many-to-one transform, several vectors in the image space (images) may map to
the same point in face space (the problem is that even non-face images may be transformed near a
known face image's face-space vector). The eigenvectors with larger eigenvalues convey
information about the basic shape and structure of the faces, while the eigenvectors with smaller
eigenvalues tend to capture information that is specific to single faces or small subsets of the
learned faces, and are useful for distinguishing a particular face from any other face.
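
A sketch of this projection and of the two distances that separate the four cases above. U (eigenfaces as columns), the mean face A, the stored training weights W and the preprocessed test image img are assumed to be available from the training step, and the two thresholds are illustrative:

% Project a preprocessed test face into face space and classify it.
x  = double(img(:)) / 255;                  % test image as a column vector
w  = U' * (x - A);                          % weights in face space
xr = U * w + A;                             % reconstruction from the eigenfaces

distFaceSpace = norm(x - xr);               % distance from face space: is it a face at all?
[distNearest, idx] = min(vecnorm(W - w));   % distance to the closest stored student

if distFaceSpace < thrFace && distNearest < thrKnown    % illustrative thresholds
    fprintf('Authorized: matches stored face %d\n', idx);
else
    disp('Unauthorized: face not found in the database');
end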

3.2.5 Feature Extraction and Selection

There are different techniques to extract features and compress images, but in our system we use
principal component analysis (PCA).

Principal Components Analysis (PCA)

The use of eigenfaces is commonly referred to as principal component analysis (PCA). With PCA,
the images must be of the same size, and they are normalized to line up the eyes and mouths of
the subjects within the images. PCA then reduces the dimension of the data using data-compression
basics and reveals the most effective low-dimensional structure of facial patterns. This
low-dimensional structure decomposes the face structure into orthogonal and uncorrelated
components known as eigenfaces. Using this technique, a face image can be represented as a
weighted sum (feature vector) of the eigenfaces, which can be stored in a 1-D array. To avoid
poor performance, the PCA approach requires a full frontal face to be presented each time. The
technique reduces the data required to identify an individual to about 1/1000th of the presented
data. The main principle of PCA is derived from an information theory approach, which breaks
facial images down into small sets of feature images called eigenfaces; these are in turn the
principal components of the original training set of face images. Face images are thus described
by the relevant information extracted from them. This is one of many methods to capture the
variation in a collection of training face images and use this information to encode and compare
individuals.
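
A compact sketch of this training step using the usual small-covariance trick (the same idea as the eigen.m code in the appendix). X is assumed to be an nPixels-by-M matrix whose columns are the vectorized 250x250 training faces, and k is the number of eigenfaces kept:

% Eigenfaces by PCA with the M-by-M surrogate covariance matrix.
A  = mean(X, 2);                       % mean face (nPixels-by-1)
Xc = X - A;                            % centred training faces
[V, D] = eig(Xc' * Xc);                % eigenvectors of the small M-by-M matrix
U  = Xc * V;                           % back to image space: columns are eigenfaces
U  = U ./ vecnorm(U);                  % normalize each eigenface to unit length
[~, order] = sort(diag(D), 'descend');
U  = U(:, order(1:k));                 % keep the k eigenfaces with largest eigenvalues
W  = U' * Xc;                          % weight vectors of the training faces (k-by-M)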


3.2.6 Training

Our system needs to be trained to recognize authorized and unauthorized students' faces in
different poses and facial expressions, so a training procedure is required. In this project we use
Eigenfaces to train the system.

3.2.7 Face recognition

The face recognition system passes through three main phases during a face recognition process:
the face library formation phase, the training phase, and the recognition and learning phase. In the
face library formation phase, the acquisition and preprocessing of the face images that are going
to be added to the face library are performed. Face images are stored in a face library in the
system. The database is initially empty; in order to start the face recognition process, this initially
empty face library has to be filled with face images. To perform image size conversions and
enhancements on face images, there is a "pre-processing" module, which automatically converts
every face image from 320x240 to 250x250. In the training phase, after adding face images to the
initially empty face library, the system is ready to form the training set and the eigenfaces. The
face images that are going to be in the training set are chosen from the entire face library. In the
recognition and learning phase, after choosing a training set and constructing the weight vectors
of the face library members, the system is ready to perform the recognition process.

3.2.8 Output
After the Eigenfaces algorithm finishes the recognition step, it proceeds to the output.
This output will be either authorized or unauthorized.


Chapter Four

4. Result and Discussion

4.1 Graphical User Interface (GUI)

The graphical user interface was constructed using MATLAB GUIDE (Graphical User
Interface Design Environment). Using the layout tools provided by GUIDE, we designed the
following graphical user interface menu for the face recognition based meal card system.


Figure 6 Menu GUI

Figure 7 Selected face

Figure 8 detected face


Figure 9 Recognized Face

4.2 Test result of face recognition based meal card system

First, run the bdrfacerec MATLAB file; the Menu GUI shown in Figure 6 will appear. Then
choose option 1 to select the image captured by the webcam, as shown in Figure 7; the face is then
detected, as displayed in Figure 8. Choose option 2 to record the student's information, such as
name, surname and phone number. By choosing option 3 you can see the ID number of the
recorded student. Then choose option 4 to recognize the face, as shown in Figure 9. After that,
choose 'Q' to quit or 'R' to reset the program.

Finally, the system tests whether the image, i.e. the person captured by the webcam, is authorized
or not. If he or she is recognized, the information stored about that student is displayed; if not, the
system reports that the corresponding face does not exist in the database.


Chapter Five

5. Conclusion and Recommendation

5.1 Conclusion

Face recognition has become a very interesting area in recent years, mainly due to increasing
security demands and its potential commercial and law enforcement applications. In our
university, the meal card system is not fully digitalized or computerized. A manual ID card system
is less accurate than a digital system: any student who holds a manual ID card can enter the
cafeteria even if he or she is unauthorized, because the ID card can be forged. In this regard, this
project presents a face recognition based ID card system. We use MATLAB software and
Eigenfaces to train and test the system, following the system architecture discussed above. The
system uses face recognition for a computerized ID card system. We first collect a sample image
dataset of our own faces and train the system using Eigenfaces. We capture the image of the
student with a webcam at the gate of the cafeteria to test whether that student is authorized to
enter. The test image is then processed by the MATLAB code we wrote, which displays whether
the student may enter the cafeteria or not. This system addresses the security problem the
university currently faces with the existing manual ID card system, and it also reduces the cost the
university spends on printing ID cards.

5.2 Recommendation
In this project we only partially fulfill the face recognition based ID card system; there are many
additions that could improve the system's quality and accuracy. We strongly recommend that this
system be operated in real time by detecting and recognizing the student's face and then feeding
the recognition output to a door open/close or alarm system; this would remove the time delay at
the cafeteria gate. Also, once a student has been recognized and has entered the student cafeteria,
he or she should be restricted from entering again; a counter is therefore needed to prevent
students from entering the cafeteria repeatedly, so that one student cannot enter more than once
per meal time. We also highly recommend, as future work, adding a fingerprint system to this
system for identifying twins.


Reference
[1]. Russell, S. and Norvig, P., “Artificial Intelligence: A Modern Approach”, Second
Edition, Prentice Hall, 2003.

[2]. Bouattour, H., Fogelman Soulie, F., and Viennet, E., “Neural Nets for Human Face
Recognition”, International Joint Conference on Neural Nets for Human Face
Recognition Volume 3, 7-11 June 1992 Page(s):700 - 704 vol.3 Digital Object Identifier
10.1109/IJCNN.1992.227070.

[3]. Moghaddam, B., Nastar, C., and Pentland, A., “A Bayesian Similarity Measure for
Deformable Image Matching”, Image and Vision Computing. Vol. 19, Issue 5, May 2001, pp.
235-244.

[4]. W. Zhao, R. Chellappa, P. J. Phillips, and A. Rosenfeld, “Face recognition: a literature


survey,” Technical Report CAR-TR-948, Center for Automation Research, University of
Maryland (2002).

[5]. A. K. Jain, R. P. W. Duin, and J. C. Mao, “Statistical pattern recognition: a review,” IEEE
Trans. Pattern Analysis and Machine Intelligence, vol. 22, no. 1, pp. 4–37, 2000.

[6]. M. H. Yang, D. J. Kriegman, and N. Ahuja, “Detecting face in images: a survey,” IEEE
Trans. Pattern Analysis and Machine Intelligence, vol. 24, pp. 34–58, 2002

[7]. G. Yang and T. S. Huang, “Human face detection in complex background,” Pattern
Recognition Letters, vol. 27, no. 1

[8] T. K. Leung, M. C. Burl, and P. Perona, “Finding faces in cluttered scenes using random
labeled graph matching,” Proc. Fifth IEEE Int'l Conf. Computer Vision, pp. 637-644, 1995.

[9] E. Saber and A.M. Tekalp, “Frontal-view face detection and facial feature extraction using
color, shape and symmetry based cost functions,” Pattern Recognition Letters, vol. 17, no. 8, pp.
669-680, 1998.


Appendix

1 Main code
bdrfacerec.m
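% Main menu script: prints the options below, reads the user's choice and
% dispatches to the detection, database and recognition routines.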

disp(' <<MAIN MENU>> ')

disp(' ')

disp('FACE RECOGNITION BASED MEAL CARD SYSTEM SEMESTER PROJECT')

disp(' ')

disp('.........Select Image....................[1]')

disp('.........New Record......................[2]')

disp('.........Number of ID(s).................[3]')

disp('.........Face Recognition................[4]')

disp('.........Delete Database.................[5]')

disp('.........ID Information..................[6]')

disp('.........Mean Face and EigenFaces........[7]')

disp('********************************************')

disp('Quit.....................................[Q]')
disp('Reset....................................[R]')
disp(' ')

y=input('Your Choice--> ','s');

switch(y)

case '1'

detectFace


if isequal(ans,0);

disp('ACTION CANCELLED');

h = waitbar(0,'PLEASE WAIT...');

for i=1:2450,

waitbar(i/100)
end

close(h)
clear all
close all
bdrfacerec
return
end

try

img=imread([pathname ans]);

if size(img,3) ~= 1

img=rgb2gray(img);

img=imresize(img,[250 250]);

figure,imshow(img)

bdrfacerec

return

end

if size(img,3) ~= 3

disp(' PLEASE SELECT A COLOR IMAGE !!!')


bdrfacerec


return
end
catch
clc

disp('INCORRECT FILE FORMAT')

disp(' ')

disp('Press any key to continue...')

pause

bdrfacerec
return
end
case '2'
checkdata
case'3'
datainfo
case'4'
facerec
case'5'
deldata
case'Q'
clc
clear all

close all

warndlg('THANKS FOR USING ASTU CAFE',' ');

case'q'

clc

clear all

close all


warndlg('THANKS FOR USING ASTU CAFE',' ');

case 'r'

clc

clear all

close all

bdrfacerec

case 'R'

clc

clear all

close all

bdrfacerec

case '6'

clc

ginfo

case '7'

eigen

otherwise

clc

disp('Wrong SELECTION!!')

disp(' ')

disp('Press any key to continue...')

pause


bdrfacerec

return

end

2 code for face detection


detectface.m
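% Captures a frame from the webcam, detects the face with the Viola-Jones
% cascade detector, crops and resizes it to 250x250, and saves it for recognition.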

FDetect = vision.CascadeObjectDetector;

vid=videoinput('winvideo',1,'YUY2_320x240');

set(vid,'ReturnedColorSpace','rgb');

preview(vid)

pause (2)

start(vid);

im=getdata(vid,1);

figure(4),imshow(im)

title ('captured image');

closepreview(vid)

is = imresize(im, [250 250]);

imwrite(im,'6.jpg');

closepreview(vid)

I = imread('6.jpg');

BB = step(FDetect,I);

if numel(BB)==0;

error('Nothing was detected, try again');


close all

clc

clear all

else

figure,imshow(I); hold on

for i=1

rectangle('Position',BB(i,:),'LineWidth',2,'LineStyle','-','EdgeColor','r');

end

title('Face Detection');

J = imcrop(I,([BB(1) BB(2) BB(3) BB(4)]));

figure(4), imshow (J);

is = imresize(J, [250 250]);

imwrite(is,'C:\mathlab\work\face123\1.jpg');

[ans,pathname]=uigetfile( ...
    {'*.jpg';'*.jpeg'}, ...
    'Select an IMAGE');

return
end

3 Code for eigen face detection


eigen.m
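% Loads the stored face database, computes the mean face and the eigenfaces
% via PCA on the covariance matrix, and displays them.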
if(exist('fdata.dat')==2)
try

load('fdata.dat','-mat');

matrice=zeros(size(data{1,1},1),fnumber);


for ii=1:fnumber

matrice(:,ii)=double(data{ii,1});

imsize=[250 250];

nPixels = imsize(1)*imsize(2);

matrice2=double(matrice)/255;

avrgx = mean(matrice2')';

for i=1:fnumber

matrice2(:,i) = matrice2(:,i) - avrgx;

end

imshow(reshape(avrgx, imsize)); title('mean face')

end

cov_mat = matrice2'*matrice2;

[V,D] = eig(cov_mat);

V = matrice2*V*(abs(D))^-0.5;

for ii=1:fnumber

figure,imshow(ScaleImage(reshape(V(:,ii),imsize)));

end

bdrfacerec

catch

disp('Mean face and eigenfaces cannot be shown!!!')

disp('Possible Reasons:')


disp(' ')

disp('1--> Check the size of the new image and stored image(s) if you change the imresize line at bdrfacerec.m')

disp('2--> Database is empty')

disp('3--> There is only one person in your database. Please add at least one more person to see the average of faces')

pause
bdrfacerec
end
else
clc

disp(' CORRESPONDING FACE DATABASE NOT FOUND !!!')

disp(' ')

disp(' Press any key to continue ')

pause

bdrfacerec
end

