FACE RECOGNITION BASED ATTENDANCE SYSTEM USING PYTHON GUI
INFORMATION TECHNOLOGY
By
SATHYABAMA
INSTITUTE OF SCIENCE AND TECHNOLOGY
(DEEMED TO BE UNIVERSITY)
Accredited with Grade “A” by NAAC
JEPPIAAR NAGAR, RAJIV GANDHI SALAI, CHENNAI - 600 119
MAY 2022
DEPARTMENT OF INFORMATION TECHNOLOGY
BONAFIDE CERTIFICATE
This is to certify that this Project Report is the bonafide work of SANJUDHA M (Reg. no: 38120074) and MOKITHA B (Reg. no: 38120051), who carried out the project entitled
"FACE RECOGNITION BASED ATTENDANCE SYSTEM USING PYTHON GUI" under our supervision from OCTOBER 2021 to APRIL 2022.
Internal Guide
Dr. JEBERSON RETNARAJ, M.E., Ph.D.,
Internal Examiner                                   External Examiner
DECLARATION
DATE:
PLACE: CHENNAI                                      SIGNATURE OF THE CANDIDATE
ACKNOWLEDGEMENT
We would like to express our sincere and deep sense of gratitude to our Project
Guide Dr. R. JEBERSON RATNA RAJ, M.E., Ph.D., whose valuable guidance,
suggestions and constant encouragement paved the way for the successful
completion of our project work.
ABSTRACT

Face recognition systems are part of facial image processing applications, and their
significance as a research area has been increasing recently. They use biometric
information of humans and are easier to apply than fingerprint, iris, or signature
recognition, because those biometrics are less suitable for non-collaborative people.
Face recognition systems are commonly applied and preferred for person
identification and security cameras in metropolitan life. These systems can be used
for crime prevention, video surveillance, person verification, and similar security
activities. In this work we describe a face recognition based automated attendance
system using a Python GUI. This technique has many applications in everyday life,
notably in schools and colleges. Scaling of the image is performed in the first,
pre-processing stage to avoid or minimize information loss. HAAR CASCADE and
XGBOOST are the algorithms involved. Overall, we created a Python program that
accepts an image from a database, performs all the conversions necessary for image
identification, and then confirms the image in video or real time through a
user-friendly interface by accessing the camera. The name and time of a successful
match are then recorded.

Face detection and recognition are two of the most demanding computer vision
applications. This area has traditionally been a prominent focus of image analysis
research because of its role as the primary identification strategy for human faces,
and it spans biometrics and pattern recognition; teaching a machine to accomplish it
remains challenging. Face recognition is one of the toughest problems in computer
vision, and detecting and recognizing faces are hot topics in the medical and research
industries. Several software programs and technologies have improved to the point
where even blurry images can be reconstructed and analysed to learn more about a
person. Facial recognition technology is a framework or program that analyses an
image or video footage to recognise people's faces and authenticate their identity.
The face is a one-of-a-kind reflection of a person's personality. Face recognition is a
biometric approach that matches a real-time image with previously stored
photographs of the same individual in a database to identify a person.
ABSTRACT 5
LIST OF FIGURES 5
LIST OF ABBREVIATIONS 9
1 INTRODUCTION 10
1.1 GENERAL 10
1.2 STATEMENT OF THE PROBLEM 11
1.3 SCOPE AND OUTLINE OF THESIS 11
2 LITERATURE SURVEY 12
3 DESIGN OF A FACE RECOGNITION SYSTEM 17
3.1 INPUT PART 17
3.2 FACE DETECTION PART 17
3.3 FACE RECOGNITION PART 24
3.4 OUTPUT PART 27
3.5 CHAPTER SUMMARY AND DISCUSSION 27
5.1 DISCUSSION 44
5.2 CONCLUSION 45
6 FUTURE WORKS 46
7 REFERENCE 52
8 SOURCE CODE 54
LIST OF ABBREVIATIONS

ACRONYM   ABBREVIATION
RGB       Red-Green-Blue
YCbCr     Luminance - Blue Difference Chroma - Red Difference Chroma
HSV       Hue-Saturation-Value
YUV       Luminance - Blue Luminance Difference - Red Luminance Difference
FFNN      Feed Forward Neural Network
SOM       Self Organizing Map
NN        Neural Network
PCA       Principal Component Analysis
SVM       Support Vector Machines
BP        Back Propagation
DCT       Discrete Cosine Transform
RBNN      Radial Basis Neural Network
LDA       Linear Discriminant Analysis
ICA       Independent Component Analysis
HMM       Hidden Markov Model
PTZ       Pan/Tilt/Zoom
LoG       Laplacian of Gaussian
CHAPTER 1
INTRODUCTION
1.1 GENERAL
The face recognition system is a complex image-processing problem in real-world
applications, with complex effects of illumination, occlusion, and imaging conditions
on the live images. It is a combination of face detection and recognition techniques
in image analysis. The detection application is used to find the positions of the faces
in a given image. The recognition algorithm is used to classify given images with
known structured properties, as is common in most computer vision applications.
These images have some known properties, such as the same resolution, the same
facial feature components, and similar eye alignment. These images will be referred
to as "standard images" in the following sections. Recognition applications use
standard images, and detection algorithms detect the faces and extract face images
that include the eyes, eyebrows, nose, and mouth. That makes the algorithm more
complicated than a single detection or recognition algorithm. The first step for a face
recognition system is to acquire an image from a camera. The second step is face
detection from the acquired image. The third step is face recognition, which takes
the face images from the output of the detection part. The final step is the person's
identity, as the result of the recognition part. An illustration of the steps of the face
recognition system is given in Figure 1.1.
Acquiring images to the computer from the camera and the computational medium
(environment) via a frame grabber is the first step in face recognition system
applications. The input image, in the form of digital data, is sent to the face detection
part of the software to extract each face in the image. Many methods are available in
the literature for detecting faces in images [1 - 29]. The available methods can be
classified into two main groups: knowledge-based [1 - 15] and appearance-based
[16 - 29] methods. Briefly, knowledge-based methods are derived from human
knowledge of the features that make up a face. Appearance-based methods are
derived from training and/or learning methods to find faces. The details of these
methods are summarized in the next chapter.
Figure 1.1: Steps of Face Recognition System Applications
After faces are detected, they should be recognized to identify the persons in the
face images. In the literature, most of the methods use images from an available face
library, which is made of standard images [30 - 47]. After faces are detected, standard
images should be created with some method; once the standard images are created,
the faces can be sent to the recognition algorithm. In the literature, recognition
methods can be divided into two groups: 2D and 3D based methods. In 2D methods,
2D images are used as input and some learning/training methods are used to classify
the identity of people [30 - 43]. In 3D methods, three-dimensional face data are used
as input for recognition. Different approaches are used for recognition, i.e.
corresponding point measures, average half face, and 3D geometric measures
[44 - 47]. Details about the methods are explained in the next section.
Methods for face detection and recognition systems can be affected by pose,
presence or absence of structural components, facial expression, occlusion, image
orientation, imaging conditions, and time delay (for recognition). Available
applications developed by researchers can usually handle only one or two of these
effects, and therefore have limited capabilities, focusing on some well-structured
application. A robust face recognition system that works under all conditions with a
wide scope of effects is difficult to develop.
Chapter 2 introduces the face detection, face recognition, and face recognition
system applications that exist in the literature.
Chapter 3 describes the theory of the face recognition system, based on the problem
statement of the thesis.
Chapter 4 summarizes the experiments performed and their results using the face
recognition system.
Chapter 5 discusses and concludes the thesis.
Chapter 6 gives some future works on the thesis topic.
CHAPTER 2
LITERATURE SURVEY
Although face recognition systems have been known for decades, there is still much
active research on the topic. The subject can be divided into three parts:
1. Detection
2. Recognition
3. Detection & Recognition
Face detection is the first step of a face recognition system. The output of detection
can be the location of the face region as a whole, or the location of the face region
with facial features (i.e. eyes, mouth, eyebrows, nose, etc.). Detection methods in the
literature are difficult to classify strictly, because most of the algorithms are
combinations of methods used together to increase accuracy. Broadly, detection can
be classified into two groups: Knowledge-Based Methods and Image-Based Methods.
The methods for detection are given in Figure 1.2.
Facial features are important information for human faces, and standard images can
be generated using this information. In the literature, many detection algorithms
based on facial features are available [1 - 6]. Zhi-fang et al. [1] detect faces and facial
features by extracting skin-like regions in the YCbCr color space, and edges are
detected within the skin-like region. Then, the eyes are found with Principal
Component Analysis (PCA) on the edged region. Finally, the mouth is found based on
geometrical information. Another approach extracts skin-like regions with the
normalized RGB color space, and the face is verified by template matching.
Figure 1.2: Methods for Face Detection
To find the eyes, eyebrows and mouth, color snakes are applied to the verified face
image [2]. Ruan and Yin [3] segment skin regions in the YCbCr color space, and faces
are verified with a Linear Support Vector Machine (SVM). For final verification of the
face, the eyes and mouth are found using the difference between Cb and Cr: in the
eye region the Cb value is greater than the Cr value, and in the mouth region the Cr
value is greater than the Cb value. Another application segments skin-like regions
with a statistical model. The statistical model is built from skin color values in the Cb
and Cr channels of the YCbCr color space. Face candidates are then chosen with
respect to the rectangular ratio of the segmented region, and the candidates are
verified with an eye & mouth map [4]. The RGB color space can also be used to
segment skin-like regions, and the skin-color-like region is extracted as a face
candidate. The candidate is verified by finding facial features. The eyes and mouth
are found based on the isosceles triangle property: two eyes and one mouth create an
isosceles triangle, and the distance between the two eyes and the distance from the
midpoint of the eyes to the mouth are equal. After the eyes and mouth are found, a
Feed Forward Neural Network (FFNN) is used for final verification of the face
candidate [5]. Bebar et al. [6] segment in the YCbCr color space, and the eyes & mouth
are found on the combination of the segmented image and the edged image. For final
verification, horizontal and vertical profiles of the images are used to verify the
positions of the eyes and mouth. All these methods first apply skin segmentation to
eliminate non-face objects in the images and save computational time.
Skin color is one of the most significant features of the human face. Skin color can be
modeled with parameterized or non-parameterized methods. Skin color regions can
be identified in terms of a threshold region, elliptical modeling, statistical modeling
(i.e. Gaussian modeling), or a neural network. Skin color is described in all color
spaces, such as RGB, YCbCr, and HSV. RGB is sensitive to light changes, but YCbCr
and HSV are not, because these two color spaces have separate intensity and color
channels. In the literature, many algorithms based on skin color are available [7 - 13].
Kherchaoui and Houacine [7] modeled skin color using a Gaussian distribution model
with the Cb and Cr channels of the YCbCr color space. The skin-like region is then
chosen as a face candidate with respect to the bounding box ratio of the region, and
candidates are verified with template matching. Another method preprocesses the
given image to remove the background as a first step. This is done by applying edge
detection on the Y component of the YCbCr color space; the closed region is then
filled and taken as the foreground. After that, skin segmentation is done in the YCbCr
color space with conditions. The segmented parts are taken as candidates, and
verification is done by calculating the entropy of the candidate image and using
thresholding to verify the face candidate [8]. Qiang-rong and Hua-lan [9] applied
white balance correction before detecting faces. The color value is important for
segmentation, and while acquiring the image the colors may reflect false color; to
overcome this, white balance correction should be done as a first step.
Then, skin-color-like regions are segmented using an elliptical model in YCbCr. After
the skin regions are found, they are combined with the edged grayscale image.
Finally, the combined regions are verified as faces by checking the bounding box
ratio and the area inside the bounding box. Another application segments skin-like
regions with threshold values in Cb, Cr, normalized r and normalized g. Face
candidates are then chosen with respect to the bounding box ratio, the ratio of the
area inside the bounding box to the bounding box area, and the minimum area of the
region. After the candidates are found, the AdaBoost method is applied to find face
candidates, and verification is done by combining the results from the skin-like
region and AdaBoost [10]. Skin color can also be modeled as an elliptical region in
the Cb and Cr channels of the YCbCr color space; a skin-like region is segmented if
its color value falls inside the elliptic region, and candidate regions are verified using
template matching [11]. Peer et al. [12] detect faces using only skin segmentation in
the YCbCr color space, and the researchers also generate skin color conditions in the
RGB color space. Another approach models skin color with a Self Organizing Map
(SOM) Neural Network (NN): after skin segmentation is applied, each segment is
taken as a candidate and verified by whether it fits into an elliptic region or not [13].
CHAPTER 3
DESIGN OF A FACE RECOGNITION SYSTEM

Research papers on face recognition systems were studied, and the state of current
technology was reviewed and summarized in the previous chapter; the results guide
us in designing a face recognition system for a future humanoid and/or guide/guard
robot. A thorough survey revealed that various methods, and combinations of these
methods, can be applied in the development of a new face recognition system.
Among the many possible approaches, we have decided to use a combination of
knowledge-based methods for the face detection part and a neural network approach
for the face recognition part. The main reasons for this selection are their smooth
applicability and reliability. Our face recognition system approach is given in Figure 4.
3.2 FACE DETECTION PART

Our experiments reveal that skin segmentation, as a first step for face detection,
reduces the computational time needed to search the whole image. When
segmentation is applied, only the segmented regions are searched for whether they
include a face or not.
Figure 3.1: Algorithm of the Face Detection Part
For this reason, skin segmentation is applied as the first step of the detection part.
The RGB color space is used to describe skin-like color [12], and other color spaces
were also examined for skin-like colors, i.e. HSV & YCbCr [54], HSV [55], and
RGB & YCbCr & HSV [56]. However, RGB color space skin segmentation gives the best
results. The results of skin segmentation on different color spaces are given in the
next chapter.

The skin-color-like pixel conditions are given below [12]:

r > 95, g > 40, b > 20,
max(r, g, b) - min(r, g, b) > 15,
|r - g| > 15, r > g, r > b

"r", "g", and "b" are the red, green and blue channel values of a pixel. If these seven
conditions are satisfied, the pixel is said to be skin colored, and a binary image is
created from the satisfied pixels.
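For illustration, a minimal MATLAB sketch of this seven-condition test (a sketch of the rule above rather than the report's own code; rgb is assumed to be an H-by-W-by-3 uint8 image):

% Skin-color-like pixel test using the seven RGB conditions listed above.
function mask = skinmask(rgb)
    c = double(rgb);
    r = c(:,:,1); g = c(:,:,2); b = c(:,:,3);
    mask = (r > 95) & (g > 40) & (b > 20) & ...
           ((max(c, [], 3) - min(c, [], 3)) > 15) & ...
           (abs(r - g) > 15) & (r > g) & (r > b);
end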
The white balance of images differs due to changes in the lighting conditions of the
environment while acquiring the image. This situation creates non-skin objects that
appear to belong to skin objects. Therefore, the white balance of the acquired image
should be corrected before segmenting it. The implemented white balance algorithm
is given below [57]:

Calculate the average value of the red channel (Rav), green channel (Gav), and blue
channel (Bav)
Calculate the average gray value: Grayav = (Rav + Gav + Bav) / 3
Then, KR = Grayav / Rav, KG = Grayav / Gav, and KB = Grayav / Bav
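A matching MATLAB sketch of this gray-world correction follows; the final step of scaling each channel by its K factor is our reading of how the K values are used, not an explicit statement in the report:

% Gray-world white balance: scale each channel toward the common gray average.
function out = whitebalance(rgb)
    c = double(rgb);
    Rav = mean(mean(c(:,:,1))); Gav = mean(mean(c(:,:,2))); Bav = mean(mean(c(:,:,3)));
    Grayav = (Rav + Gav + Bav) / 3;
    out = uint8(cat(3, (Grayav/Rav)*c(:,:,1), ...
                       (Grayav/Gav)*c(:,:,2), ...
                       (Grayav/Bav)*c(:,:,3)));
end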
The white balance algorithm, in brief, makes the image warmer if the image is cold,
and colder if the image is hot. If the image appears blue, it is called cold; if it appears
red or orange, it is called hot. Lighting conditions in the capture area are always
changing, due to changes in sunlight direction, indoor lighting, and other light
reflections. Generally, the captured pictures are hotter than they should be. Figure 6
shows a hotter image taken in the capture area, the skin color segmentation of that
hotter image, and the white balance corrected image.

If the image is not balanced, some parts of the wall are taken as skin color, as in
Figure 6. Under some lighting conditions the acquired image can be colder; the colder
image will then be balanced toward a hotter image.
On the other hand, this process can generate unwanted skin-color-like regions. To get
rid of this problem and create the final skin image, a logical "and" operation is
applied on both the segmented original image and the segmented white balance
corrected image. This operation eliminates changes of color value caused by changes
in lighting conditions. The poor segmentation results on the uncorrected image and
the good results on the corrected image are given in Figure 7. In the uncorrected
image, distinguishing the face part is hard, and the face appears to be part of the
background.
Figure 3.2: Example of an acquired / white-balance-corrected image and skin colour
segmentation: (a) original image (OI), (b) skin segmentation on OI, (c) white balance
correction on OI (WBI), (d) skin segmentation on WBI.
After the "and" operation is applied on the segmented images, some morphological
operations are applied on the final skin image to search for face candidates.
Noise-like small regions, less than 100 pixels square in area, are eliminated. Then, a
morphological closing operation is applied to merge gaps, using a 3-by-3 square
structuring element; applying a dilation operation followed by an erosion operation is
considered a closing operation. After these two morphological operations, face
candidate regions can be determined. To select candidates, each connected region of
1's is labeled, and two conditions are checked on each label. The first condition is the
ratio of the bounding box that covers the label: the ratio of width over height should
lie between 0.3 and 1.5. These limits were determined experimentally; the lower limit
is taken as low as possible so that face parts that include the neck or some part of
the chest are still accepted. The other condition is that the region should contain
some gaps (holes). This property distinguishes the face from other body parts, e.g. a
hand: the segmentation of a hand has no gaps, which makes it different from a face.
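The following sketch combines the two masks with the logical "and" operation and applies the cleanup and candidate rules just described (Image Processing Toolbox functions; skinmask and whitebalance are the helper sketches above, im is the acquired image, and the variable names are ours):

% Final skin image: keep pixels that are skin-like both before and after
% white balance correction, then clean it up and collect face candidates.
bw = skinmask(im) & skinmask(whitebalance(im));
bw = bwareaopen(bw, 100);                          % drop regions under 100 px
bw = imclose(bw, strel('square', 3));              % closing: dilation then erosion
stats = regionprops(bwlabel(bw), 'BoundingBox', 'EulerNumber');
candidates = [];
for k = 1:numel(stats)
    box   = stats(k).BoundingBox;                  % [x y width height]
    ratio = box(3) / box(4);
    if ratio >= 0.3 && ratio <= 1.5 && stats(k).EulerNumber < 1   % holes present
        candidates = [candidates; box];            %#ok<AGROW>
    end
end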
Face candidates are found after white balance correction, skin-like color detection,
morphological operations, and face candidate searching. For final verification of a
candidate and for face image extraction, facial feature extraction is applied. Facial
features are among the most significant features of a face: eyebrows, eyes, mouth,
nose, nose tip, cheeks, etc. If some of these features can be found in the candidate,
the candidate is considered a face. Two eyes and the mouth form an isosceles
triangle, and the distance between the eyes equals the distance from the midpoint of
the eyes to the mouth [5]. However, candidate facial features must first be extracted
from the face candidate image, because the features are difficult to determine
directly. Some filtering operations are applied to extract feature candidates; the steps
are listed below, and a short sketch follows the list:

Laplacian of Gaussian (LoG) filter on the red channel of the candidate
Contrast correction to improve the visibility of the filter result
Average filtering to eliminate small noise
Conversion to a binary image
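One possible MATLAB rendering of these four steps (Image Processing Toolbox; the kernel size, sigma and the Otsu threshold are our choices, not values stated in the report):

% Feature filtering on a cropped face candidate (facecand, uint8 RGB).
red     = double(facecand(:,:,1));                            % red channel
resp    = imfilter(red, fspecial('log', [9 9], 2), 'replicate');  % LoG filter
resp    = mat2gray(resp);                                     % contrast correction
resp    = imfilter(resp, fspecial('average', 3));             % average filter vs. noise
featmap = im2bw(resp, graythresh(resp));                      % binary feature image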
Figure 8 shows that the facial features can then be selected easily: the eyes and the
mouth line can be picked out, and with some further operations this becomes feasible
for a computer as well. After the filtered image is obtained, a labeling operation is
applied to determine which labels can be facial features. Then, the filtered image is
divided into three regions, as illustrated in Figure 9, where R denotes the right region,
L the left region, and D the down (lower) region of the face.
Criteria checks are applied on each label to determine the left and right eyes. The
criteria are listed below:
1. "width" denotes the width of the face candidate image and "height" denotes the
height of the face candidate image
2. The y position of the left/right eye should be less than 0.5 * height
3. The x position of the right eye should be in the region 0.125 * width to 0.405 * width
4. The x position of the left eye should be in the region 0.585 * width to 0.875 * width
5. The area should be greater than 100 pixels square
Among the labels satisfying these criteria, the one whose distance to the centre point
of the image is minimum is chosen on each side; a sketch of this eye selection is
given below. The left and right eyes are mostly found correctly, but sometimes the
bottom eyelid is found falsely. If the left and right eyes are detected, the
mouth-finding step can be applied.
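The following MATLAB sketch illustrates these checks (regionprops from the Image Processing Toolbox; featmap is the binary feature image from the previous sketch, and the variable names are ours):

% Eye selection: labels in the upper half, inside the right/left x bands,
% with area > 100 px; on each side keep the label closest to the image centre.
[h, w]  = size(featmap);
centre  = [w/2, h/2];
stats   = regionprops(bwlabel(featmap), 'Centroid', 'Area');
bestR = []; bestL = []; dR = inf; dL = inf;
for k = 1:numel(stats)
    c = stats(k).Centroid;                                % [x y] of the label
    if c(2) >= 0.5*h || stats(k).Area <= 100, continue; end
    d = norm(c - centre);
    if c(1) >= 0.125*w && c(1) <= 0.405*w && d < dR
        bestR = c; dR = d;                                % right eye candidate
    elseif c(1) >= 0.585*w && c(1) <= 0.875*w && d < dL
        bestL = c; dL = d;                                % left eye candidate
    end
end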
Each label inside the down region is chosen as a mouth candidate, and a candidate
property vector is calculated. The Euclidean distance from the right eye to the mouth
candidate (right-distance) and the Euclidean distance from the left eye to the mouth
candidate (left-distance) are calculated. Also, the Euclidean distance between the two
eyes (eye-distance) and the Euclidean distance from the midpoint of the eyes to the
mouth candidate (center-distance) are calculated. Then, the property vector is created
using these distances.

Property vector:
Label number of the mouth candidate
Absolute difference between left-distance and right-distance (error1)
Absolute difference between eye-distance and center-distance (error2)
Summation of error1 and error2 (error-sum)
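A companion MATLAB sketch of the mouth test, using the distances above together with the 0.25 * eye-distance acceptance condition and the minimum error-sum rule described in the next paragraph (mouthCands is an assumed list of candidate centroids from region D; bestR and bestL come from the eye sketch):

% Mouth selection: compute error1 and error2 for each candidate in the down
% region, keep only those below 0.25*eye-distance, take the minimum error-sum.
eyeDist = norm(bestR - bestL);
midEyes = (bestR + bestL) / 2;
mouth = []; bestErr = inf;
for k = 1:size(mouthCands, 1)
    m = mouthCands(k, :);
    err1 = abs(norm(bestL - m) - norm(bestR - m));
    err2 = abs(eyeDist - norm(midEyes - m));
    if err1 < 0.25*eyeDist && err2 < 0.25*eyeDist && (err1 + err2) < bestErr
        mouth = m; bestErr = err1 + err2;
    end
end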
If error1 and error2 are both smaller than 0.25 * eye-distance, then the candidate is
possibly a mouth. The candidate with the minimum error-sum among the possible
mouths is taken as the mouth. The required facial features (right eye, left eye and
mouth) are thereby found, and the face image covering the two eyes and the mouth
can be extracted. The face cover is created with a rectangle whose corner positions
are:

Right up corner: 0.3 * eye-distance up and left from the right eye label centroid,
Left up corner: 0.3 * eye-distance up and right from the left eye label centroid,
Right down corner: 0.3 * eye-distance left from the right eye label.

After the face cover corner points are calculated, the face image can be extracted.
Facial feature extraction, covering and face image extraction are shown in Figure 10.

Up to this point, the face detection part is complete, and face images are found in the
acquired images. This algorithm was implemented using MATLAB and tested on more
than a hundred images. The algorithm detects not only one face but also more than
one face, and a small amount of face orientation is acceptable. The results are
satisfactory for our purposes.
3.3 FACE RECOGNITION PART
The face image obtained from the detection part should be classified to identify the
person in the database; this is the face recognition part of the face recognition
system. The face recognition part is composed of preprocessing the face image,
vectorizing the image matrix, database generation, and then classification. The
classification is achieved using a Feed Forward Neural Network (FFNN) [39]. The
algorithm of the face recognition part is given below.

Before the face image is classified, it should be preprocessed. The preprocessing
operations are histogram equalization of the grayscale face image, resizing to
30-by-30 pixels, and finally vectorizing the image matrix. Histogram equalization is
used for contrast adjustment; after it is applied, the input face image is similar to the
faces in the database. The input face image has a resolution of about 110-by-130
pixels, which is large for the computation of the classifier, so dimension reduction is
performed by resizing images to 30-by-30 pixels to reduce the computational time of
classification. After resizing, the image matrix is converted to a vector, because the
classifier does not work with two-dimensional input. The input vector to the classifier
is therefore of size 900-by-1.
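A short MATLAB sketch of these preprocessing steps (Image Processing Toolbox functions; faceimg is an assumed variable holding the extracted face image):

% Preprocessing: histogram equalization, resize to 30x30, vectorize to 900x1.
gray  = histeq(rgb2gray(faceimg));   % contrast adjustment
small = imresize(gray, [30 30]);     % dimension reduction
x     = double(small(:));            % 900-by-1 input vector for the classifier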
A neural network is used to classify the given images. A neural network is a
mathematical model inspired by biological neural network systems. A neural network
consists of neurons, weights, inputs and outputs. A simple neuron model is given in
Figure 12. Inside a neuron, a summation (Σ) and an activation function (f) are applied.
'x' denotes an input of the neuron, 'w' denotes the weight of that input, 'I' denotes the
output of the summation operation, and 'y' denotes the output of the neuron, i.e. the
output of the activation function. The equations for I and y are given in Eq. 3.1 and
Eq. 3.2. The network structure may be multilayered (Figure 13).

I = x1*w1 + x2*w2 + ... + xn*wn        (3.1)
y = f(I)                               (3.2)
Figure 3.7: Neuron Model
Figure 3.8: Multilayer Network Structure
Many different types of network structures exist in the literature. In the classifier, a
Feed Forward Neural Network (FFNN) is used. The FFNN is the simplest neural
network structure; Figure 13 is a kind of FFNN. Information flows from input to output
and does not perform any loop or cycle operations. A two-layer FeedForward Neural
Network with sigmoid transfer functions is used for the classification operation. This
type of network structure is generally used for pattern recognition applications. The
network properties of the system are: the input layer has 900 inputs, the hidden layer
has 41 neurons, and the output layer has 26 neurons. The output layer has 26
neurons because the number of people in the database is 26.
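For illustration, a minimal MATLAB sketch of the forward pass of this 900-41-26 network, following Eq. 3.1 and 3.2 (W1, b1, W2, b2 are assumed to hold weights obtained from the Neural Network Toolbox training described in Chapter 4; this is a sketch, not the toolbox code itself):

% Forward pass of the 900-41-26 two-layer sigmoid network.
sigmoid = @(z) 1 ./ (1 + exp(-z));
hidden  = sigmoid(W1 * x + b1);       % 41 hidden neurons, x is the 900x1 face vector
scores  = sigmoid(W2 * hidden + b2);  % 26 outputs, one per person in the database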
3.4 OUTPUT PART
This part is the final step of the face recognition system. The person's name is
determined from the output of the face recognition part. The output vector of the
neural network is used to identify the person's name: the row number with the
maximum value is matched to the same row number in the database name order.
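As a minimal sketch (names is assumed to hold the database names in order; scores is the network output from the previous sketch):

% The row with the maximum output value gives the order number of the
% person's name in the database list.
[~, idx] = max(scores);
person = names{idx};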
3.5 CHAPTER SUMMARY AND DISCUSSION

The face recognition system has four main steps: input, detection, recognition, and
output. The input part performs image acquisition, which converts the live captured
image to digital image data. The detection part is composed of white balance
correction of the acquired image, skin-like region segmentation, facial feature
extraction, and face image extraction. White balance correction is an important step
to eliminate color changes of the acquired image caused by changing illumination
conditions, and skin-like region segmentation performance can be improved by
applying white balance correction before segmenting. Skin-color-like region
segmentation decreases the search time for possible face regions, since only the
segmented regions are considered as regions that may contain a face. Facial feature
extraction is important for extracting the face image, which becomes the standard
face image. The LoG filter gives the best results for extracting facial features
compared with black-and-white conversion. The facial features are found using the
property that the two eyes and the mouth form an isosceles triangle.
This face recognition system algorithm performs fast and accurate person name
identification. The performance of skin segmentation is improved with white balance
correction, and facial feature extraction performance is improved with the LoG filter
compared with Lin's implementation [5]. The accuracy of classification is improved
by decreasing the gradient value of the performance measure.
CHAPTER 4
4.1 SYSTEM HARDWARE
The system has three main hardware parts: a computer, a frame grabber, and a
camera. The computer is the brain of the system; it processes the acquired image,
analyzes it and determines the person's name. The computer used in the tests is a
typical PC with the following specifications:
Intel Core 2 Duo, 3.0 GHz
3.0 GB RAM
On-board graphics card
A Sony EVI-D100P camera is used in the face recognition system (Figure 15). The
camera has a high quality CCD sensor with remote Pan/Tilt/Zoom (PTZ) operations. It
has 10x optical and 4x digital zoom, i.e. 40x total zoom capability. Optically, it has a
focal length range from 3.1 mm wide angle to 31 mm tele angle and a minimum
aperture of 1.8 to 2.9. The resolution of the CCD sensor is 752x582 pixels. The
Pan/Tilt capacity is ±100° for pan and ±25° for tilt operations. The camera has RS232
serial communication; camera settings and PTZ operations can be performed over
the serial link. Camera settings include shutter time, aperture value, white balance
selection, etc. The camera video output signals are S-Video and composite; the
composite video signal is used.
Figure 4.1: Sony EVI-D100P
Figure 4.2: PXC200A Frame Grabber
4.2 SYSTEM SOFTWARE
The algorithm of the system is implemented in MATLAB R2011a. MATLAB is a product
of MathWorks Co. and can perform algorithm development, data visualization, data
analysis, and numeric computation, like traditional programming languages such as
C. Signal processing, image processing, controller design, mathematical computation,
etc. can be implemented easily with MATLAB, which includes many toolboxes that
make algorithm development more powerful. The Image Acquisition Toolbox, Image
Processing Toolbox, and Neural Network Toolbox are used while generating the
algorithm of the face recognition system.
4.3 FACE DETECTION
The first implementation of the system addressed the detection of faces in the
acquired image; face detection starts with skin-like region segmentation. Many
methods were tried in order to select the segmentation algorithm that works best in
our image acquisition area. Skin-like segmentation based on the RGB [12],
HSV & YCbCr [54], HSV [55], and RGB & YCbCr & HSV [56] color channels was tested
on acquired images, and the best results were obtained with the RGB color space;
RGB & YCbCr & HSV did not perform well on our acquired images. The results of the
skin-like segmentation are given in Figures 17-19.
Figure 4.5: Skin Segmentation on the Original Image with H-Cb-Cr Combinations
Although RGB gives the best result, the color of the wall inside the laboratory can
look like skin color due to the white balance value of the camera. Unwanted skin-like
color regions can affect detection and distort the face shape. This color problem can
be eliminated by white balance correction of the acquired image. The implementation
of white balance correction is given in Figure 20. The wardrobe color is white in
reality (Figure 20), but its color in the acquired image (left image) is cream, and the
wall color also looks like skin color, which affects the segmentation results. Figure 21
shows the results of segmentation on the acquired image and on the white balance
corrected image. The results show that white balance correction should be applied
after the image is acquired.
Figure 4.6: An Image Without (Left) and With (Right) White Balance Correction
Figure 4.8: Test Image 1 (Left) & Vertical (Right-Top) - Horizontal (Right-Bottom) Profiles
Figure 4.9: Test Image 2 (Left) & Vertical (Right-Top) - Horizontal (Right-Bottom) Profiles
Figure 4.10: Test Image 3 (Left) & Vertical (Right-Top) - Horizontal (Right-Bottom) Profiles
Figure 4.11: Test Image 4 (Left) & Vertical (Right-Top) - Vertical (Right-Bottom) Profile
Figure 4.12: Test Image 5 (Left) & Black-White Conversion on Test Image 5 (Right)
Figure 4.13: Test Image 6 (Left) & Black-White Conversion on Test Image 6 (Right)
The right eye is isolated, but the left eye is merged with the eyebrow in Figure 26;
also, the mouth is nearly erased. On the other hand, the right eye and mouth are
merged with the background in Figure 27, which makes it difficult to find the eyes
and mouth. This merging problem is due to the lighting conditions while acquiring
the image. Since black-white conversion is sensitive to lighting conditions and
changes, this approach cannot be applied easily, so approaches that are less
sensitive to light should be preferred.
Edge detection methods are applicable to this problem because they are nearly
insensitive to light changes. The Sobel edge detector is used to extract features.
Figure 28 shows the results of edge detection on test images 5 and 6.
The results show that edge detection is not as sensitive to lighting conditions as
black-white conversion. On both images, the eyes and mouth can be picked out by a
human observer, but the mouth is difficult to extract automatically and the eye parts
also vary in shape. Edge detection also has high responses.

Instead of plain edge detection, the Laplacian of Gaussian (LoG) filter can be used.
The LoG filter has lower responses than edge detection and makes useful
enhancements on facial features. Figure 29 shows the results of the LoG filter on test
images 5 and 6.

The results of the LoG filter are better than the previous three trials: the mouth is
more significant than in the others, and the eyes can be selected more accurately.
4.4 FACE RECOGNITION
Figure 4.16: 26 Participants' Face Images From the Database
The names of the participants are: Natasha, Thor, Bruce, Tony, Steve, Clint, Lizzy,
Nick, Clint, Tom, Peter, Harry, Nick, Ari, Jessy, Eric, Damon, Klaus, Stefan, Enzo,
Mathew, Loki, Tessa, Mike, Dustin, William, Loki, Hari, Henry, Robert, Christopher,
Scarlet. While generating the database, four different sample images are stored for
each person, because the acquisition of the face image may differ each time the
image is taken. For example, shaved and unshaved faces are included among my
samples (left image in the Figure). Also, different captured face frames are added
(right image in the Figure).
Figure 4.17: Different Face Samples
A training matrix of size 900x104 is generated to train the neural network that will be
used to classify a given new face image. The Pattern Recognition Tool in the Neural
Network Toolbox is used to generate and train the neural network. The generated
network consists of 2 layers with sigmoid transfer functions: a hidden layer and an
output layer. The output layer has 26 neurons, corresponding to the number of
persons in the face database. The hidden layer neuron number is chosen with the
approach applied in [22]: guess an initial neuron number using Eq. 4.1, train with this
neuron number and record the training time, then increase the neuron number until
the training time remains constant. The point at which it starts to remain constant
gives the number of hidden neurons in the network.
n = log2(N)        (4.1)
N is the number of inputs and n is the number of neurons in the hidden layer. The
initial guess is 9.81 for 900 inputs, so the search starts at 10. Figure 32 shows the
graph of the number of neurons vs. training time. The graph shows that at 41 neurons
the training time is 4 s, and beyond this neuron number the training time remains
constant at about 5 seconds. Therefore, 41 neurons are used in our system. The
databasing and network training code is given in App. 2.
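For reference, the Eq. 4.1 starting point can be computed directly (a trivial sketch; the final value of 41 neurons comes from the timing experiment above, not from this formula):

% Initial guess for the hidden layer size from Eq. 4.1: log2(900) is about 9.81,
% so the neuron-count search starts from 10.
n0 = log2(900);
nStart = ceil(n0);   % 10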
The performance of classification is also affected by a training parameter, the
gradient value. The gradient value is related to the error between the target value and
the output value. Tests show that a minimum gradient value results in more accurate
classification, while a larger gradient value causes false selections in the database.
The comparison of the gradient values 1e-6 and 1e-17 for the same input image
(Test Image 2) is given in Table 1.
Figure 4.18: Number of Neurons vs. Training Time
Table 1 shows that the minimum gradient value should be used in the system to get
more accurate results.
Table 4.1: Gradient Value Effects on Classification
Gradient 1e-6    Gradient 1e-17
 0.0076           0.0000
 0.0041           0.0000
98.0832          99.9925
 0.6032           0.0049
 0.0040           0.0000
 0.0989           0.0000
 0.0000           0.0000
 0.0766           0.0000
 0.0000           0.0000
 0.0799           0.0000
 0.6118           0.0000
 0.0015           0.0000
 0.0001           0.0000
 0.0968           0.0000
 0.0530           0.0000
 0.0021           0.0000
 0.0009           0.0000
 0.2094           0.0000
 0.1310           0.0000
12.4808           0.1210
 0.2932           0.0000
 0.0164           0.0000
 0.7166           0.0000
 0.0006           0.0000
 0.0033           0.0000
 6.9192           0.0004
Finally, the face detection and recognition parts are merged to implement the face
recognition system. The system can also handle more than one face in the acquired
image. The code is written in the MATLAB environment and given in App. 3; results
are shown in Figures 33 - 43.
Five skin-like regions are extracted and labeled. Labels 4 & 5 are taken as face
candidates. The facial feature extraction operation is performed, the eyes and mouth
are found, and the faces are validated. The validated faces are classified; the output
results are that the first face belongs to Ayça and the second face belongs to Cahit.
The output of the system gives correct results. The experiments and results show
that the algorithm can find multiple faces in the acquired image and classify them
correctly. This result is important, since some methods can only detect one face in a
given image.
Four skin-like regions are found in acquired image 2. The third label is considered as
a face candidate. After the LoG filter is applied, the eyelashes appear clearly and the
algorithm considers the eyelashes to be eyes. The extracted face image is classified
correctly. These results show that the algorithm can recognize the face even when
the eyes are closed, with a network output of 99.0498.
Next, it is tested whether the system can detect and recognize correctly when the
person stands far from the camera. Five skin-like regions are labeled, and label three
is taken as a face candidate. Taking the height of the face candidate as 1.28 times the
width eliminates the person's neck, so that only the face part is considered as the
face candidate. The LoG filter performs well in extracting the facial feature regions.
Due to the low resolution, the eye and eyebrow are merged on both the left and right
sides, but the centroids of the merged regions do not affect the results. The
low-resolution extracted face image is classified correctly. This result shows that a
face image can be recognized correctly even when the resolution is not large.
Many experiments were performed on live acquired images. The face detection and
recognition parts performed well. Skin segmentation decreases both the
computational time and the search area for faces. The experiments show that the
connection between the detection and recognition parts is established well. The
network can classify correctly when one or both eyes are closed, when the eyebrows
are moved, and when the face is smiling or showing teeth. Also, the number of people
in the database can be increased, and the faces will most probably still be classified
correctly.
Some limitations of the designed system were determined after the experiments:
Skin-color-like clothes: Such clothes pass the skin segmentation stage, but they
affect the face candidate results. Most experiments with skin-colored clothes show
that the face and cloth segments are merged; the software does not recognize them
as two separate body regions. Thus, the facial feature extraction operation may
conclude that the candidate is not a face, or a wrong face image may be extracted.
Presence of objects on the face: Glasses on the face may affect the facial feature
extraction results. If the glasses have reflections, the LoG filter cannot perform well
on the eye region. Also, sunglasses cover the eye region, and the eyes cannot be
detected with the proposed algorithm.
Contrast of the face candidate image: The contrast value of the image affects the
filter results. A low-contrast image has fewer edges, so the facial components may
not be visible after filtering. Therefore, the face image cannot be extracted even if the
candidate contains a face.
System working range: The system can detect and recognize the person if the person
stands in the range of 50 cm to 180 cm. This range was obtained with a 3.1 mm focal
length and 768x576 pixel resolution; the working range can be changed by the
camera properties.
Skin color range: RGB skin color segmentation works well over the range from light
skin tones to dark skin tones.
Head pose: Frontal head poses can be detected and extracted correctly. Small
amounts of roll and yaw rotation are acceptable for the system.
CHAPTER 5
CONCLUSION AND DISCUSSION
5.1 DISCUSSION
The logical "and" operation is performed on both segmented images to reduce the
color problem.
Face candidates are chosen from the segments, and the facial feature extraction
operation is performed to verify each face candidate and extract the face image. The
LoG filter is applied to make the facial components appear clearly. Before settling on
the LoG filter, black-white conversion and edge detection were also tried. Black-white
conversion is sensitive to light changes, and some components can be eliminated due
to shadowing on the face. Edge detection, on the other hand, is not sensitive to light
changes, but the shapes are not as clear as with the LoG filter. The facial components
can be selected clearly after the LoG filter is applied. The two eyes and the mouth are
found using the property that two eyes and a mouth form an isosceles triangle, and
the face image is extracted based on the positions of the facial components. The
components are currently found by estimation, but they could be located more
accurately.
With the extraction of the facial components, the face detection part is completed and
the face image is ready to be classified. Before being sent to the classifier, histogram
equalization, resizing and vectorizing operations are performed. Histogram
equalization is applied to eliminate light changes on the image and to equalize its
contrast. Finally, the face image is ready to be classified. Classification is performed
by a two-layer Feed Forward Neural Network, with the sigmoid function as the
activation function in the neurons. This type of network structure and activation
function is good at pattern recognition problems, and face recognition is a kind of
pattern recognition problem. In the hidden layer, 41 neurons are used, and the
training time vs. number of neurons graph is given in Figure 32; the best performance
is achieved with 41 neurons. The output layer neuron number is determined by the
number of people in the database.

The output of the network gives the classification result. The row with the maximum
value gives the order number of the names in the database. The classification result
is affected by the performance value of the network: a smaller gradient value gives a
more accurate result. The gradient value used while training the system is 1e-17. The
performance with the lower gradient value is given in Table 1.
The algorithm was developed in the MATLAB environment, and it is capable of
detecting multiple faces in the acquired image. Person naming is performed when the
maximum value of the output row is greater than 90%; if it is lower, the output is
"Person is not recognized". The system has acceptable performance in recognizing
faces within the intended limits; a small sketch of this decision rule is given below.
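A sketch of this naming rule (names and scores as in the earlier sketches; the 90% threshold is the one stated above):

% Accept the match only when the maximum network output exceeds 90%.
[val, idx] = max(scores);
if val > 0.90
    person = names{idx};
else
    person = 'Person is not recognized';
end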
5.2 CONCLUSION
The RGB color space is used to specify skin color values, and segmentation
decreases the search time for face images. The facial components of face candidates
are made apparent by applying the LoG filter, which shows good performance in
extracting facial components under different illumination conditions.
An FFNN is used for classification, since face recognition is a kind of pattern
recognition problem, and the classification result is accurate. Classification is also
flexible and correct when the extracted face image is slightly rotated, has closed
eyes, or shows a small smile. The proposed algorithm is capable of detecting multiple
faces, and the performance of the system is acceptably good.
The proposed system can be affected by pose, presence or absence of structural
components, facial expression, imaging conditions, and strong illumination.
CHAPTER 6
FUTURE WORKS
The face recognition system has been designed, implemented and tested, and the
test results show that the system has acceptable performance. On the other hand,
the system has some future works for improvement and for implementation on the
humanoid robot project.
The future works are stated here in the order of the algorithm. The first concerns the
camera device, to improve imaging conditions. The Sony camera used in the thesis
can communicate with the computer; camera configurations can be changed via the
computer, and these changes can improve imaging conditions. Exposure values can
be fixed so that all frames are captured with the same brightness value / similar
histogram. Also, fixing the white balance value can improve the performance of skin
segmentation and help eliminate non-skin objects; the white balance correction step
might then no longer be needed. For later implementations, the pan, tilt and zoom
actuators can be controlled; in the tests of the thesis the camera was controlled via
the remote controller.
Skin color modelling can then be improved. In this thesis work, a set of conditions is
used to describe skin color. A broader skin color model can be achieved with
statistical modelling: dark skin, and skin under shadow or bright light, can be
modelled, so that more skin regions are segmented. Skin color segmentation is an
important step in the system's algorithm; if more correct skin regions are segmented,
more faces can be detected. Also, instead of RGB, YCbCr skin color modelling with a
statistical model can be used, since the Cb and Cr channel values are not sensitive to
light changes.
Another improvement concerns the facial feature extraction section of the face
detection part, whose computational load is the biggest compared with the other
sections of the algorithm; the computations of facial feature extraction can be
reduced. Another point is to calculate the eye orientation, which can be used to
re-orient the face candidate and extract a horizontally oriented face image. This
operation would decrease the working limitations of the detection part.
CHAPTER 7
REFERENCES
[1] Bernie DiDario, Michael Dobson, and Douglas Ahlers, "Attendance Tracking
System," United States Patent Application Publication, Pub. No. US 2006/0035205 A1,
February 16, 2006.
[3] S. Kherchaoui and A. Houacine, "Face Detection Based on a Model of Skin Color
with Constraints and Template Matching," in Proc. 2010 International Conference on
Machine and Web Intelligence, Algiers, Algeria, pp. 469 - 472.
[5] Xiang-Yu Li and Zhen-Xian Lin, "Face Recognition Using a Quick PCA Algorithm
and the HOG Algorithm," in Proc. Euro-China Conference on Intelligent Data Analysis
and Applications, Springer, Cham.
[6] Cahit Gurel and Abdulkadir Erden, "Design of a Face Recognition System," in Proc.
15th International Conference on Machine Design and Production, Pamukkale,
Denizli, Turkey, June 19 - 22, 2012.
[7] Himanshu Tiwari, "Live Attendance System Using Face Recognition," International
Journal for Research in Applied Science and Engineering Technology (IJRASET),
Vol. 6, Issue IV, ISSN: 2321-9653.
[8] Kaneez Laila Bhatti, Laraib Mughal, Faheem Yar Khuhawar, and Sheeraz Ahmed
Memon, "Smart Attendance Management System Using Face Recognition," MUET,
Jamshoro, Pakistan.
[9] N. Rekha and M. Z. Kurian, "Face Identification in Real Time Based on HOG,"
International Journal of Advanced Research in Computer Engineering and Technology
(IJARCET), 2014.
[10] Lyons, M. J., Akamatsu, S., Kamachi, M., and Gyoba, J. (1998). In Proc. 3rd IEEE
International Conference on Automatic Face and Gesture Recognition, pp. 200 - 205.
[11] Tan, K. Y., and See, A. K. B. (2005). "Facial Recognition Technology: A
Comparison and Implementation of Various Methods," ICGST International Journal on
Graphics, Vision, and Image Processing, Vol. 5, Issue 9, pp. 11 - 19.
[12] Pang, Y., Zhang, L., Li, M., and Liu, Z. (2004). "A Face Recognition Method Based
on Gabor-LDA," Lecture Notes in Computer Science, Vol. 3331, Springer-Verlag,
Germany.
[13] A. Amine, S. Ghouzali, and M. Rziza (2006). "Face Detection Using Skin Color
Information in Still Color Images," in Proc. 2nd International Symposium on
Communications, Control, and Signal Processing.
[14] Y. N. Chae, J. N. Chung, and H. S. Yang (2008). "Efficient Face Detection Using
Color Filtering," in Proc. 19th IEEE International Conference on Pattern Recognition,
pp. 1 - 4.
International Conference on Pattern Recognition, Vol. 1, page 1056–1059.
CHAPTER 8
SOURCE CODE
% singleton*.
% existing singleton*. Starting from the left, property value pairs are
% applied to the GUI before main_OpeningFcn gets called. An
% *See GUI Options on GUIDE's Tools menu. Choose "GUI allows only one
gui_Singleton = 1;
'gui_LayoutFcn', [] , ...
'gui_Callback', []);
gui_State.gui_Callback = str2func(varargin{1});
end
if nargout
else
gui_mainfcn(gui_State, varargin{:});
end
handles.output = hObject;
guidata(hObject, handles);
% uiwait(handles.figure1);
% --- Outputs from this function are returned to the command line.
global co
clc
warning off
st = version;
if str2double(st(1)) < 8
beep
pause(3)
delete(hx)
close(gcf)
return
end
co = get(hObject,'color');
addpath(pwd,'database','codes')
if size(ls('database'),2) == 2
% delete('features.mat');
% delete('info.mat');
end
varargout{1} = handles.output;
% hObject handle to edit1 (see GCBO)
% handles empty - handles not created until after all CreateFcns called
set(hObject,'BackgroundColor','white');
end
p = get(handles.edit1,'UserData');
if strcmp(p,'123') == 1
delete(hObject);
delete(handles.pushbutton2)
delete(handles.edit1);
delete(handles.text2);
delete(handles.text3);
delete(handles.text1);
delete(handles.text4);
set(handles.AD_NW_IMAGE,'enable','on')
set(handles.DE_LETE,'enable','on')
set(handles.TRAIN_ING,'enable','on')
set(handles.STA_RT,'enable','on')
set(handles.RESET_ALL,'enable','on')
set(handles.EXI_T,'enable','on')
set(handles.HE_LP,'enable','on')
set(handles.DATA_BASE,'enable','on')
set(handles.text5,'visible','on')
else
end
close gcf
% --------------------------------------------------------------------
% --------------------------------------------------------------------
% --------------------------------------------------------------------
% --------------------------------------------------------------------
% --------------------------------------------------------------------
% --------------------------------------------------------------------
% --------------------------------------------------------------------
% hObject handle to EXI_T (see GCBO)
% --------------------------------------------------------------------
% --------------------------------------------------------------------
winopen('help.pdf')
% --------------------------------------------------------------------
% handles structure with handles and user data (see GUIDATA)
if exist('features.mat','file') == 0
return
end
ff = dir('database');
if length(ff) == 2
for k = 1:100
waitbar(k/100)
pause(0.03)
end
close(h)
return
end
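% Viola-Jones face detector object (vision.CascadeObjectDetector, Computer Vision Toolbox)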
fd = vision.CascadeObjectDetector();
if f == 0
return
end
p1 = fullfile(p,f);
im = imread(p1);
vo = insertObjectAnnotation(im,'rectangle',bbox,'FACE');
r = size(bbox,1);
if isempty(bbox)
axes(handles.axes1)
imshow(vo);
uiwait
cla(handles.axes1); reset(handles.axes1);
set(handles.axes1,'box','on','xtick',[],'ytick',[])
return
elseif r > 1
axes(handles.axes1)
imshow(vo);
uiwait
cla(handles.axes1); reset(handles.axes1);
set(handles.axes1,'box','on','xtick',[],'ytick',[])
return
end
axes(handles.axes1)
image(vo);
set(handles.axes1,'xtick',[],'ytick',[],'box','on')
if strcmp(bx,'MANUALLY') == 1
while 1
fhx = figure(2);
set(fhx,'menubar','none','numbertitle','off','name','PREVIEW')
imc = imcrop(im);
if size(bbox1,1) ~= 1
uiwait
else
close gcf
break
end
close gcf
end
image(imc)
text(20,20,'\bfUr Precaptured image.','fontsize',12,'color','y','fontname','comic sans ms')
set(handles.axes1,'xtick',[],'ytick',[],'box','on')
end
if strcmp(bx,'AUTO') == 1
fhx = figure(2);
set(fhx,'menubar','none','numbertitle','off','name','PREVIEW')
imshow(imc)
if strcmpi(qx,'proceed') == 1
close gcf
axes(handles.axes1)
image(imc)
set(handles.axes1,'xtick',[],'ytick',[],'box','on')
elseif strcmpi(qx,'manual') == 1
while 1
fhx = figure(2);
set(fhx,'menubar','none','numbertitle','off','name','PREVIEW')
imc = imcrop(im);
if size(bbox1,1) ~= 1
uiwait
else
break
end
close gcf
end
close gcf
axes(handles.axes1)
image(imc)
set(handles.axes1,'xtick',[],'ytick',[],'box','on')
else
end
end
immxx = getimage(handles.axes1);
zz = findsimilar(immxx);
zz = strtrim(zz);
q1= ehd(immxx,0.1);
q2 = ehd(fxz,0.1);
q3 = pdist([q1 ; q2]);
disp(q3)
if q3 < 0.5
axes(handles.axes2)
image(fxz)
set(handles.axes1,'xtick',[],'ytick',[],'box','on')
set(handles.axes2,'xtick',[],'ytick',[],'box','on')
xs = load('info.mat');
xs1 = xs.z2;
for k = 1:length(xs1)
st = xs1{k};
stx = st{1};
if strcmp(stx,zz) == 1
str = st{2};
break
end
end
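% Append the recognized person's name, date, time and attendance flag to the sheet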
fid = fopen('attendence_sheet.txt','a');
c = clock;
if c(4) > 12
else
end
fclose(fid);
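% Attempt to send a byte ('A') over serial port COM22; any failure is handled in the catch block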
try
s = serial('com22');
fopen(s);
fwrite(s,'A');
pause(1)
fclose(s);
clear s
catch
uiwait
delete(s)
clear s
end
else
cla(handles.axes1)
reset(handles.axes1)
cla(handles.axes2)
reset(handles.axes2)
set(handles.axes1,'box','on','xcolor','w','ycolor','w','xtick',[],'ytick',[],'color',[0.0431
0.5176 0.7804],'linewidth',1.5);
set(handles.axes2,'box','on','xcolor','w','ycolor','w','xtick',[],'ytick',[],'color',[0.0431
0.5176 0.7804],'linewidth',1.5)
end
% --------------------------------------------------------------------
global co
if exist('features.mat','file') == 0
msgbox('FIRST TRAIN YOUR DATABASE','INFO...!!!','MODAL')
return
end
ff = dir('database');
if length(ff) == 2
for k = 1:100
waitbar(k/100)
pause(0.03)
end
close(h)
return
end
if isfield(handles,'vdx')
vid = handles.vdx;
stoppreview(vid)
delete(vid)
handles = rmfield(handles,'vdx');
guidata(hObject,handles)
cla(handles.axes1)
reset(handles.axes1)
set(handles.axes1,'box','on','xcolor','w','ycolor','w','xtick',[],'ytick',[],'color',[0.0431
0.5176 0.7804],'linewidth',1.5)
cla(handles.axes2)
reset(handles.axes2)
set(handles.axes2,'box','on','xcolor','w','ycolor','w','xtick',[],'ytick',[],'color',[0.0431
0.5176 0.7804],'linewidth',1.5)
end
info = imaqhwinfo('winvideo');
did = info.DeviceIDs;
if isempty(did)
return
end
fd = vision.CascadeObjectDetector();
did = cell2mat(did);
for k = 1:length(did)
devinfo = imaqhwinfo('winvideo',k);
na(1,k) = {devinfo.DeviceName};
sr(1,k) = {devinfo.SupportedFormats};
end
DEVICE','liststring',na,'ListSize', [125, 75],'SelectionMode','single');
if b == 0
return
end
if b ~= 0
frmt = sr{1,a};
if b1 == 0
return
end
end
frmt = frmt{a1};
l = find(frmt == '_');
l = find(res == 'x');
axes(handles.axes1)
vr = [res1 res2];
nbands = get(vid,'NumberofBands');
preview(vid,h2im);
handles.vdx = vid;
guidata(hObject,handles)
pause(1)
delete(tx)
kx = 0;
while 1
im = getframe(handles.axes1);
im = im.cdata;
vo = insertObjectAnnotation(im,'rectangle',bbox,'FACE');
axes(handles.axes2)
imshow(vo)
if size(bbox,1) > 1
uiwait
stoppreview(vid)
delete(vid)
handles = rmfield(handles,'vdx');
guidata(hObject,handles)
cla(handles.axes1)
reset(handles.axes1)
set(handles.axes1,'box','on','xtick',[],'ytick',[],'xcolor',[1 1 1],'ycolor',[1 1
1],'color',co,'linewidth',1.5)
cla(handles.axes2)
reset(handles.axes2)
set(handles.axes2,'box','on','xtick',[],'ytick',[],'xcolor',[1 1 1],'ycolor',[1 1
1],'color',co,'linewidth',1.5)
return
end
kx = kx + 1;
break
end
end
axes(handles.axes1)
image(imx)
set(handles.axes1,'xtick',[],'ytick',[],'box','on')
immxx = imx;
zz = findsimilar(immxx);
zz = strtrim(zz);
fxz = imread(['database/' zz]);
q1= ehd(immxx,0.1);
q2 = ehd(fxz,0.1);
q3 = pdist([q1 ; q2]);
disp(q3)
if q3 < 0.5
axes(handles.axes2)
image(fxz)
set(handles.axes1,'xtick',[],'ytick',[],'box','on')
set(handles.axes2,'xtick',[],'ytick',[],'box','on')
xs = load('info.mat');
xs1 = xs.z2;
for k = 1:length(xs1)
st = xs1{k};
stx = st{1};
if strcmp(stx,zz) == 1
str = st{2};
break
end
end
fid = fopen('attendence_sheet.txt','a');
fprintf(fid,'%s %s %s %s\r\n\n', 'Name','Date','Time',
'Attendence');
c = clock;
if c(4) > 12
else
end
fclose(fid);
try
s = serial('com22');
fopen(s);
fwrite(s,'A');
pause(1)
fclose(s);
clear s
catch
uiwait
delete(s)
clear s
end
else
cla(handles.axes1)
reset(handles.axes1)
cla(handles.axes2)
reset(handles.axes2)
set(handles.axes1,'box','on','xcolor','w','ycolor','w','xtick',[],'ytick',[],'color',[0.0431
0.5176 0.7804],'linewidth',1.5);
set(handles.axes2,'box','on','xcolor','w','ycolor','w','xtick',[],'ytick',[],'color',[0.0431
0.5176 0.7804],'linewidth',1.5)
end
% --------------------------------------------------------------------
flist = dir('database');
if length(flist) == 2
msgbox('NOTHING TO DELETE','INFO','modal');
return
end
cd('database')
if f == 0
cd ..
return
end
p1 = fullfile(p,f);
delete(p1)
flist = dir(pwd);
if length(flist) == 2
cd ..
return
end
for k = 3:length(flist)
z = flist(k).name;
nlist(k-2) = str2double(z);
end
nlist = sort(nlist);
for k = 1:length(nlist)
if k ~= nlist(k)
p = nlist(k);
waitbar((k-2)/length(flist),h,sprintf('RENAMED %s to %s',[num2str(p)
'.jpg'],[num2str(k) '.jpg']))
end
pause(.5)
end
close(h)
cd ..
% --------------------------------------------------------------------
flist = dir('database');
if length(flist) == 2
msgbox('NOTHING TO DELETE','INFO','modal');
return
end
for k = 3:length(flist)
na1(k-2,1) = {flist(k).name};
end
if b == 0
return
end
cd ('database')
for k = 1:length(a)
str = na1{k};
delete(str)
end
cd ..
flist = dir('database');
if length(flist) == 2
return
end
cd('database')
flist = dir(pwd);
for k = 3:length(flist)
z = flist(k).name;
nlist(k-2) = str2double(z);
end
nlist = sort(nlist);
for k = 1:length(nlist)
if k ~= nlist(k)
p = nlist(k);
waitbar((k-2)/length(flist),h,sprintf('RENAMED %s to %s',[num2str(p)
'.jpg'],[num2str(k) '.jpg']))
end
pause(.5)
end
close(h)
cd ..
% --------------------------------------------------------------------
if f == 0
return
end
p1 = fullfile(p,f);
im = imread(p1);
fd = vision.CascadeObjectDetector();
vo = insertObjectAnnotation(im,'rectangle',bbox,'FACE');
r = size(bbox,1);
if isempty(bbox)
fhx = figure(2);
set(fhx,'menubar','none','numbertitle','off','name','PREVIEW')
imshow(vo);
uiwait
delete(fhx)
return
elseif r > 1
fhx = figure(2);
set(fhx,'menubar','none','numbertitle','off','name','PREVIEW')
imshow(vo);
IMAGE'},'WARNING...!!!','warn','modal')
uiwait
delete(fhx)
return
end
if strcmp(bx,'MANUALLY') == 1
while 1
fhx = figure(2);
set(fhx,'menubar','none','numbertitle','off','name','PREVIEW')
imc = imcrop(im);
if size(bbox1,1) ~= 1
uiwait
else
break
end
close gcf
end
close gcf
imc = imresize(imc,[300 300]);
cd ('database');
l = length(dir(pwd));
n = [int2str(l-1) '.jpg'];
imwrite(imc,n);
cd ..
while 1
qq = inputdlg('WHAT IS UR NAME?','FILL');
if isempty(qq)
uiwait
else
break
end
end
qq = qq{1};
if exist('info.mat','file') == 2
load ('info.mat')
r = size(z2,1);
z2{r+1,1} = {n , qq};
save('info.mat','z2')
else
z2{1,1} = {n,qq};
save('info.mat','z2')
end
end
if strcmp(bx,'AUTO') == 1
fhx = figure(2);
set(fhx,'menubar','none','numbertitle','off','name','PREVIEW')
imshow(imc)
if strcmpi(qx,'proceed') == 1
cd ('database');
l = length(dir(pwd));
n = [int2str(l-1) '.jpg'];
imwrite(imc,n);
cd ..
while 1
qq = inputdlg('WHAT IS UR NAME?','FILL');
if isempty(qq)
uiwait
else
break
end
end
qq = qq{1};
if exist('info.mat','file') == 2
load ('info.mat')
r = size(z2,1);
z2{r+1,1} = {n , qq};
save('info.mat','z2')
else
z2{1,1} = {n,qq};
save('info.mat','z2')
end
close gcf
elseif strcmpi(qx,'manual') == 1
while 1
fhx = figure(2);
set(fhx,'menubar','none','numbertitle','off','name','PREVIEW')
imc = imcrop(im);
% Re-detect on the crop to confirm a single face (reconstructed call).
bbox1 = step(fd,imc);
if size(bbox1,1) ~= 1
msgbox({'YOU HAVENT CROPED A FACE';'CROP AGAIN'},'BAD ACTION','warn','modal')
uiwait
else
break
end
close gcf
end
close gcf
cd ('database');
l = length(dir(pwd));
n = [int2str(l-1) '.jpg'];
imwrite(imc,n);
cd ..
while 1
qq = inputdlg('WHAT IS UR NAME?','FILL');
if isempty(qq)
uiwait
else
break
end
end
qq = qq{1};
if exist('info.mat','file') == 2
load ('info.mat')
r = size(z2,1);
z2{r+1,1} = {n , qq};
save('info.mat','z2')
else
z2{1,1} = {n,qq};
save('info.mat','z2')
end
else
return
end
end
% --------------------------------------------------------------------
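% ADD-FROM-WEBCAM callback: enumerate the 'winvideo' cameras, let the user
% choose a device and format, preview the stream in axes1, grab a frame,
% detect and crop the face, save it into the database folder and record the
% person's name in info.mat before releasing the camera.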
global co
if isfield(handles,'vdx')
vid = handles.vdx;
stoppreview(vid)
delete(vid)
handles = rmfield(handles,'vdx');
guidata(hObject,handles)
cla(handles.axes1)
reset(handles.axes1)
set(handles.axes1,'box','on','xcolor','w','ycolor','w','xtick',[],'ytick',[],'color',[0.0431 0.5176 0.7804],'linewidth',1.5)
cla(handles.axes2)
reset(handles.axes2)
set(handles.axes2,'box','on','xcolor','w','ycolor','w','xtick',[],'ytick',[],'color',[0.0431 0.5176 0.7804],'linewidth',1.5)
end
fd = vision.CascadeObjectDetector();
info = imaqhwinfo('winvideo');
did = info.DeviceIDs;
if isempty(did)
return
end
did = cell2mat(did);
for k = 1:length(did)
devinfo = imaqhwinfo('winvideo',k);
na(1,k) = {devinfo.DeviceName};
sr(1,k) = {devinfo.SupportedFormats};
end
% The camera and format selection dialogs were lost in extraction; a
% minimal reconstruction with listdlg is shown.
[a,b] = listdlg('ListString',na,'SelectionMode','single','PromptString','SELECT A CAMERA');
if b == 0
return
end
frmt = sr{1,a};
[a1,b1] = listdlg('ListString',frmt,'SelectionMode','single','PromptString','SELECT A FORMAT');
if b1 == 0
return
end
frmt = frmt{a1};
% Parse the resolution out of a format string such as 'YUY2_640x480'
% (several lines here were lost in extraction and are reconstructed).
l = find(frmt == '_');
res = frmt(l(end)+1:end);
l = find(res == 'x');
res1 = str2double(res(1:l-1));
res2 = str2double(res(l+1:end));
axes(handles.axes1)
tx = text(0.5,0.5,'INITIALIZING CAMERA...','units','normalized','horizontalalignment','center','color','w'); % reconstructed temporary label
vr = [res1 res2];
% Create the video object and an image handle sized to the camera
% resolution so that preview() renders inside axes1.
vid = videoinput('winvideo',did(a),frmt);
nbands = get(vid,'NumberOfBands');
h2im = image(zeros(vr(2),vr(1),nbands),'Parent',handles.axes1);
preview(vid,h2im);
handles.vdx = vid;
guidata(hObject,handles)
pause(1)
delete(tx)
kx = 0;
while 1
im = getframe(handles.axes1);
im = im.cdata;
% Detect faces in the captured preview frame (reconstructed call).
bbox = step(fd,im);
vo = insertObjectAnnotation(im,'rectangle',bbox,'FACE');
axes(handles.axes2)
imshow(vo)
if size(bbox,1) > 1
msgbox({'MORE THAN ONE FACE DETECTED';'ONLY A SINGLE FACE WILL BE ACCEPTED'},'WARNING.....!!!','warn','modal') % message reconstructed; the start of this call was lost in extraction
uiwait
stoppreview(vid)
delete(vid)
handles = rmfield(handles,'vdx');
guidata(hObject,handles)
cla(handles.axes1)
reset(handles.axes1)
set(handles.axes1,'box','on','xtick',[],'ytick',[],'xcolor',[1 1 1],'ycolor',[1 1 1],'color',co,'linewidth',1.5)
cla(handles.axes2)
reset(handles.axes2)
set(handles.axes2,'box','on','xtick',[],'ytick',[],'xcolor',[1 1 1],'ycolor',[1 1 1],'color',co,'linewidth',1.5)
return
end
if ~isempty(bbox)
% Crop the detected face (reconstructed; this step was lost in extraction).
imx = imcrop(im,bbox);
imx = imresize(imx,[300 300]);
kx = kx + 1;
break
end
end
fhx = figure(2);
set(fhx,'menubar','none','numbertitle','off','name','PREVIEW')
imshow(imx)
cd ('database');
l = length(dir(pwd));
n = [int2str(l-1) '.jpg'];
imwrite(imx,n);
cd ..
while 1
qq = inputdlg('WHAT IS UR NAME?','FILL');
if isempty(qq)
uiwait
else
break
end
end
qq = qq{1};
if exist('info.mat','file') == 2
load ('info.mat')
r = size(z2,1);
z2{r+1,1} = {n , qq};
save('info.mat','z2')
else
z2{1,1} = {n,qq};
save('info.mat','z2')
end
close gcf
stoppreview(vid)
delete(vid)
handles = rmfield(handles,'vdx');
guidata(hObject,handles)
cla(handles.axes1)
reset(handles.axes1)
set(handles.axes1,'box','on','xtick',[],'ytick',[],'xcolor',[1 1 1],'ycolor',[1 1 1],'color',co,'linewidth',1.5)
cla(handles.axes2)
reset(handles.axes2)
set(handles.axes2,'box','on','xtick',[],'ytick',[],'xcolor',[1 1 1],'ycolor',[1 1 1],'color',co,'linewidth',1.5)
% --- Executes on key press with focus on edit1 and none of its controls.
% eventdata structure with the following fields (see UICONTROL)
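% Password handling: the typed password accumulates in the edit box's
% UserData while asterisks are shown on screen; Backspace (ASCII 8) removes
% the last character and Enter (ASCII 13) checks the value against the
% hard-coded password '123' before enabling the menu items. The branch that
% appends ordinary typed characters to pass appears to have been lost in
% extraction.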
pass = get(handles.edit1,'UserData');
v = double(get(handles.figure1,'CurrentCharacter'));
if v == 8
pass = pass(1:end-1);
set(handles.edit1,'string',pass)
elseif v == 13
p = get(handles.edit1,'UserData');
if strcmp(p,'123') == true
delete(hObject);
delete(handles.pushbutton2)
delete(handles.pushbutton1);
delete(handles.text2);
delete(handles.text3);
delete(handles.text1);
delete(handles.text4);
msgbox('ADD DATABASE IMAGES AND PERFORM TRAINING BEFORE STARTING','HELP....!!!','help','modal') % message reconstructed; the start of this call was lost in extraction
set(handles.AD_NW_IMAGE,'enable','on')
set(handles.DE_LETE,'enable','on')
set(handles.TRAIN_ING,'enable','on')
set(handles.STA_RT,'enable','on')
set(handles.RESET_ALL,'enable','on')
set(handles.EXI_T,'enable','on')
set(handles.HE_LP,'enable','on')
set(handles.DATA_BASE,'enable','on')
set(handles.text5,'visible','on')
return
else
beep
uiwait;
set(handles.edit1,'string','')
return
end
else
uiwait;
set(handles.edit1,'string','')
return
end
set(handles.edit1,'UserData',pass)
set(handles.edit1,'String',char('*'*sign(pass)))
% --------------------------------------------------------------------
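% DATABASE VIEW callback: show every image in the database folder in one
% figure; the subplot grid is obtained by prime-factorising the (padded)
% image count and splitting the factors into two groups of roughly equal
% product.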
f = dir('database');
if length(f) == 2
return
end
l = length(f)-2;
while 1
a = factor(l);
if length(a) >= 4
break
end
l = l+1;
end
d = a(1: ceil(length(a)/2));
d = prod(d);
d1 = a(ceil(length(a)/2)+1 : end);
d1 = prod(d1);
zx = sort([d d1]);
figure('menubar','none','numbertitle','off','name','Images of Database','color',[0.0431 0.5176 0.7804],'position',[300 200 600 500])
for k = 3:length(f)
% Prefix the folder name so imread finds the file without changing directory.
im = imread(fullfile('database',f(k).name));
subplot(zx(1),zx(2),k-2)
imshow(im)
title(f(k).name,'fontsize',10,'color','w')
end
% --------------------------------------------------------------------
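% TRAINING callback: if the database folder is not empty, build (or ask
% before rebuilding) the feature file used at recognition time by calling
% builddatabase.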
% handles structure with handles and user data (see GUIDATA)
ff = dir('database');
if length(ff) == 2
% The waitbar creation was lost in extraction; reconstructed so that the
% handle h used below exists.
h = waitbar(0,'DATABASE IS EMPTY - NOTHING TO TRAIN');
for k = 1:100
waitbar(k/100,h)
pause(0.03)
end
close(h)
return
end
if exist('features.mat','file') == 2
% The retrain-confirmation dialog was lost in extraction; a minimal
% reconstruction is shown.
bx = questdlg('FEATURES ALREADY EXIST. TRAIN AGAIN?','CONFIRM','YES','NO','NO');
if strcmpi(bx,'yes') == 1
builddatabase
return
else
return
end
else
builddatabase
return
end
% --------------------------------------------------------------------
close gcf
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% end %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% --------------------------------------------------------------------
% hObject handle to ATTENDENCE (see GCBO)
if exist('attendence_sheet.txt','file') == 2
winopen('attendence_sheet.txt')
else
end
% --------------------------------------------------------------------
if exist('attendence_sheet.txt','file') == 2
delete('attendence_sheet.txt')
msgbox('ATTENDENCE DELETED','INFO...!!!','MODAL')
else
end
% --------------------------------------------------------------------
function Untitled_1_Callback(hObject, eventdata, handles)
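% RESET ALL callback: deletes the attendance sheet, the trained features,
% the name map (info.mat) and every image in the database folder, then
% clears both preview axes.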
% The confirmation prompt was lost in extraction; a minimal reconstruction:
x = questdlg('ARE YOU SURE YOU WANT TO RESET EVERYTHING?','RESET ALL','YES','NO','NO');
if strcmpi(x,'yes') == 1
delete('attendence_sheet.txt')
delete('features.mat')
delete('info.mat')
cd ([pwd, '\database'])
f = dir(pwd);
for k = 3:length(f) % start at 3 to skip the '.' and '..' directory entries
delete(f(k).name)
end
cd ..
cla(handles.axes1);
reset(handles.axes1);
set(handles.axes1,'box','on','xcolor','w','ycolor','w','xtick',[],'ytick',[],'color',[0.0431 0.5176 0.7804],'linewidth',1.5)
cla(handles.axes2);
reset(handles.axes2);
set(handles.axes2,'box','on','xcolor','w','ycolor','w','xtick',[],'ytick',[],'color',[0.0431 0.5176 0.7804],'linewidth',1.5)
set(handles.text5,'string','')
beep
msgbox('All Reset','Info','modal')
end
% --------------------------------------------------------------------
cla(handles.axes1);
reset(handles.axes1);
set(handles.axes1,'box','on','xcolor','w','ycolor','w','xtick',[],'ytick',[],'color',[0.0431 0.5176 0.7804],'linewidth',1.5)
cla(handles.axes2);
reset(handles.axes2);
set(handles.axes2,'box','on','xcolor','w','ycolor','w','xtick',[],'ytick',[],'color',[0.0431 0.5176 0.7804],'linewidth',1.5)
set(handles.text5,'string','')
% --------------------------------------------------------------------
% --------------------------------------------------------------------
% --------------------------------------------------------------------