
Table of Contents

Chapter One
    Introduction
    Abstract
    Objective
    1.1 Face Detection
    1.2 Related Works
    1.3 Color Segmentation

Chapter Two
    2.1 Image Segmentation
    2.2 Image Matching
        2.2.1 Building Eigenimage Database
        2.2.2 Test Image Selection
    2.3 Correlation
    2.4 Distance Compensation
        2.4.1 Filtering Non-facial Test Images using Statistical Information
    Results

Chapter Three
    3.1 Gender Recognition
    3.2 Roberts Cross Edge Detection Algorithm
    3.3 Challenges in Face Detection
    3.4 Source Code

Chapter Four
    4.1 Training in OpenCV
    4.2 Training the Classifiers
    4.3 .train() Function
    4.4 Conclusion

References

Chapter One

Introduction

In recent years, face recognition has attracted much attention, and research in the area has expanded rapidly, driven not only by engineers but also by neuroscientists, since it has many potential applications in computer vision, communication, and automatic access control systems. Face detection in particular is an important part of face recognition, as it is the first step of any automatic face recognition pipeline. However, face detection is not straightforward, because face images exhibit many variations in appearance, such as pose (frontal or non-frontal), occlusion, image orientation, illumination conditions, and facial expression.
Many methods have been proposed to handle each of the variations listed above. For example, template-matching methods [1], [2] are used for face localization and detection by computing the correlation of an input image with a standard face pattern. Feature-invariant approaches are used to detect features [3], [4] such as the eyes, mouth, ears, and nose. Appearance-based methods are used for face detection with eigenfaces [5], [6], [7], neural networks [8], [9], and information-theoretic approaches [10], [11]. Nevertheless, implementing all of these methods together remains a great challenge. Fortunately, the images used in this project have some degree of uniformity, so the detection algorithm can be simpler: first, all the faces are vertical and frontal; second, they are captured under almost the same illumination conditions. This project presents a face detection technique based mainly on color segmentation, image segmentation, and template matching.

ABSTRACT
Face recognition from video is a popular topic in biometrics research. Face recognition technology has attracted wide attention because of its application value and market potential, for instance in real-time video surveillance systems. It is widely recognized that face recognition plays an important role in surveillance systems because it does not require the cooperation of the subject. We design a real-time face recognition system based on an IP camera and an image-set algorithm, implemented with OpenCV and Python. The system consists of three modules: a detection module, a training module, and a recognition module. This work provides efficient and robust algorithms for real-time face detection and recognition against complex backgrounds. The algorithms are implemented using a series of signal processing techniques, including Local Binary Patterns (LBP) and Haar cascade features. The LBPH algorithm is used to extract facial features for fast face identification, and an eye detection step reduces the false face detection rate. The detected facial image is then processed to correct its orientation and scale, which maintains high facial recognition accuracy. Large databases of face and non-face images are used to train and validate the face detection and facial recognition algorithms. The algorithms achieve an overall true positive rate of 98.8% for face detection and 99.2% for correct facial recognition.

OBJECTIVE:
Whenever a new system is implemented, it is developed to remove the shortcomings of the existing system, and a computerized mechanism has clear advantages over a manual one. The existing system is manual and therefore takes a long time to carry out the work. The proposed system is a web application that maintains a centralized repository of all related information, so that a user can easily access the software and detect what he or she is looking for.

1.1 Face Detection


Face detection is application software that deals with the human face. It collects an image from the user so that the eyes, nose, mouth, and the whole face of a person in the image can be detected. Developing software that uses face detection and recognition offers several advantages in the field of authentication. Face detection is an easy and simple task for humans, but not for computers; it has been regarded as one of the most complex and challenging problems in computer vision because of the large intra-class variations caused by changes in facial appearance, lighting, and expression.

Face detection is the process of identifying one or more human faces in images or videos. It plays an important part in many biometric, security, and surveillance systems, as well as in image and video indexing systems. Face detection can be regarded as a specific case of object-class detection, in which the task is to find the locations and sizes of all objects in an image that belong to a given class. The project titled 'Face Detection and Recognition System' manages the front-end and back-end systems for finding or detecting a particular region of the human face. This software helps people who are looking for a more advanced image processing system: with it, they can easily detect faces in an image and also recognize a face after it has been saved. Face-detection algorithms focus on the detection of frontal human faces. This is analogous to image matching, in which the image of a person is matched bit by bit against the images stored in a database; any change to the facial features stored in the database will invalidate the matching process.

A reliable face-detection approach can be built on a genetic algorithm and the eigenface technique. First, the possible human eye regions are detected by testing all the valley regions in the gray-level image. Then the genetic algorithm is used to generate all the possible face regions, which include the eyebrows, the iris, the nostrils, and the mouth corners. Each possible face candidate is normalized to reduce both the lighting effect, which is caused by uneven illumination, and the shirring effect, which is due to head movement. The fitness value of each candidate is measured by its projection onto the eigenfaces. After a number of iterations, all the face candidates with a high fitness value are selected for further verification. At this stage, the face symmetry is measured and the existence of the different facial features is verified for each face candidate.

Face detection is also gaining the interest of marketers: a webcam can be integrated into a television and detect any face that walks by, and the system can then estimate the race, gender, and age range of the face.

1.2 Related Works


Facial appearance can be recognized using well-established methods, as the past two to three decades of research and development show. In developing an automated system that recognizes facial appearance and detects facial features, the appropriateness of a method depends on the application. Most approaches, however, are based either on the appearance of the face or on geometric computation. Using a geometric feature-based approach with template matching, the work in [6] reports up to 90 percent accurate recognition. Haar-like features are evaluated by an image processing stage that produces an immense number of candidate features [7], and the AdaBoost boosting method [8] is used to train boosted classifiers and fast rejection cascades from only simple rectangular Haar-like features [7]. Because the full set of candidate features is extremely large, the range of features must be reduced to a few essential ones, which are selected by the boosting technique AdaBoost [8]. Pentland and Matthew Turk [9] applied Principal Component Analysis (PCA). Reduction of the dimensionality of images can also be done with HOG. Facial features, or landmarks, are crucial for recognition and detection; the facial landmarks are obtained with HOG while removing unnecessary noise [10]. This procedure ensures that a test sample need not be used in its entire image form. Navneet Dalal and Bill Triggs [4] stated in their analysis that Histograms of Oriented Gradients (HOG) descriptors significantly outperform existing feature sets for human detection, and that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization are all important for good results.

1.3 Color segmentation

Detection of skin color in color images is a very popular and useful technique for face detection. Many techniques [12], [13] have been reported for locating skin-color regions in an input image. While the input color image is typically in RGB format, these techniques usually work with color components in another color space, such as HSV or YIQ. This is because the RGB components are sensitive to lighting conditions, so face detection may fail if the lighting changes. Among the many color spaces, this project uses YCbCr components, since the conversion is available as an existing Matlab function and therefore saves computation time. In the YCbCr color space, the luminance information is contained in the Y component, while the chrominance information is in Cb and Cr; the luminance information can therefore easily be separated out. The RGB components were converted to YCbCr components using the following formulas:
Y = 0.299R + 0.587G + 0.114B
Cb = -0.169R - 0.332G + 0.500B
Cr = 0.500R - 0.419G - 0.081B
In the skin color detection process, each pixel was classified as skin or non-skin based on its color components. The detection window for skin color was determined from the mean and standard deviation of the Cb and Cr components, obtained from 164 training faces in 7 input images. The Cb and Cr components of the 164 faces are plotted in the color space in Fig. 1, and their histogram distributions are shown in Fig. 2.

Fig. 1 Skin pixel in YCbCr color space.

Fig. 2 (a) Histogram distribution of Cb. (b) Histogram distribution of Cr.

The color segmentation has been applied to a training image and its result is shown in Fig.
3. Some non-skin objects are inevitably observed in the result as their colors fall into the skin color
space.

Fig. 3 Color segmentation result of a training image.
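As an illustration, the skin-color classification described above can be sketched in Python with NumPy. This is a minimal sketch, not the project's Matlab routine: the conversion matrix follows the formulas given earlier, while the detection window (mean plus or minus factor times the standard deviation of Cb and Cr) and the factor value of 2 are assumptions, since the exact training statistics are not reproduced here.

import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an HxWx3 RGB image (float) to Y, Cb, Cr planes using the report's formulas."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.169 * r - 0.332 * g + 0.500 * b
    cr =  0.500 * r - 0.419 * g - 0.081 * b
    return y, cb, cr

def skin_mask(rgb, mean_cb, std_cb, mean_cr, std_cr, factor=2.0):
    """Mark a pixel as skin if both Cb and Cr fall inside mean +/- factor*std.

    The statistics would come from the 164 training face crops; the factor of 2
    is a placeholder value for illustration."""
    _, cb, cr = rgb_to_ycbcr(rgb.astype(float))
    in_cb = np.abs(cb - mean_cb) < factor * std_cb
    in_cr = np.abs(cr - mean_cr) < factor * std_cr
    return in_cb & in_cr   # boolean skin map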



Chapter Two
2.1 Image segmentation
The next step is to separate the image blobs in the color filtered binary image into
individual regions. The process consists of three steps. The first step is to fill up black isolated
holes and to remove white isolated regions which are smaller than the minimum face area in
training images. The threshold (170 pixels) is set conservatively. The filtered image followed by
initial erosion only leaves the white regions with reasonable areas as illustrated in Fig. 4.

Fig. 4. Small regions eliminated image.

Secondly, to separate merged regions into individual faces, the Roberts cross edge detection algorithm is used. The Roberts cross operator performs a simple, quick-to-compute, 2-D spatial gradient measurement on an image. It thus highlights regions of high spatial gradient that often correspond to edges (Fig. 5). The highlighted regions are converted into black lines and eroded to connect diagonally separated pixels.

Fig.5. Edges detected by the Roberts cross operator.

Finally, the previous images are combined into one binary image, and relatively small black and white areas are removed. The difference between this step and the initial small-area elimination is that edges connected to black areas remain even after filtering, and those edges play an important role as boundaries between face areas after erosion. Fig. 6 shows the final binary image, and the candidate spots that will be compared with the representative face templates in the next step are shown in Fig. 7.

Fig.6. Integrated binary image.

Fig.7. Preliminary face detection with red marks.
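The three steps above can be sketched as follows with NumPy and SciPy. This is a simplified illustration, not the project's Matlab implementation: the 170-pixel area threshold comes from the text, while the erosion size and the Roberts edge threshold are assumed values.

import numpy as np
from scipy import ndimage

def segment_face_blobs(skin_mask, gray, min_face_area=170, erode_size=3, edge_thresh=0.1):
    # Step 1: fill black isolated holes, drop white blobs smaller than a face,
    # then apply an initial erosion.
    filled = ndimage.binary_fill_holes(skin_mask)
    labels, n = ndimage.label(filled)
    sizes = ndimage.sum(filled, labels, index=range(1, n + 1))
    big = np.isin(labels, np.flatnonzero(sizes >= min_face_area) + 1)
    eroded = ndimage.binary_erosion(big, structure=np.ones((erode_size, erode_size)))

    # Step 2: Roberts cross edges on the grayscale image (diagonal difference kernels).
    gx = ndimage.convolve(gray.astype(float), np.array([[1.0, 0.0], [0.0, -1.0]]))
    gy = ndimage.convolve(gray.astype(float), np.array([[0.0, 1.0], [-1.0, 0.0]]))
    edges = np.hypot(gx, gy) > edge_thresh * gray.max()

    # Step 3: knock edge pixels out of the mask so touching faces split apart,
    # erode once more, and label the remaining blobs as face candidates.
    separated = ndimage.binary_erosion(eroded & ~edges,
                                       structure=np.ones((erode_size, erode_size)))
    return ndimage.label(separated)   # (labelled image, number of candidate regions)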



2.2 Image Matching
Eigenimage Generation
A set of eigenimages was generated using 106 face images that were manually cropped from 7 test images and edited in Photoshop so that each face is captured at its exact location in a square region. The cropped images were converted to grayscale, and the eigenimages were then computed from these 106 images. In order to obtain a generalized shape of a face, the 10 eigenimages with the largest energy densities were retained, as shown in Fig. 8. To save computing time, the information in the eigenimages was compacted into a single image, obtained by averaging eigenimages 2 through 10 and excluding eigenimage 1, the highest-energy one. The first eigenimage was excluded because its excessive energy concentration would wash out the details of the face shape that appear in eigenimages 2 to 10. The averaged eigenimage is shown in Fig. 9.

Fig. 8. Eigenimages 1-10.
Fig.9. Average image using eigenimages
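For illustration, the eigenimage computation can be sketched in Python with NumPy as below. This is a generic PCA-by-SVD sketch under the assumptions stated in the comments, not the exact routine used to produce Figs. 8 and 9.

import numpy as np

def build_eigenimages(face_stack, n_keep=10):
    """face_stack: (N, H, W) grayscale face crops, all the same square size."""
    n, h, w = face_stack.shape
    X = face_stack.reshape(n, -1).astype(float)
    X -= X.mean(axis=0)                      # remove the mean face
    # SVD of the data matrix gives the eigenimages as right singular vectors
    _, s, vt = np.linalg.svd(X, full_matrices=False)
    eig = vt[:n_keep].reshape(n_keep, h, w)
    # average eigenimages 2..n_keep, skipping the dominant first one
    template = eig[1:].mean(axis=0)
    return eig, template / np.linalg.norm(template)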

2.2.1 Building Eigenimage Database


To avoid repeatedly magnifying or shrinking an eigenimage to match the size of each test image, a group of pre-scaled eigenimages was stored in a database so that an appropriately sized eigenimage can be loaded directly, without going through an image enlarging or shrinking process. The eigenimages were stored in 20 files, ranging from a 30-pixel-wide square image to a 220-pixel-wide square image in 10-pixel steps. The stored eigenimages were normalized by dividing each image matrix by its 2-norm, so that the size of the eigenimage does not affect the face detection algorithm.
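A possible way to build such a multi-scale template database is sketched below in Python; the 30-220 pixel range and 10-pixel step follow the text, while the resizing routine (skimage.transform.resize) is an assumed choice.

import numpy as np
from skimage.transform import resize

def build_eigenimage_database(template, widths=range(30, 221, 10)):
    """Pre-scale and 2-norm-normalize the averaged eigenimage for each window width."""
    database = {}
    for w in widths:
        scaled = resize(template, (w, w), anti_aliasing=True)
        database[w] = scaled / np.linalg.norm(scaled)   # remove the effect of template size
    return database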
2.2.2 Test Image Selection
After the color-based segmentation process, the skin-colored areas can be separated out as shown in Fig. 6. Given this binary image, a set of small test images must be selected and passed to the image matching algorithm for further processing. The result of image selection based solely on the color information is shown in Fig. 10: a square box was placed on each segment, with the window size quantized to match the size of a face.

Fig. 10. Test Image Selection using Color-Based Image Segmentation.

If Fig. 10 is examined closely, some faces are divided into several pieces, for example a face being separated into its upper part and its neck as seen in Fig. 11 (a). This is caused by the erosion process, which was applied to handle occlusion. To merge these separate areas into one, a box-merge algorithm was used, which simply merges two or more adjacent square boxes into one. Since this splitting happens most often between the face and the neck, the distance threshold was set small in the horizontal direction and large in the vertical direction. The result of merging the two boxes in Fig. 11 (a) is shown in Fig. 11 (b). After applying this algorithm, only one box is placed per face in most cases, as shown in Fig. 12.

(a) before merging process (b) after merging process

Fig. 11. Test Image Selection: Merging of Adjacent Boxes.



Fig. 12. Test Image Selection after Applying Box-Merge Algorithm.
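The box-merge step can be sketched as follows. This is a simplified Python illustration of the idea (merge nearby boxes into one square covering both); the direction-dependent distance thresholds mirror the values in the Matlab listing of Section 3.4.

def merge_adjacent_boxes(boxes, row_gap_thresh=70, col_gap_thresh=25):
    """boxes: list of (row_center, col_center, half_width) tuples.

    Boxes whose centers are closer than the (vertical, horizontal) thresholds are
    merged into one square box covering both; the vertical threshold is larger
    because a face and its neck tend to split vertically."""
    merged = True
    while merged:
        merged = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                (r1, c1, h1), (r2, c2, h2) = boxes[i], boxes[j]
                if abs(r1 - r2) < row_gap_thresh and abs(c1 - c2) < col_gap_thresh:
                    top, bot = min(r1 - h1, r2 - h2), max(r1 + h1, r2 + h2)
                    left, right = min(c1 - h1, c2 - h2), max(c1 + h1, c2 + h2)
                    half = max(bot - top, right - left) // 2
                    boxes[i] = ((top + bot) // 2, (left + right) // 2, half)
                    del boxes[j]
                    merged = True
                    break
            if merged:
                break
    return boxes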

2.3 Correlation
The test images selected by an appropriately sized square window can now be passed to the image matching algorithm. Before image matching, each test image needs to be converted to grayscale and divided by the average brightness of the image, in order to remove the effect of the brightness of the test image from the matching process. The average brightness was defined as the 2-norm of the skin-colored area of the test image. Note that this is not the 2-norm over the total area of the test image, since the value we are looking for is the average brightness of the skin-colored parts only, not of the whole test image.
With the normalized test image, image matching is accomplished simply by loading the corresponding eigenimage file from the database and computing the correlation of the test image with the loaded eigenimage. The results of image matching are illustrated in Fig. 13; the number inside each window indicates the rank of the correlation value.

Fig. 13. Selected Test Images with Correlation Ranking Information.
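A minimal sketch of the normalization and correlation step, under the same assumptions as the previous Python snippets (NumPy, and a precomputed template database keyed by window width):

import numpy as np

def match_score(test_img, skin_mask, database):
    """Correlate a square grayscale test image against the stored eigenimage template.

    The image is masked to its skin-colored area and divided by the 2-norm-based
    average brightness of that area, so brightness does not influence the score."""
    width = test_img.shape[0]
    if width not in database:          # no template stored for this window size
        return 0.0
    masked = test_img * skin_mask
    avg_brightness = np.linalg.norm(masked) / np.sqrt(max(skin_mask.sum(), 1))
    normalized = masked / avg_brightness
    return float(np.sum(normalized * database[width]))   # correlation with the template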

2.4 Distance Compensation


Since the image to be tested is a group picture, the faces are located close to each other in the central area of the picture, whereas hands, arms, or legs tend to lie relatively far from the faces. Therefore, the mean square distance of each test image with respect to the other test images can be calculated, and its reciprocal multiplied into the correlation value obtained above, so that this spatial information is taken into account. In other words, a test image that is close to the other test images gets a larger correlation value, while a test image that is far from the rest of the group gets a smaller one. The ranking of the correlation values of the test images after this step is shown in Fig. 14.

Fig. 14. Correlation Ranking after Geographical Consideration.
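The distance compensation can be sketched as below (NumPy); it simply divides each correlation value by the root-mean-square distance from that box to all the others, as described above.

import numpy as np

def compensate_by_distance(correlations, centers):
    """correlations: (N,) matching scores; centers: (N, 2) box centers (row, col)."""
    centers = np.asarray(centers, dtype=float)
    diff = centers[:, None, :] - centers[None, :, :]          # pairwise offsets
    sq_dist = (diff ** 2).sum(axis=-1)                        # squared distances
    n = len(centers)
    rms = np.sqrt(sq_dist.sum(axis=1) / max(n - 1, 1))        # RMS distance to the others
    return np.asarray(correlations) / rms                     # closer boxes score higher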

2.4.1 Filtering Non-facial Test Images using Statistical Information


The next step is to filter out non-facial test images. Several approaches were tried, but it was not easy to find an absolute threshold that could be applied to pictures with different lighting conditions and composition. Approaches based on luminance, average brightness, and so on were attempted, but they turned out not to be good enough to set an appropriate threshold for filtering out non-facial test images. Finally, a statistical method was used. As seen in Fig. 15, the histogram of the correlation values after the distance compensation shows a wide spread of output values; the leftmost column corresponds to the test images with the smallest correlation values in the set.
After filtering out the elements of the leftmost column, which amounts to 12 test images for this example picture, Fig. 16 is obtained. Out of 21 faces in the picture, the algorithm detected 19 within an acceptable localization error; the two undetected faces are partially blocked by other faces. In conclusion, this statistical approach, which may look like a rough estimate, works well on this picture and also turned out to produce good results on other pictures.

Fig. 15. Histogram: Image Matching.

Fig. 16. Face detection result after filtering out the lowest-correlation test images.
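A sketch of this histogram-based filtering is shown below; the "drop the lowest bin" rule follows the description above, the 10-bin histogram is an assumed default, and the sanity check that the lowest bin does not hold most of the boxes mirrors the source listing in Section 3.4.

import numpy as np

def drop_low_correlation_boxes(boxes, scores, bins=10):
    """Remove the boxes whose compensated correlation falls in the lowest histogram bin."""
    scores = np.asarray(scores)
    counts, edges = np.histogram(scores, bins=bins)
    if counts[0] >= 0.5 * len(boxes):      # lowest bin holds most boxes: do not filter
        return boxes, scores
    keep = scores >= edges[1]              # everything above the first bin survives
    return [b for b, k in zip(boxes, keep) if k], scores[keep]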


Results
The test was performed using the 7 training images; the results are summarized in Table 1.

Table 1. Face Detection Results using 7 Training Images.


Image            numFaces  numHit  numRepeat  numFalse  run time [sec]
Training_1.jpg      21        19       0          0          111
Training_2.jpg      24        24       0          1          101
Training_3.jpg      25        23       0          1           89
Training_4.jpg      24        21       0          1           84
Training_5.jpg      24        22       0          0           93
Training_6.jpg      24        22       0          3          100
Training_7.jpg      22        22       0          1           95

numFaces:  total number of faces in the picture
numHit:    number of faces successfully detected
numRepeat: number of faces detected repeatedly
numFalse:  number of cases where a non-face is reported
run time:  run time of the face detection routine

* Run time was measured on a Pentium III 700 MHz machine with 448 MB of memory; the run time includes the gender detection algorithm.

The face detection algorithm achieves a hit rate of 93.3%, a repeat rate of 0%, and a false hit rate of 4.2%, with an average run time of 96 seconds.
To check whether the algorithm also works on images other than the 7 training images, last year's sample picture was tested; the result is shown in Fig. 17. Of the 24 faces, 20 were successfully located, with no repeated or false detections.

Fig. 17. Face Detection of Last Year’s Picture.



Chapter Three
3.1 Gender Recognition
A gender recognition algorithm was implemented to detect at most 3 females in a test photo. Since only 3 females need to be detected, the average face of each of the three females was computed, as shown in Fig. 18, and image matching was performed against each of these average faces.

Fig. 18. Average Faces of the Females in the Class.

The test images obtained from the test image selection algorithm explained in the previous section were matched against these three female average faces. The average faces were stored in the database using the same method as was used for saving the eigenfaces.
After running the algorithm on the 7 training image sets, the result was that the average-face matching method did not detect any female faces; it simply selected the face with the largest correlation value from the general face detection. However, the inaccuracy of the average-face matching was expected: for this algorithm to be selective, a test image would have to be cropped so precisely that it exactly overlaps the eigenface in both center location and box size. In practice, the box defining the contour of a face was bigger or smaller than it should have been, and the centers hardly ever coincided.
A more sophisticated algorithm will be required to accomplish gender recognition or, beyond that, the recognition of a particular person.

3.2 Roberts Cross Edge Detection Algorithm

In theory, the operator consists of a pair of 2×2 convolution masks as shown in Figure 1. One mask
is simply the other rotated by 90°.

Fig 1 Roberts Cross convolution masks

These masks are designed to respond maximally to edges running at 45° to the pixel grid, one
mask for each of the two perpendicular orientations. The masks can be applied separately to the
input image, to produce separate measurements of the gradient component in each orientation (call
these Gx and Gy). These can then be combined together to find the absolute magnitude of the
gradient at each point and the orientation of that gradient. The gradient magnitude is given by:
|G| = (Gx^2 + Gy^2)^(1/2)

although typically, an approximate magnitude is computed using:

|G| = |Gx| + |Gy|
which is much faster to compute.
The angle of orientation of the edge giving rise to the spatial gradient (relative to the pixel-grid orientation) is given by:
θ = arctan(Gy/Gx) - 3π/4

In this case, orientation 0 is taken to mean that the direction of maximum contrast from black to
white runs from left to right on the image, and other angles are measured anti-clockwise from this.
Often, the absolute magnitude is the only output the user sees. The two components of the
gradient are conveniently computed and added in a single pass over the input image using the
pseudo-convolution operator shown in Figure 2.

Fig 2 Pseudo-Convolution masks used to quickly compute approximate gradient magnitude



Using this mask the approximate magnitude is given by:

|G| = |P1 - P4| + |P2 - P3|
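As an illustration of the operator, a direct NumPy implementation is sketched below; the two 2x2 kernels are the standard Roberts cross masks (assumed here, since Figure 1 is not reproduced), and the output follows the exact and approximate magnitude formulas above.

import numpy as np
from scipy import ndimage

# Standard Roberts cross kernels: each responds maximally to one diagonal edge direction.
ROBERTS_GX = np.array([[1.0, 0.0], [0.0, -1.0]])
ROBERTS_GY = np.array([[0.0, 1.0], [-1.0, 0.0]])

def roberts_cross(gray, approximate=False):
    """Return the gradient magnitude and angle of a grayscale image."""
    gx = ndimage.convolve(gray.astype(float), ROBERTS_GX)
    gy = ndimage.convolve(gray.astype(float), ROBERTS_GY)
    if approximate:
        magnitude = np.abs(gx) + np.abs(gy)          # |G| = |Gx| + |Gy|
    else:
        magnitude = np.hypot(gx, gy)                 # |G| = sqrt(Gx^2 + Gy^2)
    angle = np.arctan2(gy, gx) - 3 * np.pi / 4       # orientation relative to the pixel grid
    return magnitude, angle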

3.3 Challenges in face detection


Challenges in face detection are the factors that reduce the accuracy and detection rate of a face detector. They include complex backgrounds, too many faces in an image, odd expressions, illumination, low resolution, face occlusion, skin color, distance, and orientation (Figure 3).
• Odd expressions: a human face in an image may have an unusual expression, which is a challenge for face detection.
• Face occlusion: part of the face may be hidden by an object such as glasses, a scarf, a hand, hair, or a hat; this also reduces the face detection rate.
• Illumination: lighting may not be uniform across the image; some parts may have very high illumination and others very low illumination.
• Complex background: many objects present in the image reduce the accuracy and speed of face detection.
• Too many faces in the image: an image containing a large number of human faces is harder to process.
• Low resolution: the resolution of the image may be very poor, which also makes detection difficult.
• Skin color: skin color changes with geographical origin; for example, Chinese, African, and American faces differ in skin tone, and this variation is also challenging for face detection.

3.4 Source code


function outFaces = faceDetection(img)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Function 'outFaces' returns the matrix with the information of
% face locations and gender.
%
% outFaces = faceDetection(img)
% img: double formatted image matrix

% coefficients
effect_num=3;
min_face=170;
small_area=15;
imgSize = size(img);

uint8Img = uint8(img);
gray_img=rgb2gray(uint8Img);

% get the image transformed through the YCbCr filter
filtered=ee368YCbCrbin(img,161.9964,-11.1051,22.9265,25.9997,4.3568,3.9479,2);

% black isolated holes rejection
filtered=bwfill(filtered,'holes');

% white isolated regions smaller than small_area are rejected
filtered=bwareaopen(filtered,small_area*10);

% first erosion
filtered = imerode(filtered,ones(2*effect_num));

% edge detection with the Roberts method with sensitivity 0.1
edge_img=edge(gray_img,'roberts',0.1);

% final binary edge image
edge_img=~edge_img;

% integration of the two images: edge + filtered image
filtered=255*(double(filtered) & double(edge_img)); % double

% second erosion
filtered=imerode(filtered,ones(effect_num));

% black isolated holes rejection
filtered=bwfill(filtered,'holes');

% areas smaller than the minimum face area are rejected
filtered=bwareaopen(filtered,min_face);

% group labeling in the filtered image
[segments, num_segments] = bwlabel(filtered);
% Based on the binary image, squared windows are generated
boxInfo = [];
for i = 1:num_segments,
    [row col] = find(segments == i);
    [ctr, hWdth] = ee368boxInfo(row, col);
    boxInfo = [boxInfo; ctr hWdth];
end

% Overlapping squares are merged
boxInfo = ee368boxMerge(boxInfo, num_segments, imgSize(1));
num_box = length(boxInfo);

% mean squared distance of each box with respect to the others is calculated
boxDist = [];
for i = 1:num_box,
    boxDist = [boxDist; sqrt((sum((boxInfo(:,1) - boxInfo(i,1)).^2) + ...
        sum((boxInfo(:,2) - boxInfo(i,2)).^2))/(num_box-1))];
end

% conversion to 'double' format and grayscale
gdOrgImg = double(rgb2gray(uint8Img));
filtered = double(filtered);

outFaces = [];

for k=1:num_box,
    ctr = boxInfo(k, 1:2);
    hWdth = boxInfo(k,3);

    % Based on the box information, images are cut into squares
    testImg = ee368imgCut(gdOrgImg, filtered, imgSize, boxInfo(k,:));

    % normalized with an average brightness
    avgBri = sqrt(sum(sum(testImg.^2))/bwarea(testImg));
    testImg = testImg/avgBri;

    % test images are compared to the eigenimages and the female average images
    corr = ee368imgMatch(testImg, hWdth);
    fCorr = ee368imgMatchFe(testImg, hWdth);
    outFaces = [outFaces; ctr 1 corr/boxDist(k), hWdth, fCorr];
end

% sorting of the correlation values
[Y I] = sort(outFaces(:,4));
outFaces = outFaces(I, :);

% elimination of small correlation values using the histogram
B = hist(Y);
if B(1) < 0.5*num_box,
    outFaces = outFaces(B(1)+1:end, :);
end

% results of the correlation with respect to the women's faces
[Fe1, ordFe1] = max(outFaces(:, 6));
[Fe2, ordFe2] = max(outFaces(:, 7));
[Fe3, ordFe3] = max(outFaces(:, 8));

outFaces = outFaces(:, 1:3);
outFaces([ordFe1, ordFe2, ordFe3],3) = 2;

ee368YCbCrseg.m

function ee368YCbCrseg
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%   a function for color component analysis %%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% training image size


size_x=100;
size_y=100;

folder=['a' 'b' 'c' 'd' 'e' 'f' 'g'];


image_q=[13 17 20 14 11 19 12];

folder_num=size(image_q);

n=0;

% make a YCbCr color matrix set
for i = 1:folder_num(2)
    for k = 1:image_q(i)
        if k < 10
            face = imread(sprintf('testImages/traincolor_%d/%c0%d.jpg',i,folder(i),k));
        else
            face = imread(sprintf('testImages/traincolor_%d/%c%d.jpg',i,folder(i),k));
        end
        n = n+1;
        RGBface = double(face);
        YCbCrface = rgb2ycbcr(RGBface);
        YCbCrfaces(:,:,:,n) = YCbCrface;
    end
end

% discrimination of each component from the color matrix set


[m_YCbCr, n_YCbCr, p_YCbCr, num_YCbCr] = size(YCbCrfaces);
Y = reshape(YCbCrfaces(:,:,1,:),m_YCbCr*n_YCbCr*num_YCbCr,1);
Cb = reshape(YCbCrfaces(:,:,2,:),m_YCbCr*n_YCbCr*num_YCbCr,1);
Cr = reshape(YCbCrfaces(:,:,3,:),m_YCbCr*n_YCbCr*num_YCbCr,1);

% histogram of each component


subplot(131); hist(Y); title('histogram of Y');
subplot(132); hist(Cb); title('histogram of Cb');
subplot(133); hist(Cr); title('histogram of Cr');

clear all;

ee368YCbCrbin.m
function result=ee368YCbCrbin(RGBimage,meanY,meanCb,meanCr,stdY,stdCb,stdCr,factor)
% ee368YCbCrbin returns a binary image with the skin-colored area white.
%
% Example:
% result=ee368YCbCrbin(RGBimage,meanY,meanCb,meanCr,stdY,stdCb,stdCr,factor)
% RGBimage: double formatted RGB image
% meanY: mean value of Y of skin color
% meanCb: mean value of Cb of skin color
% meanCr: mean value of Cr of skin color
% stdY: standard deviation of Y of skin color
% stdCb: standard deviation of Cb of skin color
% stdCr: standard deviation of Cr of skin color
% factor: factor determining the width of the Gaussian envelope
%
% All the parameters are based on the training facial segments taken from the 7 training images

YCbCrimage=rgb2ycbcr(RGBimage);

% set the range of Y, Cb, Cr
min_Cb = meanCb - stdCb*factor;
max_Cb = meanCb + stdCb*factor;
min_Cr = meanCr - stdCr*factor;
max_Cr = meanCr + stdCr*factor;
% min_Y = meanY - stdY*factor*2;

% get a desirable binary image with the acquired range


imag_row=size(YCbCrimage,1);
imag_col=size(YCbCrimage,2);

binImage=zeros(imag_row,imag_col);

Cb=zeros(imag_row,imag_col);
Cr=zeros(imag_row,imag_col);

Cb(find((YCbCrimage(:,:,2) > min_Cb) & (YCbCrimage(:,:,2) < max_Cb)))=1;


Cr(find((YCbCrimage(:,:,3) > min_Cr) & (YCbCrimage(:,:,3) < max_Cr)))=1;
binImage=255*(Cb.*Cr);

result=binImage;

ee368boxInfo

function [ctr, hWdth] = ee368boxInfo(row, col)


% Given the row and column information of the white area of a binary image,
% this function returns the value of center and width of the squared window.
%
% Example
% [ctr, hWdth] = ee368boxInfo(row, col)
% row: row coordinates of the white pixels
% col: column coordinates of the white pixels
% ctr: (row coordinate of the center, column coordinate of the center)
% hWdth: half of the size of the window

minR = min(row); maxR = max(row);


minC = min(col); maxC = max(col);

ctr = round([(minR + maxR)/2, (minC + maxC)/2]);

hStp = 5;

if (maxR-minR) > (maxC-minC),
    hWdth = round((maxR - minR)/2/hStp)*hStp;
else
    hWdth = round((maxC - minC)/2/hStp)*hStp;
end

ee368boxMerge

function boxInfo = ee368boxMerge(boxInfo, num_segments, nRow)
% Given the information of the squared windows, this function merges
% superposing squares. Additionally, this function also rejects too small
% or too large windows.
%
% Example:
% boxInfo = ee368boxMerge(boxInfo, num_segments, nRow)
% boxInfo: center coordinates and half widths of the boxes
% num_segments: number of segments or boxes
% nRow: total number of rows of the given image
% boxInfo: newly generated boxInfo

rGapTh = 70;
cGapTh = 25;
hStp = 5;
rThr = 200;
adjBoxCor = [];

for i = 1: num_segments-1,
    rGap = boxInfo(i+1:end, 1) - boxInfo(i, 1);
    cGap = boxInfo(i+1:end, 2) - boxInfo(i, 2);
    rCandi = find((abs(rGap) < rGapTh) & (abs(cGap) < cGapTh));
    adjBoxCor = [adjBoxCor; i*ones(length(rCandi),1) i+rCandi];
end

numAdj = size(adjBoxCor,1);

for j = 1: numAdj,
    fstPnt = adjBoxCor(j,1);
    fstCtrR = boxInfo(fstPnt, 1);
    fstCtrC = boxInfo(fstPnt, 2);
    fstHwd = boxInfo(fstPnt, 3);

    sndPnt = adjBoxCor(j,2);
    sndCtrR = boxInfo(sndPnt, 1);
    sndCtrC = boxInfo(sndPnt, 2);
    sndHwd = boxInfo(sndPnt, 3);

    if fstCtrR-fstHwd < sndCtrR-sndHwd,
        rTop = fstCtrR-fstHwd;
    else
        rTop = sndCtrR-sndHwd;
    end

    if fstCtrR+fstHwd > sndCtrR+sndHwd,
        rBot = fstCtrR+fstHwd;
    else
        rBot = sndCtrR+sndHwd;
    end

    if fstCtrC-fstHwd < sndCtrC-sndHwd,
        cLeft = fstCtrC-fstHwd;
    else
        cLeft = sndCtrC-sndHwd;
    end

    if fstCtrC+fstHwd > sndCtrC+sndHwd,
        cRight = fstCtrC+fstHwd;
    else
        cRight = sndCtrC+sndHwd;
    end

    ctr = round([(rTop+rBot)/2, (cLeft+cRight)/2]);

    if rBot-rTop > cRight-cLeft,
        hWdth = round((rBot-rTop)/2/hStp)*hStp;
    else
        hWdth = round((cRight-cLeft)/2/hStp)*hStp;
    end

    boxInfo(sndPnt, :) = [ctr, hWdth];
    boxInfo(fstPnt, :) = [ctr, hWdth]; % added line
end

adjBoxCor2 = adjBoxCor;

for i = 2: size(adjBoxCor, 1),
    if adjBoxCor(i,1) == adjBoxCor(i-1,1),
        adjBoxCor2(i,1) = adjBoxCor(i-1, 2);
    end
end

boxInfo(adjBoxCor2(:,1), :) = [];
boxInfo(find(boxInfo(:,1) > nRow-rThr), :) = [];

ee368imgCut

function testImg = ee368imgCut(img, filtered, imgSize, boxInfo);
% This function returns a squared test image with a standardized image size.
% The output image is masked with a binary image so that non-facial area
% is blacked out. In addition, this program also compensates for the image
% which is adjacent to the edge of the picture to make it square formed by
% means of filling out zeros.
%
% Example
% testImg = ee368imgCut(img, filtered, imgSize, boxInfo)
% img: double formatted test image
% filtered: binary image which contains mask information
% imgSize: the size of img
% boxInfo: information of center point and width of a box
% testImg: square shaped segment to be tested in image matching algorithm

nRow = imgSize(1);
nCol = imgSize(2);
ctr = boxInfo(1:2);
hWdth = boxInfo(3);

testImg = img(abss(ctr(1)-hWdth): abss2(ctr(1)+hWdth-1, nRow), ...
    abss(ctr(2)-hWdth): abss2(ctr(2)+hWdth-1, nCol));
maskImg = filtered(abss(ctr(1)-hWdth): abss2(ctr(1)+hWdth-1, nRow), ...
    abss(ctr(2)-hWdth): abss2(ctr(2)+hWdth-1, nCol));
testImg = testImg.*maskImg;

[nRowOut, nColOut] = size(testImg);

if ctr(1)-hWdth-1 < 0, % image is sticking out to the top
    testImg = [zeros(-ctr(1)+hWdth+1, nColOut); testImg];
elseif ctr(1)+hWdth-1 > nRow, % image is sticking out to the bottom
    testImg = [testImg; zeros(ctr(1)+hWdth-nRow-1, nColOut)];
end

if ctr(2)-hWdth-1 < 0, % image is sticking out to the left
    testImg = [zeros(2*hWdth, -ctr(2)+hWdth+1), testImg];
elseif ctr(2)+hWdth-1 > nCol, % image is sticking out to the right
    testImg = [testImg zeros(2*hWdth, ctr(2)+hWdth-nCol-1)];
end

ee368imgMatch

function corr = ee368imgMatch(testImg, hWdth)
% This function correlates a given test image with
% a reference image which has a tailored image size
% and is called from the database.
%
% Example
% corr = ee368imgMatch(testImg, hWdth)
% testImg: square shaped test image
% hWdth: half of the width of the testImg
% corr: correlation value

lowThr = 30;
higThr = 220;
wdth = 2*hWdth;

if wdth < lowThr,
    corr = 0;
elseif wdth > higThr,
    corr = 0;
else
    eval(['load eigFace', num2str(wdth)]);
    corr = reshape(testImg, wdth^2, 1)'*eigFace;
end

ee368imgMatchFe

function fCorr = ee368imgMatchFe(testImg, hWdth)
% Image matching for the female average images

lowThr = 40;
higThr = 220;
wdth = 2*hWdth;

if wdth < lowThr,
    fCorr = [0 0 0];
elseif wdth > higThr,
    fCorr = [0 0 0];
else
    eval(['load fImg1_', num2str(wdth)]);
    eval(['load fImg2_', num2str(wdth)]);
    eval(['load fImg3_', num2str(wdth)]);
    fImg = [fImg1 fImg2 fImg3];
    fCorr = reshape(testImg, 1, wdth^2)*fImg;
end

abss.m
function val=abss(inp);
% Function 'abss' prevents the coordinate of the test image
% from exceeding the lower boundary of the original image

if(inp>1)
    val=inp;
else
    val=1;
end

abss2.m
function val=abss2(inp, thr);
% Function 'abss2' prevents the coordinate of the test image
% from exceeding the upper boundary of the original image

if(inp>thr)
    val=thr;
else
    val=inp;
end

Chapter Four
4.1 TRAINING IN OPENCV
In OpenCV, training refers to providing a recognizer algorithm with training data to learn from. The trainer uses the LBPH algorithm to convert the image cells into histograms; by concatenating the histograms of all cells, a feature vector is obtained for each image. Training images are processed in this way with an ID attached. An input image is processed with the same procedure and compared with the dataset, yielding a distance; by setting a threshold, the face can be classified as known or unknown. Eigenface and Fisherface compute the dominant features of the whole training set, while LBPH analyses each image individually. To train, a dataset is created first; you can either build your own dataset or start with one of the available face databases, such as the Yale Face Database or the AT&T Face Database. An .xml or .yml configuration file is then built from the features extracted from the dataset with the help of the FaceRecognizer class and stored in the form of feature vectors.
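For illustration, the cell-histogram idea behind LBPH can be sketched with scikit-image as below; the 8-neighbour radius-1 pattern, the 8x8 cell grid, and the use of scikit-image itself are assumptions for this sketch, not details given in the text.

import numpy as np
from skimage.feature import local_binary_pattern

def lbph_feature(gray, grid=(8, 8), n_points=8, radius=1):
    """Concatenate per-cell LBP histograms into one feature vector for a face image."""
    lbp = local_binary_pattern(gray, n_points, radius, method="uniform")
    n_bins = n_points + 2                         # number of uniform patterns
    rows = np.array_split(lbp, grid[0], axis=0)
    hists = []
    for row in rows:
        for cell in np.array_split(row, grid[1], axis=1):
            h, _ = np.histogram(cell, bins=n_bins, range=(0, n_bins))
            hists.append(h / max(h.sum(), 1))     # normalize each cell histogram
    return np.concatenate(hists)                  # the image's LBPH feature vector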
4.2 TRAINING THE CLASSIFIERS
OpenCV enables the creation of XML files to store the features extracted from a dataset using the FaceRecognizer class. The stored images are imported, converted to grayscale, and saved with IDs in two lists with matching indexes. FaceRecognizer objects are created using the FaceRecognizer class; each recognizer can take the parameters described below.

cv2.face.createEigenFaceRecognizer()
1. Takes the number of components of the PCA used for creating eigenfaces. The OpenCV documentation mentions that 80 components can provide satisfactory reconstruction capabilities.
2. Takes the threshold for recognizing faces. If the distance to the likeliest eigenface is above this threshold, the function returns -1, which can be used to state that the face is unrecognizable.

cv2.face.createFisherFaceRecognizer()
1. The first argument is the number of components of the LDA used for the creation of Fisherfaces; OpenCV suggests keeping it at 0 if uncertain.
2. A threshold similar to the Eigenface threshold, with -1 returned if the threshold is exceeded.

cv2.face.createLBPHFaceRecognizer()
1. The radius around the centre pixel used to build the local binary pattern.
2. The number of sample points used to build the pattern; a large number slows down the computation.
3. The number of cells to be created along the X axis.
4. The number of cells to be created along the Y axis.
5. A threshold value similar to those of Eigenface and Fisherface, applied in the same way when it is exceeded.

Once the recognizer objects are created, the images are imported, resized, converted into NumPy arrays, and stored in a vector; the ID of each image, obtained by splitting the file name, is stored in another vector. All three objects are trained by calling FaceRecognizer.train(NumpyImage, ID). Note that resizing the images is required only for Eigenface and Fisherface, not for LBPH. The trained configuration model is saved as XML using FaceRecognizer.save(FileName).

4.3 .train() FUNCTION

The .train() function trains a FaceRecognizer with the given data and associated labels. Parameters: src, the training images, that is, the faces you want to learn, which have to be given as a vector; and labels, the labels corresponding to the images, which also have to be given as a vector.
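A minimal end-to-end sketch of this training flow with the opencv-contrib Python bindings is shown below. It uses the current factory name LBPHFaceRecognizer_create (older releases expose createLBPHFaceRecognizer instead, as in the text); the parameter values and file name are assumptions for illustration.

import cv2
import numpy as np

def train_lbph(face_images, labels, model_path="lbph_model.yml"):
    """face_images: list of grayscale face crops; labels: matching list of integer IDs."""
    recognizer = cv2.face.LBPHFaceRecognizer_create(radius=1, neighbors=8,
                                                    grid_x=8, grid_y=8)
    recognizer.train(face_images, np.array(labels))   # build per-image LBPH histograms
    recognizer.write(model_path)                      # save the trained model to disk
    return recognizer

def identify(recognizer, gray_face):
    label, distance = recognizer.predict(gray_face)   # smaller distance = closer match
    return label, distance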

4.4 CONCLUSION

This report has described the mini project for the visual perception and autonomy module. It explained the technologies used in the project and the procedure followed, and finally presented the results and discussed the challenges and how they were resolved. Using Haar cascades for face detection worked extremely well, even when subjects wore glasses. Real-time video speed was satisfactory as well, with no noticeable frame lag. Considering all these factors, LBPH combined with Haar cascades can be implemented as a cost-effective face recognition platform. One example is a system that identifies known troublemakers in a mall or a supermarket and warns the owner to stay alert, or one that takes attendance automatically in a class.

References
[1] I. Craw, D. Tock, and A. Bennett, “Finding face features,” Proc. of 2nd European Conf.
Computer Vision. pp. 92-96, 1992.
[2] A. Lanitis, C. J. Taylor, and T. F. Cootes, “An automatic face identification system using
flexible appearance models,” Image and Vision Computing, vol.13, no.5, pp.393-401, 1995.
[3] T. K. Leung, M. C. Burl, and P. Perona, “Finding faces in cluttered scenes using random
labeled graph matching,” Proc. 5th IEEE int’l Conf. Computer Vision, pp. 637-644, 1995.
[4] B. Moghaddam and A. Pentland, “Probabilistic visual learning for object recognition,” IEEE
Trans. Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 696-710, July 1997.
[5] M. Turk and A. Pentland, “Eigenfaces for recognition,” J. of Cognitive Neuroscience, vol.3,
no. 1, pp. 71-86, 1991.
[6] M. Kirby and L. Sirovich, “Application of the Karhunen-Loeve procedure for the
characterization of human faces,” IEEE Trans. Pattern Analysis and Machine Intelligence,
vol.12, no.1, pp. 103-108, Jan. 1990.
[7] I. T. Jolliffe, Principal component analysis, New York: Springer-Verlag, 1986.
[8] T. Agui, Y. Kokubo, H. Nagashi, and T. Nagao, “Extraction of face recognition from
monochromatic photographs using neural networks,” Proc. 2nd Int’l Conf. Automation,
Robotics, and Computer Vision, vol.1, pp. 18.81-18.8.5, 1992.
[9] O. Bernier, M. Collobert, R. Feraud, V. Lemaried, J. E. Viallet, and D. Collobert,
“MULTRAK: A system for automatic multiperson localization and tracking in real-time,”
Proc. IEEE Int’l Conf. Image Processing, pp. 136-140, 1998.
[10] A. J. Colmenarez and T. S. Huang, “Face detection with information-based maximum
discrimination,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 782-787,
1997.
[11] M. S. Lew, “Information theoretic view-based and modular face detection,” Proc. 2nd Int’l
Conf. Automatic Face and Gesture Recognition, pp. 198-203, 1996.
[12] H. Martin Hunke, Locating and tracking of human faces with neural network, Master’s
thesis, University of Karlsruhe, 1994.
[13] Henry A. Rowley, Shumeet Baluja, and Takeo Kanade. “Neural network based face
detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(1), pp. 23-
38, 1998.
