Boosted Convolutional Neural Network For Real Time Facial Expression Recognition
Abstract—Facial expression recognition systems have attracted research interest in the field of artificial intelligence. Facial expression is an important channel for human communication and can be applied in many real-life applications. For this project we developed a facial expression recognition system implemented with a convolutional neural network (CNN). The CNN model combines different activation functions to improve performance. The Kaggle facial expression dataset with seven facial expression labels (happy, sad, surprise, fear, anger, disgust, and neutral) is used in this project. The best combination of activation functions achieved 95.23% accuracy on the training set and 52% on the testing set.

Index Terms—Facial Expression Recognition, Machine Learning, Deep Learning, Convolutional Neural Network, Boosted Convolutional Neural Network, Computer Vision.

I. INTRODUCTION

Our aim is to train weak classifiers so that they combine into a stronger classifier that achieves the highest possible performance. With the power of today's computers and current breakthroughs in technology, there are now various methods and algorithms that enable a computer or machine to perform tasks such as face detection with emotion features. The objective of this paper is to develop real-time facial expression recognition using a boosted convolutional neural network (BCNN), in which weak classifiers are combined to form a strong classifier.

In this paper, a facial expression recognition system is implemented using a boosted convolutional neural network. Facial images are classified into seven facial expression labels, namely angry, fear, disgust, happy, sad, surprise, and neutral. The Kaggle FER2013 dataset is used to train and test the classifier.
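The boosting idea is only outlined above; the paper's exact boosting procedure is not reproduced here. As a minimal, hypothetical sketch, assuming an AdaBoost-style weighted vote over the class-probability outputs of several weak CNN classifiers (the weights and the weak learners themselves are placeholders), the combination into a strong classifier can look like this:

    import numpy as np

    def boosted_predict(probas, alphas):
        """Combine weak classifiers' class-probability outputs by a weighted vote.

        probas: list of arrays of shape (n_samples, n_classes), one per weak classifier
        alphas: list of non-negative weights, one per weak classifier
        Returns the predicted class index for each sample.
        """
        # Weighted sum of the weak classifiers' probability outputs.
        combined = sum(a * p for a, p in zip(alphas, probas))
        # The strong classifier picks the class with the largest combined score.
        return np.argmax(combined, axis=1)

    # Toy usage: three weak classifiers, two samples, seven expression classes.
    rng = np.random.default_rng(0)
    weak_outputs = [rng.dirichlet(np.ones(7), size=2) for _ in range(3)]
    weights = [0.5, 0.3, 0.2]   # e.g. derived from each weak learner's error
    print(boosted_predict(weak_outputs, weights))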
A convolutional neural network includes six components: convolutional layers, sub-sampling layers, rectified linear units (ReLU), fully connected layers, an output layer, and a softmax layer [10].

1) Convolutional layer: A convolutional layer is determined by the number of feature maps it generates and by its kernel size. The kernel is moved over the valid area of the given image (performing a convolution) to generate each map, and the output of the layer is computed from this convolution (a standard form is sketched after these layer descriptions).

5) Output layer: The output layer represents the class of the input image, and its size equals the number of classes. The output vector x yields the resulting class as the index of its largest component.

6) Softmax layer: The error of the network is propagated back through a softmax layer. If N is the size of the input vector, softmax computes a mapping S(x) : R^N → [0, 1]^N, and each component of the softmax output is calculated as

S(x)_j = exp(x_j) / Σ_{k=1}^{N} exp(x_k), where 1 ≤ j ≤ N.
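The convolutional-layer and output-layer equations referred to above are given here in their common textbook form; this is an assumed formulation, not necessarily the notation of the original paper:

% Assumed standard forms, not necessarily the paper's exact notation.
\begin{align}
  y_j &= f\!\Big(\sum_{i} x_i * k_{ij} + b_j\Big), & 1 \le j \le M,\\
  \hat{c} &= \arg\max_{1 \le j \le N} S(x)_j,
\end{align}

where * is the valid 2-D convolution of input map x_i with kernel k_{ij}, b_j is a bias, f is the activation function, M is the number of generated maps, and S(x) is the softmax output defined above.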
III. METHODOLOGY

The main aim of this paper is to implement an efficient method for detecting a person's face and emotion in real time and to improve the recognition performance.
IV. ANALYSIS

We built and trained our model as a plain convolutional neural network (CNN). This network had four convolutional layers and one fully connected (FC) layer. In the first convolutional layer, we used 32 3x3 filters with a stride of 1, along with batch normalization, dropout, and 2x2 max pooling, and we used the Rectified Linear Unit (ReLU) as the activation function.
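As a minimal sketch of the network described above, assuming Keras, FER2013's 48x48 grayscale inputs, and hypothetical filter counts for the three remaining convolutional layers (which the text does not specify), the architecture could be written as:

    from tensorflow.keras import layers, models

    def build_cnn(num_classes=7, dropout_rate=0.25):
        """Plain CNN: four convolutional blocks followed by one fully connected layer."""
        model = models.Sequential()
        # First block as described: 32 3x3 filters, stride 1, batch normalization,
        # dropout, 2x2 max pooling, ReLU activation.
        model.add(layers.Conv2D(32, (3, 3), strides=1, padding="same",
                                activation="relu", input_shape=(48, 48, 1)))
        model.add(layers.BatchNormalization())
        model.add(layers.MaxPooling2D((2, 2)))
        model.add(layers.Dropout(dropout_rate))
        # Remaining three convolutional blocks; these filter counts are assumptions.
        for filters in (64, 128, 256):
            model.add(layers.Conv2D(filters, (3, 3), strides=1, padding="same",
                                    activation="relu"))
            model.add(layers.BatchNormalization())
            model.add(layers.MaxPooling2D((2, 2)))
            model.add(layers.Dropout(dropout_rate))
        # Single fully connected output layer with softmax over the 7 expressions.
        model.add(layers.Flatten())
        model.add(layers.Dense(num_classes, activation="softmax"))
        model.compile(optimizer="adam", loss="categorical_crossentropy",
                      metrics=["accuracy"])
        return model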
V. RESULTS

To compare the performance of the models with different combinations of activation functions, we plotted the loss history and the obtained accuracy for each model. Figures 2 and 3 exhibit the results. As seen in Figure 3, the combination of Rectified Linear Unit (ReLU) and SoftMax enabled us to increase the validation accuracy by 31.50% compared to the other combinations.

Fig. 3. Confusion matrix with the combination of ReLU and Tanh.

Fig. 4. Loss and accuracy of training and validation of the model with the combination of ReLU and SoftMax.

Fig. 5. Loss and accuracy of training and validation of the model with the combination of ReLU and Tanh.
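The loss and accuracy curves in Figures 4 and 5 can be produced from the training history; a minimal sketch, assuming a Keras History object named history as returned by model.fit, is:

    import matplotlib.pyplot as plt

    def plot_history(history, title):
        """Plot training/validation loss and accuracy from a Keras History object."""
        fig, (ax_loss, ax_acc) = plt.subplots(1, 2, figsize=(10, 4))
        ax_loss.plot(history.history["loss"], label="training loss")
        ax_loss.plot(history.history["val_loss"], label="validation loss")
        ax_loss.set_xlabel("epoch")
        ax_loss.set_ylabel("loss")
        ax_loss.legend()
        ax_acc.plot(history.history["accuracy"], label="training accuracy")
        ax_acc.plot(history.history["val_accuracy"], label="validation accuracy")
        ax_acc.set_xlabel("epoch")
        ax_acc.set_ylabel("accuracy")
        ax_acc.legend()
        fig.suptitle(title)
        plt.show()

    # Example: plot_history(history, "ReLU + SoftMax")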
VI. CONCLUSION

We developed various CNN models with different combinations of activation functions for the facial expression recognition problem and evaluated their performance using different post-processing and visualization techniques. The results demonstrated that the combination of the Rectified Linear Unit (ReLU) and SoftMax activation functions with a CNN model is capable of learning facial characteristics and improving facial expression detection.
REFERENCES

[1] Z. Liu, H. Wang, Y. Yan, and G. Guo. Effective Facial Expression Recognition via the Boosted Convolutional Neural Network. Springer-Verlag, Berlin Heidelberg, 2015.
[2] A. Raghuvanshi and V. Choksi. Facial Expression Recognition with Convolutional Neural Networks. Stanford University, 2016.
[3] D. Yang, T. Kunihiro, H. Shimoda, and H. Yoshikawa. A Study of Real-time Image Processing Method for Treating Human Emotion by Facial Expression. IEEE SMC'99 Conference Proceedings. 1999 IEEE International Conference on Systems, Man, and Cybernetics (Cat. No.99CH37028), Tokyo, Japan, 1999, pp. 360-364 vol. 2. doi: 10.1109/ICSMC.1999.825285.
[4] J. Zhu and Z. Chen. Real Time Face Detection System Using Adaboost and Haar-like Features. Shanghai, 2015, pp. 404-407. doi: 10.1109/ICISCE.2015.95.
[5] Y. Wang, H. Ai, B. Wu, and C. Huang. Real Time Facial Expression Recognition with Adaboost. Proceedings of the 17th International Conference on Pattern Recognition (ICPR 2004), Cambridge, 2004, pp. 926-929 vol. 3. doi: 10.1109/ICPR.2004.1334680.
[6] S. Alizadeh and A. Fazel. Convolutional Neural Networks for Facial Expression Recognition. Stanford University, 2017. arXiv:1704.06756v1.
[7] S. Mukherjee, S. Saha, S. Lahiri, A. Das, A. Kumar Bhunia, A. Konwer, and A. Chakraborty. Convolutional Neural Network based Face Detection. 2017 1st International Conference on Electronics, Materials Engineering and Nano-Technology (IEMENTech), Kolkata, 2017, pp. 1-5. doi: 10.1109/IEMENTECH.2017.8076987.