EMOTION DETECTION
Durgesh Kolhe¹, Omkar Mandavkar², Sameer Metkar³, Shubham More⁴, Prof. Amarja Adgaonkar⁵
¹⁻⁴ Student, ⁵ Assistant Professor
Department of Information Technology
K.C. College of Engineering and Management Studies and Research, Thane
Mumbai, India
I. INTRODUCTION
This is an era of change and innovation; we have seen many technologies, and applications of them, that a few decades ago people may not even have imagined. Amidst all this comes Data Science: the gathering of data and its processing into outcomes that benefit humankind. Today machines are being made intelligent and are used in many places to simplify work, but what separates humans from machines is emotion. Among the many techniques, algorithms, and applications in this field is emotion recognition. To carry this out, we decided to use a Convolutional Neural Network (CNN) as the recognition model. To detect a person's facial expressions and the emotions associated with them, we intend to train a CNN on a dataset, test its accuracy, and analyse the results. Facial expression recognition has attracted much attention in recent years due to its impact in clinical practice, sociable robotics, and education.

ISSN: 2581-4419 Volume 1 Issue 1
Facial emotion recognition
Facial emotion recognition (FER) typically has four steps. The first is to detect a face in an image and draw a rectangle around it; the next step is to detect landmarks within this face region. The third step is to extract spatial and temporal features from the facial components. The final step is to feed the extracted features to a classifier, which produces the recognition result. Figure 1.1 shows the FER procedure for an input image in which a face region and facial landmarks are detected. Facial landmarks are visually salient points such as the tip of the nose and the ends of the eyebrows and the mouth, as shown in Figure 1.2. The pairwise positions of two landmark points, or the local texture around a landmark, are used as features. Table 1.1 gives the definitions of 64 primary and secondary landmarks [8]. The spatial and temporal features are extracted from the face, and the expression is classified into one of the facial categories using pattern classifiers.
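As a minimal sketch of the feature-extraction step above, the pairwise distances between landmark points can be collected into a feature vector. The landmark coordinates below are made-up illustrative values; a real system would obtain them from a landmark detector.

```python
import math
from itertools import combinations

def pairwise_distance_features(landmarks):
    """Turn a list of (x, y) facial landmark points into a
    feature vector of all pairwise Euclidean distances."""
    return [math.dist(a, b) for a, b in combinations(landmarks, 2)]

# Illustrative coordinates (e.g. nose tip, mouth corners, brow ends)
landmarks = [(30, 40), (25, 55), (35, 55), (20, 25), (40, 25)]
features = pairwise_distance_features(landmarks)
# 5 landmarks yield 5 * 4 / 2 = 10 pairwise distances
```

Such distance features are translation-invariant, which is one reason pairwise positions are preferred over raw coordinates.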
According to diverse research, emotion plays an important role in education. Currently, teachers use exams, questionnaires, and observations as sources of feedback, but these classical methods often come with low efficiency. Using students' facial expressions, a teacher can adjust their strategy and instructional materials to help foster learning. We chose this topic because it has applications in many fields, ranging from preventive healthcare to cyber forensics. The algorithm and the topic in general are discussed further below.
III. SYSTEM DESIGN
One can notice that the centre block is repeated four times in the design. This architecture differs from the most common CNN architectures, which use fully connected layers at the end, where most of the parameters reside, and which use standard convolutions. Modern CNN architectures such as Xception leverage a combination of two of the most successful experimental ideas in CNNs: the use of residual modules and depth-wise separable convolutions.
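The saving from depth-wise separable convolutions can be illustrated with a quick parameter count. The kernel and channel sizes below are made-up examples, not the actual layer sizes of Xception.

```python
def standard_conv_params(k, c_in, c_out):
    """Parameters of a standard k x k convolution layer."""
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    """Depth-wise separable convolution: one k x k filter per
    input channel, followed by a 1 x 1 point-wise convolution."""
    return k * k * c_in + c_in * c_out

print(standard_conv_params(3, 64, 128))   # 73728
print(separable_conv_params(3, 64, 128))  # 8768
```

For this example the separable variant uses roughly 8x fewer parameters, which is why such blocks allow deep models to stay small.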
Key Capabilities:
● The main advantage of a CNN compared to its predecessors is that it automatically detects the important features without any human supervision. For example, given many pictures of cats and dogs, it can learn the key features of each class by itself.
Tested Output:
Figure 4.2
B. Software Requirements
Coding Language: Python 3.9.7
IDE: Visual Studio Code
VI. RESULT
In this chapter, the metrics used to evaluate model performance are defined. Then the best
parameter values for each model are determined from the training results. These values are
used to evaluate the accuracy and loss for CNN models 1 and 2. The results for these models
are then compared and discussed.
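For reference, the accuracy metric here is simply the fraction of test images whose predicted class matches the true label. The label lists below are made-up examples over the five emotion classes.

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

# Made-up example predictions over the five emotion classes
y_true = ["happy", "sad", "sad", "neutral"]
y_pred = ["happy", "sad", "neutral", "neutral"]
print(accuracy(y_true, y_pred))  # 0.75
```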
The categorical cross-entropy loss is used:

Loss = −Σ_{c=1}^{m} y_c · log(p_c)

where y is a binary indicator (0 or 1), p is the predicted probability, and m is the number of classes (happy, sad, neutral, fear, angry).
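A minimal sketch of this loss for a single sample, assuming a one-hot label vector over the five classes; the predicted probabilities below are made-up values.

```python
import math

def categorical_cross_entropy(y, p):
    """Loss = -sum over the m classes of y_c * log(p_c)."""
    return -sum(yc * math.log(pc) for yc, pc in zip(y, p))

# One-hot label for "happy"; made-up predicted probabilities
y = [1, 0, 0, 0, 0]
p = [0.7, 0.1, 0.1, 0.05, 0.05]
loss = categorical_cross_entropy(y, p)  # equals -log(0.7)
```

Because y is one-hot, only the log-probability assigned to the true class contributes to the loss.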
Figure 6.1
VII. CONCLUSION
To conclude, this project performs the basic task of detecting emotions when a face is given. It has been shown that the self-learning Convolutional Neural Network algorithm produces good results on naturalistic databases and is well suited to reducing data overfitting and data imbalance. It also finds application in areas such as healthcare, virtual reality, and robotics.
VIII. FUTURE SCOPE
This project has many further applications. The system could be extended to recognize deeper emotions and to recognize them even when a slightly shaky image is given. If properly maintained, upgraded, and linked with suitable hardware and other software, the project could be used in various situations: detecting whether a person is driving drunk, whether someone is having suicidal thoughts, or whether someone being taken somewhere is nervous and being taken by force. It could also be used in biometric security to detect whether someone is being forced to unlock their device or is looking scared; prompts and alerts could then be set up so that, if the user confirms or other verifying factors are found, the incident is reported to the authorities. This could likewise be used in cases of domestic violence.
REFERENCES
1) IEEE Xplore, https://ieeexplore.ieee.org/
2) ResearchGate, https://www.researchgate.net
3) D. Kollias and S. Zafeiriou, "Exploiting Multi-CNN Features in CNN-RNN Based Dimensional Emotion Recognition on the OMG in-the-wild Dataset".
4) D. Sokolov and M. Patkin, "Real-time Emotion Recognition on Mobile Devices".
5) H. Zhao, N. Ye, and R. Wang, "A Survey on Automatic Emotion Recognition Using Audio Big Data and Deep Learning Architectures".