"Mood Detection Using Artificial Intelligence And: Machine Learning
"Mood Detection Using Artificial Intelligence And: Machine Learning
"Mood Detection Using Artificial Intelligence And: Machine Learning
PROJECT REPORT ON
SUBMITTED BY
CERTIFICATE
This is to certify that the project report entitled “Mood Detection Using
Artificial Intelligence and Machine Learning” has been successfully
completed by
1. Ashish Sahebrao Shinde (EXAM SEAT NO)
2. Gaurav Nitin Sonawane (EXAM SEAT NO)
3. Tejas Sandeep Sonawane (EXAM SEAT NO)
4. Vishal Punjaram More (EXAM SEAT NO)
Contents
1. Introduction
2. Literature Survey
3. Scope of Project
4. Requirement Analysis and Specifications
5. Methodology and Algorithm
6. Modelling and Designing
7. Source Code
8. Testing
9. Conclusion
10. Future Scope
11. References
List of Figures:
4.1 Python
6.2 Use Case diagram of Proposed System
6.3 DFD Level 0 for proposed system
6.7 Neutral face
6.8 Execution 1
6.9 Execution 2
6.10 Results
List of Tables:
Test Cases for Mood Detection Code
ABSTRACT
Behaviours, actions, poses, facial expressions, and speech are considered channels that convey human emotions, and extensive research has been carried out to explore the relationships between these channels and emotions. This report proposes a prototype system which automatically recognizes the emotion represented on a face. A neural-network-based solution combined with image processing is used to classify the universal emotions: happiness, sadness, anger, disgust, surprise, and fear. Coloured frontal face images are given as input to the prototype system. After the face is detected, an image-processing-based feature-point extraction method is used to extract a set of selected feature points. Finally, a set of values obtained by processing those extracted feature points is given as input to the neural network to recognize the emotion they contain.
Computational analysis of emotions has been considered a challenging and interesting task. Researchers rely on various cues, such as physiological sensors and facial expressions, to identify human emotions; however, there is little prior work that uses textual input to analyse these emotions. This survey attempts to summarize the diverse approaches, datasets, and resources that have been reported for emotion analysis from text. We feel there is an essential need for a collective understanding of the research in this area. Therefore, we report trends in emotion analysis research, present a research matrix that summarizes past work, and list pointers to future work.
Introduction
1.1 Overview
The dual fears of identity theft and password hacking are now becoming a reality, and one promising route to securing data is behavioural systems, that is, systems based on user behaviour. Behavioural traits are almost impossible to steal, and multiple commercial, civilian, and government entities have already started using behavioural biometrics to secure sensitive data. One of the major components of behavioural biometrics is the recognition of facial emotion and its intensity. In industry and academic research, physiological traits have long been used for identification through biometrics. No level of biometrics can be performed without good sensors, and when it comes to recognizing facial emotion intensity, apart from high-quality sensors (cameras), there is a need for efficient algorithms that recognize emotional intensity in real time.
With the increased use of images over the past decade, automated facial analytics, such as face detection, recognition, and expression recognition along with its intensity, have gained importance and are useful in security and forensics. Components such as behaviour, voice, posture, vocal intensity, and the emotional intensity of the person depicting the emotion, when combined, help in measuring and recognizing various emotions. With the advent of modern technology our expectations have grown and know no bounds, and in the present era a great deal of research is under way in the field of digital images and image processing.
Progress has been exponential and is ever increasing. Image processing is a vast area of research in the present-day world, and its applications are widespread: it is the field of signal processing in which both the input and output signals are images. One of the most important applications of image processing is facial expression recognition. Our emotions are revealed by the expressions on our faces, and facial expressions play an important role in interpersonal communication. A facial expression is a nonverbal gesture that appears on the face according to our emotions. Automatic recognition of facial expressions plays an important role in artificial intelligence and robotics and is thus a need of the generation. Related applications include personal identification and access control, videophones and teleconferencing, forensics, human-computer interaction, automated surveillance, cosmetology, and so on.
Literature Survey
6. "Movie emotional event detection based on music mood and video tempo", 2015 International Conference on Technologies for Sustainable Development (ICTSD): Movies are one of the most important forms of entertainment in the history of mankind, but since the interests of viewers vary, it is difficult to satisfy their requirements with a single principle.
7. "Mood Detection and Prediction Based on User Daily Activities", 2009 5th International Colloquium on Signal Processing & Its Applications: Studies show that mood states influence the quality of our daily life and our activities, and the influence also runs the other way.
Existing System:
One existing system is a mood-sensing approach for classical music that works from acoustic data. Thayer's mood model is adopted as the taxonomy of moods, and three sets of acoustic features, representing intensity, timbre, and rhythm respectively, are extracted directly from the audio data. A hierarchical framework is used to detect the mood of a music clip, and, to detect mood across a complete piece of music, a segmentation scheme is presented to track mood changes. This algorithm achieves satisfactory precision in experimental evaluations. On the image side, the feature-extraction method is SIFT: SIFT features are effective for describing fine edges and appearance characteristics, because the deformations corresponding to facial expressions mainly take the form of lines and wrinkles. The classification method is the neural network; artificial neural networks are well suited to recognizing emotions from action units, since these techniques emulate unconscious human problem-solving processes.
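To make the SIFT-based feature extraction concrete, the sketch below shows how such descriptors could be computed from a detected face region with OpenCV. This is a minimal illustration under assumptions (the input image path, cascade choice, and detector parameters are placeholders), not the exact pipeline of the surveyed system.

import cv2

# Minimal sketch: SIFT descriptors from a detected face region.
# The image path, cascade file, and parameters are illustrative assumptions.
img = cv2.imread("face.jpg")  # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Face detection with a Haar cascade bundled with OpenCV
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces_found = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)

sift = cv2.SIFT_create()  # available in OpenCV >= 4.4
for (x, y, w, h) in faces_found:
    roi = gray[y:y + h, x:x + w]
    keypoints, descriptors = sift.detectAndCompute(roi, None)
    if descriptors is not None:
        # Each descriptor is a 128-dimensional vector describing the local
        # edge/appearance structure (lines, wrinkles) around a keypoint.
        print(len(keypoints), "keypoints,", descriptors.shape, "descriptors")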
Scope of Project
Significant debate has arisen in the past regarding the emotions portrayed in the world-famous Mona Lisa. The British weekly New Scientist has stated that she is in fact a blend of many different emotions: 83% happy, 9% disgusted, 6% fearful, and 2% angry. We have also been motivated by observing the difficulties of people with hearing and speech impairments: if a fellow human being or an automated system can understand their needs by observing their facial expressions, it becomes much easier for them to communicate those needs.
In the present situation, people face many emotional issues that put them under enormous depression and stress, whether due to financial or personal problems. To help reduce this, we are going to build a system that tries to relieve them of these emotional issues.
Human facial expressions convey a great deal of information visually rather than verbally, and facial expression recognition plays a crucial role in the area of human-machine interaction. An automatic facial expression recognition system has many applications, including, but not limited to, human behaviour understanding, detection of mental disorders, and synthetic human expressions. Recognition of facial expressions by computer with a high recognition rate is still a challenging task. The two methods most commonly used in the literature for automatic FER systems are based on geometry and on appearance. Facial expression recognition is usually performed in four stages: pre-processing, face detection, feature extraction, and expression classification. In this project we applied deep learning methods (convolutional neural networks) to identify the seven key human emotions: anger, disgust, fear, happiness, sadness, surprise, and neutrality.
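To make the classification stage concrete, the sketch below defines a small convolutional network for the seven emotion classes. It is a minimal example assuming TensorFlow/Keras and 48x48 grayscale inputs (the format used by common facial expression datasets), not the exact architecture trained in this project.

# Minimal sketch of a CNN for 7-class facial expression recognition.
# Assumes TensorFlow/Keras and 48x48 grayscale inputs.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(48, 48, 1)),
    layers.Conv2D(32, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(7, activation='softmax'),  # anger, disgust, fear, happiness,
                                            # sadness, surprise, neutrality
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.summary()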
Requirement Analysis and Specifications
The requirement analysis covers the functional and non-functional requirements, together with the hardware and software requirements. The non-functional requirements include feasibility, reliability, and scalability. There are no extra hardware requirements beyond those listed below, and the software requirements include the Python programming language.
Maintainability: No dedicated human resources are required to maintain the components or to collect the raw data from each of them.
Reusability: The components are compatible with a changing environment and support upgradeability.
Availability: The system is functional throughout, and data transfer takes place only when the user requests it.
Usability: The system is user friendly, as it uses a simple networking model such as ZigBee.
Reliability: The system is highly consistent and reliable.
Hardware (PC/Laptop):
i7 processor
8 GB RAM (minimum)
1 TB HDD
Software:
Python
PyCharm
Fig. 4.1 Python
Methodology and Algorithm
The proposed system introduces a chatbot, a computer program that conducts a conversation via auditory or textual methods. The bot chats with the user in such a way that the user never realises it is actually a computer they are chatting with. There is an automatic interface that suggests jokes and songs according to the user's mood. The system is able to detect stress, and on detecting stress it pops up inspirational quotes on the screen and provides links to web pages with motivational speeches. The content provided by the system lifts the user's mood, helping the user work efficiently and enhancing their performance.
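The sketch below illustrates this mood-to-response logic in Python; the emotion labels, quotes, and URLs are illustrative placeholders rather than the project's actual content.

# Minimal sketch: choosing chatbot content from a detected mood.
# The labels, quotes, and links are illustrative placeholders.
import random

RESPONSES = {
    "happy": {"jokes": ["Why did the coder quit? No arrays of hope!"]},
    "sad": {"quotes": ["Tough times never last; tough people do."],
            "links": ["https://example.com/motivational-speech"]},   # placeholder URL
    "stressed": {"quotes": ["One step at a time."],
                 "links": ["https://example.com/relaxing-music"]},   # placeholder URL
}

def respond_to_mood(mood):
    """Return one piece of content appropriate to the detected mood."""
    content = RESPONSES.get(mood, {"quotes": ["Keep going!"]})
    kind = random.choice(list(content))  # e.g. jokes / quotes / links
    return kind, random.choice(content[kind])

print(respond_to_mood("sad"))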
Modelling and Designing
Fig. 6.2 Use Case diagram of Proposed System
Fig. 6.3 DFD Level 0 for proposed system
SCREENSHOTS
Fig. 6.7 Neutral face
Fig. 6.8 Execution 1
Fig. 6.9 Execution 2
Fig. 6.10 Results
Chapter 7
Source Code
import matplotlib
matplotlib.use('TkAgg')  # Select the Tk backend before pyplot is imported
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg
import matplotlib.pyplot as plt
import numpy as np
import json
import subprocess
import sys

from sklearn import datasets

try:
    import FileDialog  # Needed for PyInstaller (Python 2 only)
except ImportError:
    pass  # Module does not exist on Python 3

if sys.version_info[0] < 3:
    import Tkinter as Tk
else:
    import tkinter as Tk

print(__doc__)

faces = datasets.fetch_olivetti_faces()

# ==========================================================================
# Traverses through the dataset by incrementing the index & records results
# ==========================================================================
class Trainer:
    def __init__(self):
        self.results = {}
        self.imgs = faces.images
        self.index = 0

    def reset(self):
        print("============================================")
        print("Resetting Dataset & Previous Results.. Done!")
        print("============================================")
        self.results = {}
        self.imgs = faces.images
        self.index = 0

    def record_result(self, smile=True):
        # Reconstructed: this method is called by the button callbacks below
        # but was missing from the extracted listing.
        self.results[str(self.index)] = smile
    def increment_face(self):
        if self.index + 1 >= len(self.imgs):
            return self.index
        else:
            while str(self.index) in self.results:
                # print(self.index)
                self.index += 1
            return self.index
# ===================================
# Callback functions for the buttons
# ===================================
## smileCallback()   : Called when the "Happy" button is pressed
## noSmileCallback() : Called when the "Sad" button is pressed
## updateImageCount(): Displays the number of images processed
## displayFace()     : Called internally by either of the button presses
## displayBarGraph(isBarGraph): Draws the bar graph once classification is 100% complete
## _begin()          : Resets the dataset & starts from the beginning
## _quit()           : Quits the application
## printAndSaveResult(): Saves and prints the classification result
## loadResult()      : Loads the previously stored classification result
## run_once(m)       : Decorator that allows a function to run only once
def run_once(m):
    def wrapper(*args, **kwargs):
        if not wrapper.has_run:
            wrapper.has_run = True
            return m(*args, **kwargs)
    wrapper.has_run = False
    return wrapper


def smileCallback():
    trainer.record_result(smile=True)
    trainer.increment_face()
    displayFace(trainer.imgs[trainer.index])
    updateImageCount(happyCount=True, sadCount=False)
def noSmileCallback():
    trainer.record_result(smile=False)
    trainer.increment_face()
    displayFace(trainer.imgs[trainer.index])
    updateImageCount(happyCount=False, sadCount=True)


def updateImageCount(happyCount, sadCount):
    # Reconstructed: referenced throughout but missing from the extracted
    # listing. Updates the two status labels created in the main block.
    global HCount, SCount
    if happyCount:
        HCount += 1
    if sadCount:
        SCount += 1
    percent = 100 * (trainer.index + 1) // len(faces.images)
    labelVar.set("Image Index: %d/%d [%d %%]"
                 % (trainer.index + 1, len(faces.images), percent))
    countVar.set("(Happy: %d   Sad: %d)\n" % (HCount, SCount))


@run_once
def displayBarGraph(isBarGraph):
    ax[1].axis(isBarGraph)
    n_groups = 1  # Data to plot
    Happy, Sad = (sum([trainer.results[x] == True for x in trainer.results]),
                  sum([trainer.results[x] == False for x in trainer.results]))
    index = np.arange(n_groups)  # Create plot
    bar_width = 0.5
    opacity = 0.75
    ax[1].bar(index, Happy, bar_width, alpha=opacity, color='b', label='Happy')
    ax[1].bar(index + bar_width, Sad, bar_width, alpha=opacity, color='g', label='Sad')
    ax[1].set_ylim(0, max(Happy, Sad) + 10)
    ax[1].set_xlabel('Expression')
    ax[1].set_ylabel('Number of Images')
    ax[1].set_title('Training Data Classification')
    ax[1].legend()


@run_once
def printAndSaveResult():
    print(trainer.results)  # Prints the results
    with open("../results/results.xml", 'w') as output:
        json.dump(trainer.results, output)  # Saving the result


@run_once
def loadResult():
    results = json.load(open("../results/results.xml"))
    trainer.results = results


def displayFace(face):
    ax[0].imshow(face, cmap='gray')
    # Switch the bar graph on once the last image has been classified
    isBarGraph = 'on' if trainer.index + 1 == len(faces.images) else 'off'
    if isBarGraph == 'on':
        displayBarGraph(isBarGraph)
        printAndSaveResult()
    # f.tight_layout()
    canvas.draw()


def _opencv():
    print("\n\n Please Wait.......")
    opencvProcess = subprocess.Popen("Train Classifier and Test Video Feed.py",
                                     close_fds=True, shell=True)
    # os.system('"Train Classifier.exe"')
    # opencvProcess.communicate()


def _begin():
    trainer.reset()
    global HCount, SCount
    HCount = 0
    SCount = 0
    updateImageCount(happyCount=False, sadCount=False)
    displayFace(trainer.imgs[trainer.index])
def _quit():
    root.quit()     # Stops mainloop
    root.destroy()  # Necessary on Windows to prevent
                    # "Fatal Python Error: PyEval_RestoreThread: NULL tstate"


if __name__ == "__main__":
    # Embedding things in a Tkinter window & starting the Tkinter plot
    root = Tk.Tk()
    root.wm_title("Emotion Recognition Using Scikit-Learn & OpenCV")

    # =======================================
    # Class instances & starting the plot
    # =======================================
    trainer = Trainer()

    # Reconstructed: the figure creation was missing from the extracted
    # listing; ax[0] shows the current face and ax[1] the bar graph.
    f, ax = plt.subplots(1, 2)

    # Embedding the Matplotlib figure 'f' into the Tkinter canvas
    canvas = FigureCanvasTkAgg(f, master=root)
    canvas.draw()  # canvas.show() was removed in newer Matplotlib releases
    canvas.get_tk_widget().pack(side=Tk.TOP, fill=Tk.BOTH, expand=1)

    labelVar = Tk.StringVar()
    label = Tk.Label(master=root, textvariable=labelVar)
    imageCountString = "Image Index: 0/400 [0 %]"  # Initial text
    labelVar.set(imageCountString)
    label.pack(side=Tk.TOP)

    countVar = Tk.StringVar()
    HCount = 0
    SCount = 0
    countLabel = Tk.Label(master=root, textvariable=countVar)
    countString = "(Happy: 0   Sad: 0)\n"  # Initial text
    countVar.set(countString)
    countLabel.pack(side=Tk.TOP)

    # Reconstructed: the button wiring was missing from the extracted listing.
    smileButton = Tk.Button(master=root, text='Happy', command=smileCallback)
    smileButton.pack(side=Tk.LEFT)
    noSmileButton = Tk.Button(master=root, text='Sad', command=noSmileCallback)
    noSmileButton.pack(side=Tk.RIGHT)
    quitButton = Tk.Button(master=root, text='Quit', command=_quit)
    quitButton.pack(side=Tk.BOTTOM)

    authorVar = Tk.StringVar()
    authorLabel = Tk.Label(master=root, textvariable=authorVar)
    authorString = "\n\n Developed By: " \
                   "\n Saurabh Dashpute and Kunal Sonawne " \
                   "\n (Kunal957ss@gmail.com) " \
                   "\n [Final Year Project - 2021]"  # Initial text
    authorVar.set(authorString)
    authorLabel.pack(side=Tk.BOTTOM)

    _begin()  # Reconstructed initial call: show the first face

    root.iconbitmap(r'..\icon\happy-sad.ico')
    Tk.mainloop()  # Starts the Tk main loop
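To run the listing above, Python 3 with scikit-learn (which downloads the Olivetti faces dataset on first use), Matplotlib, and a Tk-enabled interpreter are required. The relative paths (../results/results.xml and ..\icon\happy-sad.ico) and the companion script "Train Classifier and Test Video Feed.py" follow the original project layout and must be present for result saving, the window icon, and the OpenCV step to work.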
Testing
Unit testing:
Unit testing is a level of software testing where individual units or components of the software are tested. The purpose is to validate that each unit of the software performs as designed. A unit is the smallest testable part of any software; it usually has one or a few inputs and usually a single output. In procedural programming a unit may be an individual program, function, or procedure; in object-oriented programming the smallest unit is a method, which may belong to a base/super class, an abstract class, or a derived/child class. (Some treat a module of an application as a unit. This is to be discouraged, as there will probably be many individual units within that module.) Unit testing frameworks, drivers, stubs, and mock/fake objects are used to assist in unit testing.
Unit testing increases confidence in changing and maintaining code. If good unit tests are written, and if they are run every time any code is changed, defects introduced by the change are caught promptly. Also, if code has already been made less interdependent to make unit testing possible, the unintended impact of changes to any one piece of code is smaller. Code becomes more reusable, too: to make unit testing possible, code needs to be modular, and modular code is easier to reuse.
Development is faster. How? Without unit testing in place, you write your code and perform a fuzzy 'developer test': you set some breakpoints, fire up the GUI, provide a few inputs that hopefully hit your code, and hope you are all set. With unit testing in place, you write the test, write the code, and run the test. Writing tests takes time, but that time is offset by the small amount of time it takes to run them; you need not fire up the GUI and provide all those inputs. And, of course, unit tests are more reliable than 'developer tests'. Development is faster in the long run too: the effort required to find and fix defects found during unit testing is much smaller than the effort required to fix defects found during system or acceptance testing. The cost of fixing a defect detected during unit testing is lower than that of defects detected at higher levels; compare the cost (time, effort, destruction, humiliation) of a defect detected during acceptance testing, or when the software is live. Debugging is also easier: when a test fails, only the latest changes need to be debugged, whereas with testing at higher levels, changes made over the span of several days, weeks, or months need to be examined.
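As a concrete example, the following sketch unit-tests the index-tracking logic of the Trainer class from Chapter 7 using Python's built-in unittest. The class is re-declared here with a stubbed image list so the test is self-contained; in practice it would be imported from the trainer module.

# Minimal sketch: unit tests for the Trainer index/record logic.
import unittest

class Trainer:
    def __init__(self, imgs):
        self.results = {}
        self.imgs = imgs
        self.index = 0

    def record_result(self, smile=True):
        self.results[str(self.index)] = smile

    def increment_face(self):
        if self.index + 1 >= len(self.imgs):
            return self.index
        while str(self.index) in self.results:
            self.index += 1
        return self.index

class TrainerTest(unittest.TestCase):
    def test_increment_skips_classified_faces(self):
        t = Trainer(imgs=[0, 1, 2])   # three dummy "images"
        t.record_result(smile=True)   # classify image 0
        self.assertEqual(t.increment_face(), 1)

    def test_increment_stops_at_last_image(self):
        t = Trainer(imgs=[0])
        self.assertEqual(t.increment_face(), 0)

if __name__ == "__main__":
    unittest.main()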
Integration Testing:
Integration testing is a level of software testing where individual units are combined and tested as a group. The purpose of this level of testing is to expose faults in the interaction between integrated units; test drivers and test stubs are used to assist. It is testing performed to expose defects in the interfaces and in the interactions between integrated components or systems.
Big Bang is an approach to integration testing where all or most of the units are combined and tested in one go. This approach is taken when the testing team receives the entire software in a bundle.
So, what is the difference between Big Bang integration testing and system testing? The former tests only the interactions between the units, while the latter tests the entire system.
Top Down is an approach to integration testing where top-level units are tested first and lower-level units are tested step by step after that. This approach is taken when top-down development is followed. Test stubs are needed to simulate lower-level units, which may not be available during the initial phases.
System Testing:
System testing is a level of software testing where the complete, integrated software is tested. The purpose of this test is to evaluate the system's compliance with the specified requirements. Consider the manufacture of a ballpoint pen: the cap, the body, the tail, the ink cartridge, and the ballpoint are produced and unit tested separately, integration testing is performed as they are assembled, and when the complete pen is put together, system testing is performed.
Regression Testing:
Regression testing is the process of testing changes to computer programs to make sure that the
older programming still works with the new changes. Regression testing is a normal part of the
program development process and, in larger companies, is done by code testing specialists. Test
department coders develop code test scenarios and exercises that will test new units of code after they
have been written. These test cases form what becomes the test bucket. Before a new version of a
software product is released, the old test cases are run against the new version to make sure that all
the old capabilities still work. The reason they might not work is because changing or adding new
code to a program can easily introduce errors into code that is not intended to be changed.
As software is updated or changed, or reused on a modified target, the emergence of new faults and/or re-emergence of old faults is quite common. Sometimes re-emergence occurs because a fix gets lost through poor revision control practices (or simple human error in revision control). Often, a fix
for a problem will be "fragile" in that it fixes the problem in the narrow case where it was first
observed but not in more general cases which may arise over the lifetime of the software. Frequently,
a fix for a problem in one area inadvertently causes a software bug in another area. Finally, it may
happen that, when some feature is redesigned, some of the same mistakes that were made in the
original implementation of the feature are made in the redesign.
Test Cases for Mood Detection Code:
Sr No | Use Case   | Description          | Actors   | Assumptions                         | Result
------+------------+----------------------+----------+-------------------------------------+-------
1     | Use Case 1 | Check Cam            | Camera   | Camera should be in good condition  | Pass
2     | Use Case 2 | Creation of Datasets | Datasets | Datasets should be created          | Pass
4     | Use Case 4 | Mood Identification  | User     | User's mood should be identifiable  | Pass
Conclusion
The facial expression recognition system presented in this work contributes a resilient face recognition model based on mapping behavioural characteristics onto physiological biometric characteristics. The physiological characteristics of the human face relevant to various expressions, such as happiness, sadness, fear, anger, surprise, and disgust, are associated with geometrical structures which are stored as the base matching template for the recognition system. The behavioural aspect of this system relates the attitude behind different expressions as a property base; the property bases are divided into exposed and hidden categories in the genetic algorithm's genes.
The gene training set evaluates the expressional uniqueness of individual faces and provides a resilient expression recognition model for the field of biometric security. The design of a novel asymmetric cryptosystem based on biometrics, with features such as hierarchical group security, eliminates the use of passwords and smart cards, as opposed to earlier cryptosystems, although it requires special hardware support like all other biometric systems. This work promises a new direction of research in the field of asymmetric biometric cryptosystems, which is highly desirable in order to do away with passwords and smart cards completely. Experimental analysis and study show that the hierarchical security structures are effective in geometric shape identification for physiological traits.
Future Scope
The global emotion detection and recognition market is projected to grow from USD 19.5 billion in 2020 to USD 37.1 billion by 2026, at a Compound Annual Growth Rate (CAGR) of 11.3% over the forecast period. The major factors driving this growth include the rising need for speech-based emotion detection systems to analyse emotional states, the adoption of IoT, AI, ML, and deep learning technologies across the globe, growing demand in the automotive AI industry, the growing need for high operational excellence, and the rising need for socially intelligent artificial agents.
References
C.-H. Wu, J.-C. Lin, W.-B. Liang, and K.-C. Cheng, "Hierarchical Modelling of Temporal Course in Emotional Expression for Speech Emotion Recognition," 2015 Int'l Workshop on Affective Social Multimedia Computing (ASMMC 2015), Xi'an, China, 2015.
https://www.marketsandmarkets.com/Market-Reports/emotion
https://www.iflexion.com/blog/emotion-recognition-software
https://ieeexplore.ieee.org/