"Mood Detection Using Artificial Intelligence And: Machine Learning

Download as docx, pdf, or txt
Download as docx, pdf, or txt
You are on page 1of 38

MARATHA VIDYA PRASARAK SAMAJ’S

RAJARSHI SHAHU MAHARAJ POLYTECHNIC


NASHIK-422013

PROJECT REPORT ON

“Mood Detection Using Artificial Intelligence and


Machine Learning”

SUBMITTED BY

1. Ashish Sahebrao Shinde (EXAM SEAT NO)
2. Gaurav Nitin Sonawane (EXAM SEAT NO)
3. Tejas Sandeep Sonawane (EXAM SEAT NO)
4. Vishal Punjaram More (EXAM SEAT NO)

UNDER THE GUIDANCE OF


Prof. G. N. Handge

DEPARTMENT OF COMPUTER ENGINEERING

MARATHA VIDYA PRASARAK SAMAJ’S


RAJARSHI SHAHU MAHARAJ POLYTECHNIC
NASHIK-422013
2020-2021
MARATHA VIDYA PRASARAK SAMAJ’S
RAJARSHI SHAHU MAHARAJ POLYTECHNIC
NASHIK-422013

CERTIFICATE

This is to certify that the project report entitled “Mood Detection Using
Artificial Intelligence and Machine Learning” has been successfully
completed by
1. Ashish Sahebrao Shinde (EXAM SEAT NO)
2. Gaurav Nitin Sonawane (EXAM SEAT NO)
3. Tejas Sandeep Sonawane (EXAM SEAT NO)
4. Vishal Punjaram More (EXAM SEAT NO)

as partial fulfillment of the Diploma course in Computer Engineering under the
Maharashtra State Board of Technical Education, Mumbai, during the academic year
2020-2021.
The said work has been carried out under my guidance and assessed by us, and we are
satisfied that the same is up to the standard envisaged for the level of the course.

Prof. G. N. Handge                Prof. P. D. Boraste                Dr. D. B. Uphade
Project Guide                     Head of Department                 Principal

External Examiner                                                    Institute Seal

ACKNOWLEDGEMENT
With all respect and gratitude, I would like to thank all the people who have helped me,
directly or indirectly, in the completion of this project. I express my heartfelt gratitude to
Prof. G. N. Handge for guiding me to understand the work conceptually and for his constant
encouragement to complete this project on “Mood Detection Using Artificial Intelligence and
Machine Learning”. My association with him as a student has been extremely inspiring, and I
express my sincere thanks to him for his kind help and guidance. I would also like to give my
sincere thanks to Prof. P. D. Boraste, Project Coordinator and Head of the Computer
Engineering Department, for providing necessary help, facilities and valuable guidance from
time to time. No words could be good enough to express my deep gratitude to our honourable
Principal, Dr. D. B. Uphade, and the staff members of the Computer Engineering Department of
Maratha Vidya Prasarak Samaj’s Rajarshi Shahu Maharaj Polytechnic, Nashik, for providing all
necessary facilities along with their constant encouragement and support.
Finally, yet importantly, I would like to express my heartfelt thanks to my beloved parents for
their blessings, and to my friends and colleagues for their help and wishes for the successful
completion of this project.

1. Ashish Sahebrao Shinde (EXAM SEAT NO)


2. Gaurav Nitin Sonawane (EXAM SEAT NO)
3. Tejas Sandeep Sonawane (EXAM SEAT NO)
4. Vishal Punjaram More (EXAM SEAT NO)
INDEX

Sr. No.   Name of Topic                                   Page No.
          Abstract                                        I
1         Introduction                                    1
2         Literature Survey                               2
3         Scope of Project                                5
4         Requirement Analysis and Specifications         7
5         Methodology and Algorithm                       9
6         Modelling and Designing                         11
7         Source Code                                     20
8         Testing                                         26
9         Conclusion                                      29
10        Future Scope                                    30
11        References                                      31
List of Figures:

Fig. No.   Name of Figure                        Page No.
3.1        Expression Detection                  6
4.1        Python                                8
5.1        System Architecture                   9
5.2        Basic Flowchart                       10
6.1        Flowchart of Proposed System          11
6.2        Use Case of Proposed System           12
6.3        DFD Level 0 of Proposed System        13
6.4        DFD Level 1 of Proposed System        13
6.5        DFD Level 2 of Proposed System        14
6.6        Face Detection                        15
6.7        Neutral Face                          16
6.8        Execution 1                           17
6.9        Execution 2                           18
6.10       Results                               19
List of Tables:

Tab. No.   Title of Table                        Page No.
2.1        Literature Survey                     3
8.1        Test Cases for Mood Detection         28


Mood Detection Using Artificial Intelligence and Machine
Learning

ABSTRACT
Behaviours, actions, poses, facial expressions and speech are the channels that convey
human emotion, and extensive research has been carried out to explore the relationships
between these channels and emotions. This paper proposes a prototype system that
automatically recognizes the emotion represented on a face. A neural-network-based
solution combined with image processing is used to classify the universal emotions:
happiness, sadness, anger, disgust, surprise and fear. Coloured frontal face images are
given as input to the prototype system. After the face is detected, an image-processing-based
feature-point extraction method is used to extract a set of selected feature points, and the
values obtained from those feature points are then given as input to the neural network
to recognize the emotion they contain.
Computational analysis of emotion is also considered a challenging and interesting task in
its own right. Researchers rely on various cues, such as physiological sensors and facial
expressions, to identify human emotions; however, comparatively few prior works analyse
these emotions from textual input. This survey attempts to summarize the diverse
approaches, datasets and resources that have been reported for emotion analysis from text.
We feel there is an essential need for a collective understanding of the research in this
area; therefore, we report trends in emotion analysis research, present a research matrix
that summarizes past work, and list pointers to future work.

Introduction

1.1 Overview

The dual fears of identity theft and password hacking are now becoming a reality, and
behavioral systems offer one of the few promising ways of keeping data secure. Systems that are
based on user behavior are usually understood as behavioral systems, and behavioral traits are
almost impossible to steal. Multiple commercial, civilian and government entities have already
started using behavioral biometrics to secure sensitive data. One of the major components of
behavioral biometrics is the recognition of facial emotion and its intensity. In industry and
academic research, physiological traits have long been used for identification through
biometrics. No level of biometrics can be performed without good sensors, and when it comes to
recognising the intensity of facial emotion, apart from high-quality sensors (cameras), there
is a need for efficient algorithms that recognize emotional intensity in real time.

With the increased use of images over the past decade, automated facial analytics such as
face detection, face recognition, and expression recognition along with its intensity have
gained importance and are useful in security and forensics. Components such as behavior,
voice, posture, vocal intensity and the emotion intensity of the person depicting the emotion,
when combined, help in measuring and recognizing various emotions. With the advent of modern
technology our ambitions have grown and know no bounds, and a great deal of research is
currently being carried out in the field of digital images and image processing.

Progress has been exponential and is still accelerating. Image processing is a vast area of
research in the present-day world and its applications are very widespread. Image processing
is the field of signal processing where both the input and output signals are images. One of
the most important applications of image processing is facial expression recognition. Our
emotions are revealed by the expressions on our faces, and facial expressions play an
important role in interpersonal communication. A facial expression is a non-verbal gesture
that appears on our face according to our emotions. Automatic recognition of facial
expressions plays an important role in artificial intelligence and robotics and is thus a need
of the generation. Applications include personal identification and access control,
videophones and teleconferencing, forensic applications, human-computer interaction,
automated surveillance, cosmetology, and so on.

Literature Survey

1. Detection of mood disorder using modulation spectrum of facial action unit profiles
   Publication: IEEE Transactions on Vehicular Technology, 2018
   Description: In mood disorder diagnosis, bipolar disorder (BD) patients are often
   misdiagnosed as unipolar depression (UD) on initial presentation.

2. Automatic mood detection and tracking of music audio signals
   Publication: 2012 IEEE International WIE Conference on Electrical and Computer
   Engineering (WIECON-ECE)
   Description: Music mood describes the inherent emotional expression of a music clip. It is
   helpful in music understanding, music retrieval, and some other music-related applications.

3. Mood detection from daily conversational speech using denoising autoencoder and LSTM
   Publication: 2009 39th International Spring Seminar on Electronics Technology (ISSE)
   Description: In current studies, an extended subjective self-report method is generally
   used for measuring emotions, even though it is commonly accepted that speech conveys
   emotional information.

4. Learners' mood detection using Convolutional Neural Network (CNN)
   Publication: IEEE Sensors Journal, 2017
   Description: This research concerns classroom learners' mood detection during the learning
   process, which is believed to be important for improving learning.

5. Automatic mood detection of Indian music using MFCCs and k-means algorithm
   Publication: 2015 International Conference on Computer and Computational Sciences (ICCCS)
   Description: This paper proposes a method of identifying the mood underlying a piece of
   music by extracting suitable and robust features from the music clip.

6. Movie emotional event detection based on music mood and video tempo
   Publication: 2015 International Conference on Technologies for Sustainable Development
   (ICTSD)
   Description: Movies are among the most important entertainments in the history of mankind,
   but since viewers' interests vary, it is difficult to satisfy their requirements with only
   one principle.

7. Mood Detection and Prediction Based on User Daily Activities
   Publication: 2009 5th International Colloquium on Signal Processing & Its Applications
   Description: Studies show that mood states influence our daily life quality and activities,
   and not only the other way around.

8. Development of an intelligent guide-stick for the blind; Feature selection method for
   music mood score detection
   Publication: Proceedings 2001 ICRA, IEEE International Conference on Robotics and
   Automation (Cat. No. 01CH37164)
   Description: In general, music retrieval and classification methods using music moods
   employ many acoustic features similar to those used for music genre classification.

Table 2.1: Literature Survey

Existing System:

One existing approach is mood sensing for classical music from acoustic data. Thayer's mood
model is adopted as the taxonomy of moods, and three sets of features, representing intensity,
timbre and rhythm respectively, are extracted directly from the acoustic data. A hierarchical
framework is applied to each music clip, and to detect mood across a complete piece of music a
segmentation scheme is presented to track mood over time. This algorithm achieves satisfactory
precision in experimental evaluations. For facial data, the feature-extraction method used is
SIFT. SIFT features are effective at describing fine edges and appearance characteristics,
because the deformations corresponding to facial expressions mainly take the form of lines and
wrinkles. The classification method used is a neural network; artificial neural networks are
well suited to recognising emotions from action units, since these techniques emulate
unconscious human problem-solving processes.
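
Purely as an illustration of this feature-extraction step (not the project's exact code), the
sketch below extracts SIFT keypoints from a face crop using OpenCV; the file name face.jpg is a
placeholder, and a recent OpenCV build with SIFT support is assumed.

import cv2

# Load a face crop in grayscale; "face.jpg" is a placeholder path.
gray = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)
if gray is None:
    raise FileNotFoundError("face.jpg not found")

# Detect SIFT keypoints and compute their 128-dimensional descriptors.
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(gray, None)
print("Keypoints found:", len(keypoints))
print("Descriptor matrix shape:", None if descriptors is None else descriptors.shape)

The resulting descriptors could then be fed to a classifier such as the neural network
mentioned above.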

Scope of Project

Significant debate has arisen in the past regarding the emotions portrayed in the world-famous
masterpiece, the Mona Lisa. The British weekly New Scientist has stated that she is in fact a
blend of many different emotions: 83% happy, 9% disgusted, 6% fearful and 2% angry. We have
also been motivated by observing the needs of people with hearing and speech impairments. If
another person, or an automated system, can understand their needs by observing their facial
expressions, it becomes much easier for them to make that person or system understand what
they need.

3.1 Problem Statement:

Many people today face emotional issues that place them under enormous depression and stress,
whether due to personal or other problems. To help reduce this, we propose a system that tries
to relieve them of these emotional issues.

3.2 Proposed System:

Human facial expressions convey a great deal of information visually rather than verbally, and
facial expression recognition plays a crucial role in human-machine interaction. An automatic
facial expression recognition system has many applications, including but not limited to
understanding human behaviour, detecting mental disorders, and synthesizing human expressions.
Recognizing facial expressions by computer with a high recognition rate is still a challenging
task. The two methods used most often in the literature for automatic FER systems are based on
geometry and on appearance. Facial expression recognition is usually performed in four stages:
pre-processing, face detection, feature extraction, and expression classification. In this
project we applied deep learning methods (convolutional neural networks) to identify the seven
key human emotions: anger, disgust, fear, happiness, sadness, surprise and neutrality.
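
To make the classification stage concrete, the following is a minimal sketch, not the
project's exact network, of a CNN classifier for these seven emotions. It assumes 48x48
grayscale face crops (the format of the public FER-2013 dataset), illustrative layer sizes,
and TensorFlow/Keras as the framework.

import tensorflow as tf

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise", "neutrality"]

def build_emotion_cnn():
    """Build a small CNN that maps a 48x48x1 face crop to seven emotion scores."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(48, 48, 1)),
        tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(len(EMOTIONS), activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",  # integer emotion labels 0-6
                  metrics=["accuracy"])
    return model

Such a model would be trained on labelled face crops produced by the earlier pre-processing
and face-detection stages, and its softmax output taken as the recognized emotion.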

Requirement Analysis and Specifications

The requirement analysis includes the functional and non-functional requirements, and the
hardware and software requirements. The non-functional requirements include feasibility,
reliability and scalability. There are no extra hardware requirements. The software
requirements include programming languages such as Python.

4.1 FUNCTIONAL REQUIREMENT

4.1.1 Interface Requirement


 The system is capable of accepting and transmitting the raw data, which may be in digital
form, i.e. numeric values.
 Audit Trail: for each activity, the data will be recorded in the application audit trail.
 Capacity: the system has enough capacity to hold the data and process it.

4.2 NON-FUNCTIONAL REQUIREMENT

 Maintainability: No dedicated human resources are required to maintain the components or to
collect the raw data from each of them.
 Reusability: The components are compatible with a changing environment and support
upgradeability.
 Availability: The system is functional throughout, and data transfer takes place only when
the user requests it.
 Usability: The system is user friendly and simple to operate.
 Reliability: The system is highly consistent and reliable.

4.3 HARDWARE REQUIREMENT

 PC/Laptop:
 i7 Processor
 8 GB RAM (Min)
 1 TB HDD

4.4 SOFTWARE REQUIREMENT

 Python
 PyCharm

Fig. 4.1 Python

Methodology and Algorithm

The proposed system introduces a chatbot, which is simply a computer program that conducts a
conversation via auditory or textual methods. The bot chats with the person in such a way that
the person never realizes that they are actually chatting with a computer. There will be an
automatic interface that suggests jokes and songs according to the user's mood. The system
will be able to detect stress, and on detecting stress it will pop up inspirational quotes on
the screen; it will also be able to provide links to web pages with motivational speeches. The
content provided by the system is intended to boost the user's mood, helping the user work
efficiently and leading to improved performance.
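
The following sketch illustrates this idea only; it is not the project's actual
implementation. The mood labels, the quotes and the search-query links are assumptions made
for illustration.

import random

# Hypothetical content tables; the links and quotes are illustrative only.
URLS_BY_MOOD = {
    "sad": ["https://www.youtube.com/results?search_query=motivational+speech",
            "https://www.youtube.com/results?search_query=funny+jokes"],
    "angry": ["https://www.youtube.com/results?search_query=relaxing+music"],
    "happy": ["https://www.youtube.com/results?search_query=upbeat+songs"],
}
QUOTES = [
    "Every day may not be good, but there is something good in every day.",
    "Tough times never last, but tough people do.",
]

def respond_to_mood(mood):
    """Return a short chatbot reply for the detected mood."""
    if mood in ("sad", "angry"):
        quote = random.choice(QUOTES)
        link = random.choice(URLS_BY_MOOD[mood])
        return f'You seem {mood}. Here is a thought: "{quote}"\nThis might help: {link}'
    if mood == "happy":
        return "Great to see you happy! Keep it up: " + random.choice(URLS_BY_MOOD["happy"])
    return "How are you feeling today?"

if __name__ == "__main__":
    print(respond_to_mood("sad"))

In the full system, the mood argument would come from the facial-expression classifier rather
than being passed in by hand.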

Fig 5.1 System Architecture


Fig 5.2 Basic Flow Chart for Emotion Detection

Modelling and Designing

Fig 6.1 Flowchart of Proposed System

Fig. 6.2 Use Case diagram of Proposed System

Fig. 6.3 DFD Level 0 for proposed system

Fig. 6.4 DFD Level 1 for proposed system


Fig. 6.5 DFD Level 2 for Proposed System

SCREENSHOTS

Fig 6.6 Face Detection

Fig 6.7 Neutral Face

Fig 6.8 Execution 1

Fig 6.9 Execution 2

Fig 6.10 Results

Chapter 7
Source Code
import matplotlib
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg
import matplotlib.pyplot as plt
import numpy as np
import json
import subprocess
from sklearn import datasets
try:
    import FileDialog  # Needed for PyInstaller (Python 2 builds only)
except ImportError:
    pass

import sys
if sys.version_info[0] < 3:
    import Tkinter as Tk
else:
    import tkinter as Tk

print(__doc__)

# Olivetti faces: 400 grayscale face images used as the training set
faces = datasets.fetch_olivetti_faces()

# ==========================================================================
# Traverses through the dataset by incrementing the index & records the result
# ==========================================================================
class Trainer:
    def __init__(self):
        self.results = {}          # label per image index ("0" -> True/False)
        self.imgs = faces.images
        self.index = 0

    def reset(self):
        print("============================================")
        print("Resetting Dataset & Previous Results.. Done!")
        print("============================================")
        self.results = {}
        self.imgs = faces.images
        self.index = 0
    def increment_face(self):
        # Move on to the next image that has not been labelled yet
        if self.index + 1 >= len(self.imgs):
            return self.index
        else:
            while str(self.index) in self.results:
                self.index += 1
            return self.index

    def record_result(self, smile=True):
        print("Image", self.index + 1, ":", "Happy" if smile is True else "Sad")
        self.results[str(self.index)] = smile

# ===================================
# Callback functions for the buttons
# ===================================
## smileCallback()            : Gets called when the "Happy" button is pressed
## noSmileCallback()          : Gets called when the "Sad" button is pressed
## updateImageCount()         : Displays the number of images processed
## displayFace()              : Gets called internally by either of the button presses
## displayBarGraph(isBarGraph): Computes the bar graph once classification is 100% complete
## _begin()                   : Resets the dataset & starts from the beginning
## _quit()                    : Quits the application
## printAndSaveResult()       : Saves and prints the classification result
## loadResult()               : Loads the previously stored classification result
## run_once(m)                : Decorator that allows a function to run only once

def run_once(m):
    def wrapper(*args, **kwargs):
        if not wrapper.has_run:
            wrapper.has_run = True
            return m(*args, **kwargs)
    wrapper.has_run = False
    return wrapper

def smileCallback():
    trainer.record_result(smile=True)
    trainer.increment_face()
    displayFace(trainer.imgs[trainer.index])
    updateImageCount(happyCount=True, sadCount=False)

def noSmileCallback():
    trainer.record_result(smile=False)
    trainer.increment_face()
    displayFace(trainer.imgs[trainer.index])
    updateImageCount(happyCount=False, sadCount=True)

def updateImageCount(happyCount, sadCount):
    # Updated only when called by smileCallback/noSmileCallback
    global HCount, SCount, imageCountString, countString
    if happyCount is True and HCount < 400:
        HCount += 1
    if sadCount is True and SCount < 400:
        SCount += 1
    if HCount == 400 or SCount == 400:
        HCount = 0
        SCount = 0
    # --- Updating labels
    # -- Main count
    imageCountPercentage = str(float((trainer.index + 1) * 0.25)) \
        if trainer.index + 1 < len(faces.images) else "Classification DONE! 100"
    imageCountString = ("Image Index: " + str(trainer.index + 1) + "/400 "
                        + "[" + imageCountPercentage + " %]")
    labelVar.set(imageCountString)  # Updating the label (image count)
    # -- Individual counts
    countString = "(Happy: " + str(HCount) + " " + "Sad: " + str(SCount) + ")\n"
    countVar.set(countString)

@run_once
def displayBarGraph(isBarGraph):
    ax[1].axis(isBarGraph)
    n_groups = 1  # Data to plot
    Happy, Sad = (sum([trainer.results[x] == True for x in trainer.results]),
                  sum([trainer.results[x] == False for x in trainer.results]))
    index = np.arange(n_groups)  # Create plot
    bar_width = 0.5
    opacity = 0.75
    ax[1].bar(index, Happy, bar_width, alpha=opacity, color='b', label='Happy')
    ax[1].bar(index + bar_width, Sad, bar_width, alpha=opacity, color='g', label='Sad')
    ax[1].set_ylim(0, max(Happy, Sad) + 10)
    ax[1].set_xlabel('Expression')
    ax[1].set_ylabel('Number of Images')
    ax[1].set_title('Training Data Classification')
    ax[1].legend()

@run_once
def printAndSaveResult():
    print(trainer.results)  # Prints the results
    with open("../results/results.xml", 'w') as output:
        json.dump(trainer.results, output)  # Saving the result (JSON, despite the .xml name)

@run_once
def loadResult():
    results = json.load(open("../results/results.xml"))
    trainer.results = results

def displayFace(face):
    ax[0].imshow(face, cmap='gray')
    # Switch the bar graph on once every image has been labelled
    isBarGraph = 'on' if trainer.index + 1 == len(faces.images) else 'off'
    if isBarGraph == 'on':
        displayBarGraph(isBarGraph)
        printAndSaveResult()
    # f.tight_layout()
    canvas.draw()

def _opencv():
    print("\n\n Please Wait.......")
    # Launches the separate training/testing script in a new process
    opencvProcess = subprocess.Popen("Train Classifier and Test Video Feed.py",
                                     close_fds=True, shell=True)
    # os.system('"Train Classifier.exe"')
    # opencvProcess.communicate()

def _begin():
    trainer.reset()
    global HCount, SCount
    HCount = 0
    SCount = 0
    updateImageCount(happyCount=False, sadCount=False)
    displayFace(trainer.imgs[trainer.index])

def _quit():
    root.quit()     # stops mainloop
    root.destroy()  # this is necessary on Windows to prevent
                    # Fatal Python Error: PyEval_RestoreThread: NULL tstate

if __name__ == "__main__":
    # Embedding things in a tkinter window & starting the tkinter main loop
    matplotlib.use('TkAgg')
    root = Tk.Tk()
    root.wm_title("Emotion Recognition Using Scikit-Learn & OpenCV")

    # =======================================
    # Class instances & starting the plot
    # =======================================
    trainer = Trainer()

    # Creating the figure to be embedded into the tkinter window
    f, ax = plt.subplots(1, 2)
    ax[0].imshow(faces.images[0], cmap='gray')
    ax[1].axis('off')  # Initially keeping the bar graph OFF

    # Embedding the Matplotlib figure 'f' into the Tkinter canvas
    canvas = FigureCanvasTkAgg(f, master=root)
    canvas.draw()
    canvas.get_tk_widget().pack(side=Tk.TOP, fill=Tk.BOTH, expand=1)

    print("Keys in the Dataset: ", faces.keys())
    print("Total Images in Olivetti Dataset:", len(faces.images))

    # =======================================
    # Declaring button & label instances
    # =======================================
    smileButton = Tk.Button(master=root, text='Smiling', command=smileCallback)
    smileButton.pack(side=Tk.LEFT)

    noSmileButton = Tk.Button(master=root, text='Not Smiling',
                              command=noSmileCallback)
    noSmileButton.pack(side=Tk.RIGHT)

    labelVar = Tk.StringVar()
    label = Tk.Label(master=root, textvariable=labelVar)

    imageCountString = "Image Index: 0/400 [0 %]"  # Initial text
    labelVar.set(imageCountString)
    label.pack(side=Tk.TOP)

    countVar = Tk.StringVar()
    HCount = 0
    SCount = 0
    countLabel = Tk.Label(master=root, textvariable=countVar)
    countString = "(Happy: 0 Sad: 0)\n"  # Initial text
    countVar.set(countString)
    countLabel.pack(side=Tk.TOP)

    opencvButton = Tk.Button(master=root,
                             text='Load the "Trained Classifier" & Test Output',
                             command=_opencv)
    opencvButton.pack(side=Tk.TOP)

    resetButton = Tk.Button(master=root, text='Reset', command=_begin)
    resetButton.pack(side=Tk.TOP)

    quitButton = Tk.Button(master=root, text='Quit Application', command=_quit)
    quitButton.pack(side=Tk.TOP)

    authorVar = Tk.StringVar()
    authorLabel = Tk.Label(master=root, textvariable=authorVar)
    authorString = "\n\n Developed By: " \
                   "\n Saurabh Dashpute and Kunal Sonawne " \
                   "\n (Kunal957ss@gmail.com) " \
                   "\n [Final Year Project - 2021]"
    authorVar.set(authorString)
    authorLabel.pack(side=Tk.BOTTOM)

    root.iconbitmap(r'..\icon\happy-sad.ico')
    Tk.mainloop()  # Starts the main loop required by Tk
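
The companion script "Train Classifier and Test Video Feed.py", which the GUI launches, is not
reproduced in this report. The following is only a rough sketch of what such a script could
look like, under the assumption that it trains an SVM on the happy/sad labels saved above and
classifies webcam faces detected with a Haar cascade; every detail below is illustrative
rather than the project's actual code.

import json
import cv2
import numpy as np
from sklearn import datasets, svm

# Train a simple happy/sad classifier from the labels saved by the GUI above.
faces = datasets.fetch_olivetti_faces()
results = json.load(open("../results/results.xml"))
indices = [int(i) for i in results]
X = faces.data[indices]                                       # 64x64 faces, flattened
y = np.array([1 if results[str(i)] else 0 for i in indices])  # 1 = happy, 0 = sad
clf = svm.SVC(kernel="linear").fit(X, y)

# Detect faces in the webcam feed and classify each one.
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y0, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
        face = cv2.resize(gray[y0:y0 + h, x:x + w], (64, 64)).flatten() / 255.0
        label = "Happy" if clf.predict([face])[0] == 1 else "Sad"
        cv2.rectangle(frame, (x, y0), (x + w, y0 + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y0 - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
    cv2.imshow("Mood detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()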

Testing

Unit testing:

It is a level of software testing where individual units or components of the software are
tested. The purpose is to validate that each unit of the software performs as designed. A unit
is the smallest testable part of any software; it usually has one or a few inputs and usually
a single output. In procedural programming, a unit may be an individual program, function or
procedure. In object-oriented programming, the smallest unit is a method, which may belong to
a base/super class, an abstract class or a derived/child class. (Some treat a module of an
application as a unit; this is discouraged, as there will probably be many individual units
within that module.) Unit-testing frameworks, drivers, stubs and mock/fake objects are used to
assist in unit testing. Unit testing increases confidence in changing and maintaining code: if
good unit tests are written and they are run every time any code is changed, defects introduced
by the change are caught promptly. Also, because code must be made less interdependent to make
unit testing possible, the unintended impact of changes to any code is smaller, and the code
becomes more modular and therefore easier to reuse.

Development is faster. How? If you do not have unit testing in place, you write your code and
perform a fuzzy "developer test": you set some breakpoints, fire up the GUI, provide a few
inputs that hopefully hit your code, and hope that you are all set. With unit testing in place,
you write the test, write the code and run the test. Writing tests takes time, but that time is
compensated by the much smaller amount of time it takes to run the tests; you need not fire up
the GUI and provide all those inputs. And, of course, unit tests are more reliable than
"developer tests". Development is faster in the long run too: the effort required to find and
fix defects found during unit testing is far less than the effort required to fix defects found
during system or acceptance testing. The cost of fixing a defect detected during unit testing
is lower than that of defects detected at higher levels; compare it with the cost (time,
effort, destruction, humiliation) of a defect detected during acceptance testing or when the
software is live. Debugging is also easier: when a test fails, only the latest changes need to
be debugged, whereas with testing at higher levels, changes made over the span of several days,
weeks or months may need to be scanned.
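
As a small illustration in the project's own language, the sketch below unit-tests a
hypothetical preprocess_face() helper of the kind the pre-processing stage would use. The
helper is defined inline purely so the test is self-contained; it is not part of the project
code.

import unittest
import numpy as np


def preprocess_face(img):
    """Normalise a grayscale face crop to the 0-1 range and add a channel axis."""
    face = img.astype("float32") / 255.0
    return face[..., np.newaxis]


class PreprocessFaceTest(unittest.TestCase):
    def test_output_shape_and_value_range(self):
        fake_face = np.random.randint(0, 256, size=(48, 48), dtype=np.uint8)
        out = preprocess_face(fake_face)
        self.assertEqual(out.shape, (48, 48, 1))
        self.assertLessEqual(out.max(), 1.0)
        self.assertGreaterEqual(out.min(), 0.0)


if __name__ == "__main__":
    unittest.main()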

Integration Testing:

Integration testing is a level of software testing where individual units are combined and
tested as a group. The purpose of this level of testing is to expose faults in the interaction
between integrated units; test drivers and test stubs are used to assist in it. It is the
testing performed to expose defects in the interfaces and in the interactions between
integrated components or systems.

Big Bang is an approach to integration testing where all or most of the units are combined and
tested in one go. This approach is taken when the testing team receives the entire software in
a bundle. So, what is the difference between Big Bang integration testing and system testing?
The former tests only the interactions between the units, while the latter tests the entire
system.

Top Down is an approach to integration testing where top-level units are tested first and
lower-level units are tested step by step after that. This approach is taken when top-down
development is followed. Test stubs are needed to simulate lower-level units, which may not be
available during the initial phases.

System Testing:

It is a level of software testing where the complete, integrated software is tested. The
purpose of this test is to evaluate the system's compliance with the specified requirements.
By analogy with manufacturing a ballpoint pen: the cap, the body, the tail, the ink cartridge
and the ballpoint are produced and unit tested separately, then integration testing is
performed on the assembled parts, and when the complete pen is put together, system testing is
performed.

Regression Testing:

Regression testing is the process of testing changes to computer programs to make sure that the
older programming still works with the new changes. Regression testing is a normal part of the
program development process and, in larger companies, is done by code testing specialists. Test
department coders develop code test scenarios and exercises that will test new units of code after they
have been written. These test cases form what becomes the test bucket. Before a new version of a
software product is released, the old test cases are run against the new version to make sure that all
the old capabilities still work. The reason they might not work is because changing or adding new
code to a program can easily introduce errors into code that is not intended to be changed.

As software is updated or changed, or reused on a modified target, emergence of new faults and/or re-
emergence of old faults is quite common. Sometimes re-emergence occurs because a fix gets
lost through poor revision control practices (or simple human error in revision control). Often, a fix
for a problem will be "fragile" in that it fixes the problem in the narrow case where it was first
observed but not in more general cases which may arise over the lifetime of the software. Frequently,
a fix for a problem in one area inadvertently causes a software bug in another area. Finally, it may
happen that, when some feature is redesigned, some of the same mistakes that were made in the
original implementation of the feature are made in the redesign.

Test Cases for Mood Detection Code:

Sr. No.  Use Case    Description            Actors    Assumptions                                   Result
1        Use Case 1  Check camera           Camera    Camera should be in good condition            Pass
2        Use Case 2  Creation of datasets   Datasets  Datasets should be created                    Pass
3        Use Case 3  Train images           Images    User can train as many images as desired      Pass
4        Use Case 4  Mood identification    User      The user's mood should be identified          Pass
5        Use Case 5  Prediction of links    User      Links should be suggested to the user         Pass
6        Use Case 6  System output          System    The system should give the expected output    Pass

Table 8.1: Test cases for mood detection

Conclusion

The facial expression recognition system presented in this work contributes a resilient face
recognition model based on mapping behavioral characteristics onto physiological biometric
characteristics. The physiological characteristics of the human face relevant to various
expressions, such as happiness, sadness, fear, anger, surprise and disgust, are associated with
geometrical structures that are stored as the base matching template for the recognition
system. The behavioral aspect of the system relates the attitude behind different expressions
as a property base, and the property bases are divided into exposed and hidden categories in
the genes of a genetic algorithm.
The gene training set evaluates the expressional uniqueness of individual faces and provides a
resilient expression recognition model for the field of biometric security. The design of a
novel asymmetric cryptosystem based on biometrics, with features such as hierarchical group
security, eliminates the use of passwords and smart cards required by earlier cryptosystems,
although it needs special hardware support like all other biometric systems. This work promises
a new direction of research in the field of asymmetric biometric cryptosystems, which is highly
desirable in order to do away with passwords and smart cards completely. Experimental analysis
and study show that the hierarchical security structures are effective in geometric shape
identification for physiological traits.

Future Scope
The global emotion detection and recognition market size is projected to grow from USD 19.5
billion in 2020 to USD 37.1 billion by 2026, at a compound annual growth rate (CAGR) of 11.3%
over the forecast period. The major factors driving market growth include the rising need for
speech-based emotion detection systems to analyse emotional states, the adoption of IoT, AI, ML
and deep learning technologies across the globe, growing demand in the automotive AI industry,
a growing need for high operational excellence, and the rising need for socially intelligent
artificial agents.
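
As a quick consistency check on these figures: USD 19.5 billion compounded at 11.3% per year
for six years gives 19.5 × (1.113)^6 ≈ 19.5 × 1.90 ≈ USD 37.1 billion, which matches the
projected 2026 market size.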

References

 C.-H. Wu, J.-C. Lin, W.-B. Liang and K.-C. Cheng, "Hierarchical Modelling of Temporal Course
in Emotional Expression for Speech Emotion Recognition," 2015 Int'l Workshop on Affective
Social Multimedia Computing (ASMMC 2015), Xi'an, China, 2015.

 https://www.marketsandmarkets.com/Market-Reports/emotion

 https://www.iflexion.com/blog/emotion-recognition-software

 https://ieeexplore.ieee.org/

 C.-H. Chen, B. Lennox, R. Jacob, A. Calder, V. Lupson, R. Bisbrown-Chippendale, J. Suckling
and E. Bullmore, "Explicit and implicit facial affect recognition in manic and depressed states
of bipolar disorder: a functional magnetic resonance imaging study," vol. 59, pp. 31-39, 2006.
