A
PROJECT REPORT
on
“Smart Attendance System in Crowded Classroom”
Submitted in Partial Fulfillment of the requirements for the award of the degree of
BACHELOR OF ENGINEERING
IN
COMPUTER SCIENCE AND ENGINEERING
Submitted By
Palash Ghosh 1SJ18CS068
Pichili Sai Charan Reddy 1SJ18CS072
Veera Jaswanth Reddy 1SJ18CS115
Yuvraj Naorem 1SJ18CS124
Carried out at
B G S R&D Centre,
Dept of CSE,
SJCIT
S J C INSTITUTE OF TECHNOLOGY
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
CHIKKABALLAPUR-562 101
2021-2022
DECLARATION
ABSTRACT
The shift to online teaching has affected the learning process of students, requiring many of them to become familiar with a new teaching process and making the use of virtual platforms more common. Many educational centres have come to rely on digital tools such as Discord, Google Meet, Microsoft Teams, Skype and Zoom for tracking students' attendance. The face is the part of the human body that uniquely identifies a person, and by using facial characteristics as a biometric, a face recognition system can be implemented. Attendance marking is one of the most demanding tasks in any organization. In the traditional attendance system, the students are called out by the teachers and their presence or absence is marked accordingly; however, this technique is time-consuming and tedious. In this project, an OpenCV-based face recognition approach is proposed. The model integrates a camera that captures an input image, an algorithm for detecting a face in the input image, encoding and identifying the face, marking the attendance in a spreadsheet and storing it in the system. The training database is created by training the system with the faces of the authorized students; the cropped face images are stored as a database with their respective labels, and the features are extracted using the LBPH algorithm. The primary objective of this project is to create an automated attendance system that can provide attendance information to teachers.
ACKNOWLEDGEMENT
With reverential pranam, we express our sincere gratitude and salutations at the feet of his holiness Paramapoojya Jagadguru Byravaikya Padmabhushana Sri Sri Sri Dr. Balagangadharanatha Maha Swamiji, his holiness Paramapoojya Jagadguru Sri Sri Sri Dr. Nirmalanandanatha Maha Swamiji, and Sri Sri Sri Mangalnath Swamiji, Sri Adichunchanagiri Mutt, for their unlimited blessings.

First and foremost, we wish to express our deep and sincere feelings of gratitude to our institution, Sri Jagadguru Chandrashekaranatha Swamiji Institute of Technology, for providing us an opportunity to complete the Project Work Phase-II successfully.

We extend a deep sense of sincere gratitude to Dr. G T Raju, Principal, S J C Institute of Technology, Chickballapur, for providing an opportunity to complete the Project Work Phase-II.

We extend special, heartfelt and sincere gratitude to Dr. Manjunatha Kumar B H, Head of the Department, Computer Science and Engineering, S J C Institute of Technology, Chickballapur, for his constant support and valuable guidance throughout the Project Work Phase-II.

We convey our sincere thanks to our Project Guide, Dr. Murthy SVN, Associate Professor, Department of Computer Science and Engineering, S J C Institute of Technology, for his constant support, valuable guidance and suggestions during the Project Work Phase-II.

We also feel immense pleasure in expressing deep and profound gratitude to the Project Co-ordinators, Prof. PradeepKumar G M and Prof. Shrihari MR, Assistant Professors, Department of Computer Science and Engineering, S J C Institute of Technology, for their guidance and suggestions on the project work.

Finally, we would like to thank all the faculty members of the Department of Computer Science and Engineering, S J C Institute of Technology, Chickballapur, for their support. We also thank all those who extended their support and co-operation while bringing out this Project Work Phase-II.
TABLE OF CONTENTS
Abstract ii
Acknowledgement iii
Table of Contents iv
1 INTRODUCTION 1-3
1.1 Overview 1
1.4 Objectives 3
1.5 Methodology 3
4.1 Existing System 12
4.1.1 Limitations 12
4.2.1 Advantages 12
6 IMPLEMENTATION 19-25
Source code 20
7 TESTING 26-31
Test Cases 30
9 CONCLUSION AND FUTURE 34
ENHANCEMENT
BIBLIOGRAPHY 35-36
APPENDIX 37-39
Appendix B: Abbreviations 39
LIST OF FIGURES
FIGURE NO. FIGURE TITLE PAGE NO.
5.2 Activity Diagram 15
A1 Register Page 37
A2 Registering Face 37
A3 Trained Images 38
A4 Student Details 38
LIST OF TABLES
TABLE NO. TABLE TITLE PAGE NO.
7.1.4.1 Test Case 1 30
CHAPTER – 1
INTRODUCTION
1.1 Overview
Every day, CCTV systems operate to monitor the inside of buildings for security. The resources of such a system allow developers to build computer vision-based applications that integrate with CCTV. Face recognition (FR) is an excellent biometric technique for identity authentication, and it is possible to apply FR technology for automatic attendance taking at schools. Taking attendance through the existing camera system offers several benefits: it saves time and effort, provides striking evidence for quality assurance and human resource management tasks, and avoids the contact through which infectious diseases spread. The existing attendance-taking system that uses fingerprint recognition faces several challenges due to the large intra-class variability and substantial inter-class similarity mentioned by Dyre and Sumathi. Ngo et al. combined the data from the academic portal with different FR techniques for the task of taking attendance in the classroom, and their results show that the system works smoothly. However, the investment costs for procurement and camera installation at the school are high, and processing a large amount of video is expensive.

In the last decade, crowd counting and localization have attracted much attention from researchers due to their widespread applications, including crowd monitoring, public safety, space design, etc. Many Convolutional Neural Networks (CNNs) have been designed for tackling this task. However, currently released datasets are so small-scale that they cannot meet the needs of supervised CNN-based algorithms. Compared with other real-world datasets, the dataset considered here contains varied illumination scenes and has the largest density range. Besides, a benchmark website has been developed for impartially evaluating the different methods, which allows researchers to submit results on the test set. Based on the proposed dataset, we further describe the data characteristics, evaluate the performance of some mainstream state-of-the-art methods, and analyse the new problems that arise on the new data.
Radio Frequency Identification (RFID) helps to identify large crowds using radio waves. It has high efficiency and hands-free access control, but it has been observed that it can be misused.
• Reduced Storage: The cost of commercial property and the need to store documentation for retrieval and regulatory compliance mean that paper-based project storage competes with people for space within an organization. Scanning projects and integrating them into a project management system can greatly reduce the amount of prime storage space required by paper.
• Flexible Indexing: Indexing paper in more than one way can be done, but it is awkward, costly and time-consuming. Images of projects stored within an attendance management system can be indexed in several different ways simultaneously.
• Improved, faster and more flexible attendance taking: Smart attendance management systems can retrieve files by any word or phrase in the document – known as full-text search – a capability that is impossible with paper.
• Improved Security: An attendance management system can provide better, more flexible control over sensitive projects.
• Disaster Recovery: An attendance management system provides an easy way to back up attendance records for offsite storage and disaster recovery, providing failsafe archives and an effective disaster recovery strategy.
1.4 Objectives
1.5 Methodology
In this project, we use two algorithms: a Haar cascade classifier for face detection and the Local Binary Pattern Histogram (LBPH) algorithm for face recognition. A minimal sketch of these two steps is given after the chapter outline below.
• Introduction
This chapter describes the problem statement, background of the project, motivation, the existing system and its effects, as well as the proposed system with its theoretical outline.
• Literature Survey
Gives a brief overview of the papers and research sources that have been studied to establish an understanding of the topic under consideration.
• System Requirements
Discusses in detail the different kinds of requirements needed to successfully complete the project.
• Expected Outcome
Gives details about the expected outcome of the project, such as the format of the final output; in this case it is the students' attendance data filled into a CSV file, which the admin can download as documentary proof.
• Advantages and Applications
Describes the advantages and applications of the project. It saves time, which is considerably greater with the manual way of taking attendance, and since the record is machine-generated there is less scope for errors.
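As referenced in the methodology above, a minimal sketch of the two algorithms is given here, assuming OpenCV is installed with the contrib modules (which provide cv2.face); the image and model file paths are illustrative.

import cv2

# 1. Detection: Haar cascade (the XML file ships with OpenCV).
detector = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
img = cv2.imread("classroom.jpg")                      # illustrative input frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)

# 2. Recognition: LBPH (model produced earlier by recognizer.train/save).
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read("Trainner.yml")
for (x, y, w, h) in faces:
    label, confidence = recognizer.predict(gray[y:y + h, x:x + w])
    print(label, confidence)                           # lower confidence value = closer match

The same detect-then-recognise pattern appears in full in the implementation chapter.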
1. Title: Max-Margin Object Detection
Author: Davis E. King
Abstract: In particular, the main contribution of this paper is the introduction of a new method, Max-Margin Object Detection (MMOD), for learning to detect objects in images. This method does not perform any sub-sampling, but instead optimizes over all sub-windows. MMOD can be used to improve any object detection method which is linear in the learned parameters, such as HOG or bag-of-visual-word models.
Advantages:
MMOD optimizes the overall accuracy of the entire detector, taking into account information which is typically ignored when training a detector.
Disadvantages:
It does not support more complex scoring functions, such as those obtained by using kernels.
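For illustration only: the dlib library ships an MMOD-trained CNN face detector (model file mmod_human_face_detector.dat, distributed with dlib's model files), which is a convenient way to try the method; the image path below is illustrative.

import cv2
import dlib

# MMOD-trained CNN face detector from dlib's model files.
detector = dlib.cnn_face_detection_model_v1("mmod_human_face_detector.dat")

img = cv2.imread("classroom.jpg")
rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
detections = detector(rgb, 1)          # 1 = upsample once so smaller faces are found
for d in detections:
    r = d.rect
    print(r.left(), r.top(), r.right(), r.bottom(), d.confidence)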
2. Title: Face Recognition-Based Mobile Automatic Classroom Attendance Management System
Author: Samet, Refik, and Muhammed Tanriverdi
Abstract: Classroom attendance check is a contributing factor to student participation and the final success in the courses. Taking attendance by calling out names or passing around an attendance sheet are both time-consuming, and especially the latter is open to easy fraud. As an alternative, RFID, wireless, fingerprint, iris and face recognition-based methods have been proposed, each with its own disadvantages. The present paper aims to propose a face recognition-based mobile automatic classroom attendance management system needing no extra equipment. To this end, a filtering system based on Euclidean distances calculated by three face recognition techniques, namely Eigenfaces, Fisherfaces and Local Binary Pattern, has been developed for face recognition. The proposed system includes three different mobile applications for teachers, students, and parents to be installed on their smartphones to manage and perform the real-time attendance-taking process. The proposed system was tested among students at Ankara University, and the results obtained were very satisfactory.
Advantages:
• The system eliminates the cost of extra equipment, minimizes attendance-taking time, and allows users to access the data anytime and anywhere.
• Smart devices are very user-friendly for performing classroom attendance monitoring.
• Teachers, students, and parents can use the application without any restrictions and in real time.
Disadvantages:
• Detection and recognition processes can be performed on smart devices only once their processor capacity is sufficiently increased.
Advantages:
• Minimizes the manual labour and the pressure on lecturers for accurate marking of the attendance.
• Minimizes the time required for marking attendance and maximizes the time available for the actual teaching process.
Disadvantages:
• It requires a huge amount of space and a large amount of training data.
Advantages:
• It helps to improve the photometric recovery in terms of PSNR/SSIM.
• Provides a solution for accurate geometry estimation directly from very low-resolution (LR) images.
Disadvantages:
• It recognizes a single face only.
5. Title: Trunk-Branch Ensemble Convolutional Neural Networks for Video-Based Face Recognition
Author: C. Ding and D. Tao
Abstract: Human faces in surveillance videos often suffer from severe image blur, dramatic pose variations, and occlusion. In this paper, we propose a comprehensive framework based on trunk-branch ensemble convolutional neural networks (TBE-CNN).
Advantages:
• It improves the accuracy up to 85%.
Disadvantages:
• There is no automated attendance system.
Advantages:
• It improves accuracy in attendance up to 93.7%.
Disadvantages:
• It requires a large dataset and substantial computing resources.
The SRS describes the product, not the project that develops it; hence the SRS serves as a basis for later enhancement of the finished product. The SRS may need to be changed, but it provides a foundation for continued production and evaluation. In simple words, the software requirement specification is the starting point of the software development activity. The SRS translates the ideas in the minds of the clients (the input) into a formal document (the output of the requirements phase). The output of this phase is therefore a set of formally specified requirements, which ideally are complete and consistent, while the input has none of these properties.
RAM : 8 GB
Technology : Python
Tools : Anaconda
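A minimal check of the Python/Anaconda environment, assuming OpenCV is installed with the contrib modules (the cv2.face LBPH recognizer used in the implementation lives there):

import cv2
import numpy as np
import pandas as pd

# cv2.face is provided by opencv-contrib-python, so it must be present
# for the LBPH-based implementation chapter to run.
print("OpenCV :", cv2.__version__)
print("NumPy  :", np.__version__)
print("pandas :", pd.__version__)
print("cv2.face available:", hasattr(cv2, "face"))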
• Accessibility
• Availability
• Backup
• Certification
• Compliance
• Configuration Management
• Documentation
• Disaster Recovery
• Efficiency (resource consumption for a given load)
• Interoperability
The requirement specification for any system can be broadly stated as given below:
4.1.1 Limitations
• The automated system requires a continuous power supply to function. If the power is cut off for, say, 3 to 4 days, the attendance of people cannot be recorded.
• Existing methods do not deal well with accuracy. Although the automated system has some drawbacks, with proper maintenance it can be kept running without errors and its benefits retained.
• Helps in saving the teacher's time in each class.
• Teachers can deliver the lecture without thinking about the manual way of taking attendance, which actually helps improve the effectiveness of teaching.
System "design" is defined as the process of applying the various requirements in a way that permits physical realization. Various design features are followed in developing the system; the design specification describes the features of the system, the components or elements of the system and their appearance to the end-users.
5.1 Modules
• Client Module: This application is run by the teacher; the camera opens and the students' video is captured on screen. The details of each frame are sent to the other modules for processing and analysis against the trained model.
• Server Module: This module is executed to track the details of each student and analyse actual performance. Each frame is sent to the face processing module for checking against the trained model. The server module passes data between the client and the face processing module.
• Face Processing Module: In this module, each frame is taken as input and a shape predictor model is used to compute various facial features (such as the jawline aspect ratio, mouth aspect ratio and head pose). After calculation, these values are sent to the server module. A minimal sketch of this kind of feature computation is given below.
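The exact features used by the face processing module are not spelled out above, so the following is only a sketch: it uses dlib's standard 68-point shape predictor (model file shape_predictor_68_face_landmarks.dat) and one common formulation of the mouth aspect ratio; the frame path is illustrative.

import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def mouth_aspect_ratio(pts):
    # pts holds the 20 mouth landmarks (indices 48-67 of the 68-point model);
    # ratio of the inner-lip opening (62-66) to the inner-lip width (60-64).
    vertical = np.linalg.norm(pts[14] - pts[18])
    horizontal = np.linalg.norm(pts[12] - pts[16])
    return vertical / horizontal

img = cv2.imread("frame.jpg")                  # an illustrative frame from the client module
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
for rect in detector(gray, 0):
    shape = predictor(gray, rect)
    pts = np.array([[shape.part(i).x, shape.part(i).y] for i in range(48, 68)])
    print("mouth aspect ratio:", mouth_aspect_ratio(pts))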
Figure 5.2 Activity Diagram: the admin starts the application; frames are captured and preprocessed; the machine is trained on the registered faces; at recognition time each face is read and checked against the trained data, and when the data matches, the attendance is marked and saved.
SOURCE CODE
import tkinter as tk
from tkinter import Message, Text
import cv2
import os
import shutil
import csv
import numpy as np
from PIL import Image, ImageTk
import pandas as pd
import time
from datetime import datetime
window = tk.Tk()
window.title("Face_Recogniser")
dialog_title = 'QUIT'
dialog_text = 'Are you sure?'
window.configure(background='blue')
window.grid_rowconfigure(0, weight=1)
window.grid_columnconfigure(0, weight=1)

# The entry/label definitions are missing from the extracted listing; minimal
# placeholders for the Id entry, name entry and status label used below:
txt = tk.Entry(window, width=20)
txt.place(x=200, y=60)
txt2 = tk.Entry(window, width=20)
txt2.place(x=200, y=100)
message = tk.Label(window, text="", width=40)
message.place(x=200, y=20)
def clear():
    txt.delete(0, 'end')
    res = ""
    message.configure(text=res)

def clear2():
    txt2.delete(0, 'end')
    res = ""
    message.configure(text=res)

def is_number(s):
    # True if s can be interpreted as a number (used to validate the Id field)
    try:
        float(s)
        return True
    except ValueError:
        pass
    try:
        import unicodedata
        unicodedata.numeric(s)
        return True
    except (TypeError, ValueError):
        pass
    return False
def TakeImages():
    # read the Ids already registered so that duplicates are rejected
    co = ['Id']
    df = pd.read_csv("StudentDetails\\StudentDetails.csv", names=co)
    namess = df['Id']
    Id = txt.get()
    name = txt2.get()
    estest = 1 if Id in namess.astype(str).values else 0
    if estest == 0:
        if is_number(Id) and name.isalpha():
            cam = cv2.VideoCapture(0)
            harcascadePath = "haarcascade_frontalface_default.xml"
            detector = cv2.CascadeClassifier(harcascadePath)
            sampleNum = 0
            while True:
                ret, img = cam.read()
                gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
                faces = detector.detectMultiScale(gray, 1.3, 5)
                for (x, y, w, h) in faces:
                    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
                    # incrementing sample number
                    sampleNum = sampleNum + 1
                    # saving the captured face in the dataset folder TrainingImage
                    cv2.imwrite("TrainingImage\\" + name + "." + Id + '.' + str(sampleNum) + ".jpg",
                                gray[y:y + h, x:x + w])
                    # display the frame
                    cv2.imshow('frame', img)
                # wait for 100 milliseconds
                if cv2.waitKey(100) & 0xFF == ord('q'):
                    break
                # break once more than 200 samples have been collected
                elif sampleNum > 200:
                    break
            cam.release()
            cv2.destroyAllWindows()
            res = "Images Saved for ID : " + Id + " Name : " + name
            row = [Id, name]
            with open('StudentDetails\\StudentDetails.csv', 'a+') as csvFile:
                writer = csv.writer(csvFile)
                writer.writerow(row)
            message.configure(text=res)
        else:
            if is_number(Id):
                res = "Enter Alphabetical Name"
                message.configure(text=res)
            if name.isalpha():
                res = "Enter Numeric Id"
                message.configure(text=res)
    else:
        res = "Id Already Exists"
        message.configure(text=res)
def TrainImages():
    recognizer = cv2.face.LBPHFaceRecognizer_create()
    harcascadePath = "haarcascade_frontalface_default.xml"
    detector = cv2.CascadeClassifier(harcascadePath)
    faces, Ids = getImagesAndLabels("TrainingImage")
    recognizer.train(faces, np.array(Ids))
    recognizer.save("TrainingImageLabel\\Trainner.yml")
    res = "Image Trained"
    message.configure(text=res)

def getImagesAndLabels(path):
    # get the path of all the files in the folder
    imagePaths = [os.path.join(path, f) for f in os.listdir(path)]
    # the rest of this function is truncated in the listing; the usual body
    # loads each image and recovers the Id from the file name (name.Id.sample.jpg)
    faces = []
    Ids = []
    for imagePath in imagePaths:
        pilImage = Image.open(imagePath).convert('L')
        imageNp = np.array(pilImage, 'uint8')
        Id = int(os.path.split(imagePath)[-1].split(".")[1])
        faces.append(imageNp)
        Ids.append(Id)
    return faces, Ids
def TrackImages():
    recognizer = cv2.face.LBPHFaceRecognizer_create()
    recognizer.read("TrainingImageLabel\\Trainner.yml")
    harcascadePath = "haarcascade_frontalface_default.xml"
    faceCascade = cv2.CascadeClassifier(harcascadePath)
    df = pd.read_csv("StudentDetails\\StudentDetails.csv")
    cam = cv2.VideoCapture(0)
    font = cv2.FONT_HERSHEY_SIMPLEX
    col_names = ['Id', 'Name', 'Date', 'Time']
    attendance = pd.DataFrame(columns=col_names)
    while True:
        # capture loop; the middle of this function is truncated in the listing
        # and is restored here along the usual LBPH tracking pattern
        ret, im = cam.read()
        gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
        faces = faceCascade.detectMultiScale(gray, 1.2, 5)
        for (x, y, w, h) in faces:
            cv2.rectangle(im, (x, y), (x + w, y + h), (255, 0, 0), 2)
            Id, conf = recognizer.predict(gray[y:y + h, x:x + w])
            if conf < 50:
                # recognised face: record Id, name and timestamp
                local = datetime.now()
                date = local.strftime("%Y-%m-%d")
                timeStamp = local.strftime("%H:%M:%S")
                name = df.loc[df['Id'] == Id]['Name'].values
                tt = str(Id) + "-" + str(name)
                attendance.loc[len(attendance)] = [Id, name, date, timeStamp]
            else:
                Id = 'Unknown'
                tt = str(Id)
            if conf > 75:
                # unrecognised faces can be saved for later review
                noOfFile = len(os.listdir("ImagesUnknown")) + 1
                # cv2.imwrite("ImagesUnknown\\Image" + str(noOfFile) + ".jpg",
                #             im[y:y + h, x:x + w])
            cv2.putText(im, str(tt), (x, y + h), font, 1, (255, 255, 255), 2)
        attendance = attendance.drop_duplicates(subset=['Id'], keep='first')
        cv2.imshow('im', im)
        if cv2.waitKey(1) == ord('q'):
            cam.release()
            cv2.destroyAllWindows()
            break
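# The button definitions that connect the GUI to the functions above are not
# present in the extracted listing; a minimal, illustrative wiring (button
# labels and coordinates are assumptions):
clearButton = tk.Button(window, text="Clear", command=clear)
clearButton.place(x=950, y=60)
takeImg = tk.Button(window, text="Take Images", command=TakeImages)
takeImg.place(x=200, y=300)
trainImg = tk.Button(window, text="Train Images", command=TrainImages)
trainImg.place(x=400, y=300)
trackImg = tk.Button(window, text="Track Images", command=TrackImages)
trackImg.place(x=600, y=300)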
window.mainloop()
7.1 Testing
Testing is the process of evaluating a system or its component(s) with the intent of finding whether it satisfies the specified requirements or not. In simple words, testing is executing a system in order to identify any gaps, errors, or missing requirements relative to the actual requirements. The main levels and types of testing are listed below, and a minimal unit-test sketch follows the list.
• Unit Testing
• Integration Testing
• Regression Testing
• Smoke Testing
• Alpha Testing
• Beta Testing
• System Testing
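As an illustration of the unit-testing level, a small sketch using Python's unittest module against the is_number helper from the implementation chapter (the test names and values are illustrative):

import unittest

# a trimmed copy of the helper from the implementation chapter
def is_number(s):
    try:
        float(s)
        return True
    except ValueError:
        return False

class TestIsNumber(unittest.TestCase):
    def test_integer_id_is_accepted(self):
        self.assertTrue(is_number("68"))

    def test_decimal_value_is_accepted(self):
        self.assertTrue(is_number("3.5"))

    def test_alphabetic_input_is_rejected(self):
        self.assertFalse(is_number("Palash"))

    def test_empty_string_is_rejected(self):
        self.assertFalse(is_number(""))

if __name__ == "__main__":
    unittest.main()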
The black box above can be any software system you want to test, for example an operating system like Windows, a website like Google, a database like Oracle, or even your own custom application. Under black box testing, you test these applications by focusing only on the inputs and outputs, without knowing their internal code implementation.
Black box testing - Steps
Here are the generic steps followed to carry out any type of Black Box Testing.
Remarks: Pass.
• System testing is the first level of testing at which the application is tested as a whole.
• The application is tested thoroughly to verify that it meets the functional and
technical specifications.
• The application is tested in an environment that is very close to the production
environment where the application will be deployed.
• System testing enables us to test, verify, and validate both the business requirements
as well as the application architecture.
• The system testing results are shown in the tables below.
Remarks: Pass.
In this section, we compare the performance of recognising student faces using the Haar cascade and LBPH with the voiceprint recognition system proposed by Yang et al., the fingerprint recognition system proposed by Adeniji et al. and the RFID-based system proposed by Rjeib et al. in terms of technologies, time consumption, data accuracy, cost and privacy sensitivity. The results are shown in Table 8.1.
From Table 8.1, first of all, it can be seen that the average time consumption of the proposed system is better than that of Adeniji's system. This is because a biometrics-based system needs to collect the biological characteristics of students on site. Such recognition technologies usually require specific devices and cannot be used on mobile devices; moreover, they require students to line up to provide biological information every day, which takes more time. In the proposed system, tasks such as location submission and crowdsensing are relatively simple. Additionally, the proposed system takes user privacy into account. Compared to the system presented by Adeniji, the proposed system can ensure high accuracy of attendance data through mutual verification between students. The proposed system only needs an application installed on the teacher's personal computer, so the deployment cost is much lower. In summary, compared to the other systems, the proposed system has the advantages of low time consumption and low deployment cost, and it does not involve personal data. It is suitable for attendance checking with a large number of students.
Machine learning methods were used to evaluate the students' observable actions in the classroom teaching system. The evaluation was created right after the live feed review. Several models were produced, and these models were tested using mAP to decide which model is appropriate for object detection. mAP (mean average precision) is a common measure used to determine the precision of the objects being detected. The experimental testing shows that the model accuracy is 88.606%. Tests indicate that this method offers a reasonable speed of identification and positive outcomes for detecting student faces based on observable student actions in classroom instruction. The suggested approach is versatile and adaptable to different situations: larger rooms with more students can be handled by using a better camera, such as an IP camera, for continuously capturing images of the students, detecting the faces in the images and comparing the detected faces with the database. It can also be adapted through larger input picture dimensions, anchor box dimensions suited to different situations and further training.
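For reference, the metric used above is defined as follows: the average precision of a class is the area under its precision-recall curve, and mAP is the mean over the C classes:

AP_c = \int_0^1 p_c(r) \, dr, \qquad mAP = \frac{1}{C} \sum_{c=1}^{C} AP_c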
The facial recognition database can be improved by adding more images and varied poses of the students to make recognition foolproof. The web-based architecture can be extended by maintaining the databases on a remote server so that the application is accessible via the Internet.
BIBLIOGRAPHY
1. T. Ahonen, A. Hadid, and M. Pietikinen. Face description with local binary patterns:
application to face recognition. IEEE Trans Pattern Anal Mach Intell, 28(12):2037–2041,
dec 2006.
2. B. Amos, B. Ludwiczuk, and M. Satyanarayanan. Openface: A general-purpose face
recognition library with mobile applications. Technical report, CMU-CS-16-118, CMU
School of Computer Science, 2016.
3. P. Assarasee, W. Krathu, T. Triyason, V. Vanijja, and C. Arpnikanondt. Meerkat: A
framework for developing presence monitoring software based on face recognition. In 2017
10th International Conference on Ubi-media Computing and Workshops (Ubi- Media),
pages 1–6, Aug 2017.
4. S. Baker and T. Kanade. Hallucinating faces. In Automatic Face and Gesture Recognition,
2000. Proceedings. Fourth IEEE International Conference on, pages 83–88, 2000.
5. S. Biswas, K. Bowyer, and P. Flynn. Multidimensional scaling for matching low- resolution
face images. IEEE Transactions on Pattern Analysis and Machine Intelligence,
34(10):2019– 2030, 2012.
6. A. Bulat and G. Tzimiropoulos. Super-FAN: Integrated facial landmark localization and
super-resolution of real-world low resolution faces in arbitrary poses with GANs. In 2018
IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2018.
7. L. Chen, R. Hu, Z. Han, Q. Li, and Z. Lu. Face super resolution based on parent patch prior
for VLQ scenarios. Multimed Tools Appl, 76(7):10231–10254, apr 2017.
8. Y. Chen, Y. Tai, X. Liu, C. Shen, and J. Yang. FSRNet: End-to-end learning face
superresolution with facial priors. In 2018 IEEE Conference on Computer Vision and
Pattern Recognition (CVPR). IEEE, 2018.
9. S. Chintalapati and M. V. Raghunadh. Automated attendance management system based on
face recognition algorithms. In 2013 IEEE International Conference on Computational
Intelligence and Computing Research, pages 1–5. IEEE, dec 2013.
10. G. G. Chrysos and S. Zafeiriou. Deep face deblurring. In 2017 IEEE Conference on
Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2017.
11. A. Creswell, T. White, V. Dumoulin, K. Arulkumaran, B. Sengupta, and A. A. Bharath. Generative adversarial networks: an overview. IEEE Signal Process Mag, 35(1):53–65, jan 2018.
13. N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In
Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society
Conference on, volume 1, pages 886–893. IEEE, 2005.
14. C. Ding and D. Tao. Trunk-branch ensemble convolutional neural networks for video-
based face recognition. IEEE Trans Pattern Anal Mach Intell, 40(4):1002–1014, apr 2018.
15. S. Dodge and L. Karam. Understanding how image quality affects deep neural networks.
In 2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX),
pages 1–6. IEEE, jun 2016.
16. J. Flusser, S. Farokhi, C. Hoschl, T. Suk, B. Zitova, and M. Pedone. Recognition of images degraded by Gaussian blur. IEEE Trans Image Process, dec 2015.
17. R. Fu, D. Wang, D. Li, and Z. Luo. University classroom attendance based on deep
learning. In 2017 10th International Conference on Intelligent Computation Technology
and Automation (ICICTA), pages 128–131. IEEE, oct 2017.
18. R. Gonzalez and R. Woods. Digital Image Processing. Pearson, Prentice Hall, third
edition, 2008.
19. R. Gopalan, S. Taheri, P. Turaga, and R. Chellappa. A blurrobust descriptor with
applications to face recognition. IEEE Trans Pattern Anal Mach Intell, 34(6):1220–1226,
jun 2012.
20. M. Vladoiu and Z. Constantinescu, “Learning during covid-19 pandemic: Online
education community, based on discord,” in 2020 19th RoEduNet Conference:
Networking in Education and Research (RoEduNet), 2020, pp. 1–6.
21. C. Marconi, C. Brovetto, I. Mendez, and M. Perera, “Learning through videoconference.
research on teaching quality,” in 2018 XIII Latin American Conference on Learning
Technologies (LACLO), 2018, pp. 37–40.
22. F. Lu, X. Chen, X. Ma, Z. Liu, and Y. Chen, “The exploration and practice of it solutions
for online classes in higher education during covid-19 pandemic,” in 2020 International
Symposium on Educational Technology (ISET), 2020, pp. 298–302.
23. J. Nainggolan, G. Christian, K. Adari, Y. Bandung, K. Mutijarsa, and L. B. Subekti,
“Design and implementation of virtual class box 5.0 for distance learning in rural areas,”
in 2016 8th International Conference on Information Technology and Electrical
Engineering (ICITEE), 2016, pp. 1–6.
Appendix B: Abbreviations