Training Report
on
Face Recognition & Intrusion Alert System
Submitted By
Manasvi Aggarwal
B. Tech (IVth Year)
Electronics and Communication Engineering
Bhagwan Parshuram Institute of Technology
Sec.17(opposite to Sec.11), Rohini, New Delhi
Training Period
(18 July to 10 September 2022)
Assessment of guide
This is to certify that the project compiled by Ms. Manasvi Aggarwal, entitled
"Face Recognition & Intrusion Alert System", is an organized work carried out by
her under our supervision and guidance during the period 18 July to 10
September 2022. This report of 30 pages does not contain any confidential
information pertaining to DRDO. We wish her a bright future.
Ms. Preeti Verma, Sc 'D', Project Guide
Mr. Bhagwan Jee Mishra, Technical Officer 'C', Project Guide
ACKNOWLEDGEMENT
Centre for Fire, Explosive and Environment Safety (CFEES) is one of the premier
establishments of the Defence Research & Development Organization (DRDO). This
establishment is committed to providing state-of-the-art products and services
to its customers in the areas of explosive, fire and environmental safety
through research and development, innovation and teamwork, followed up by
continual improvement based on users' perception.
I am highly obliged to Shri Rajiv Narang, Director CFEES, Delhi, for allowing
me to associate with this esteemed establishment as a summer trainee in the
'Aircraft Protection Lab' for the period from 18 July to 10 September 2022.
I extend my heartfelt gratitude to Ms. Preeti Verma, Sc 'D', for her unflagging
guidance throughout the progress of this project as well as her valuable
contribution to the preparation and compilation of the text. I am thankful to
all those who helped me in the successful completion of my training at
CFEES, Delhi.
Manasvi Aggarwal
B. Tech (IVth Year)
INDEX
1.0 Introduction
2.0 Scope of Work
3.0 Methodology of Work
4.0 Hardware Description
5.0 Software Description
6.0 Face Recognition Operations
6.1 Face Detection
6.2 Face Analysis
6.3 Image to Data Conversion
6.4 Match Finding
7.0 Image Processing
8.0 Machine Learning
9.0 About the Project
9.1 Face Recognition Approach
9.1.1 Libraries Used
9.1.2 Getting the Data
9.1.3 Find Faces Locations and Encodings
9.1.4 How to Find Matches
9.2 Sending Alert WhatsApp Message
9.2.1 Libraries Used
9.2.2 Sending the Message
9.3 Marking Attendance
9.4 The Main Code
10.0 Conclusion
11.0 Future Scope
12.0 References
ABOUT CFEES
Centre for Fire, Explosive and Environment Safety (CFEES) is one of the
premier establishments of the Defence Research & Development Organization
(DRDO). It comes under the SAM (Simulation Analysis and Modelling) cluster of
DRDO labs.
List of Figures
1.0 Introduction
Current technology amazes people with innovations that not only make life
simpler but also more convenient. Face recognition has over time proven to
be one of the least intrusive and fastest forms of biometric verification.
Face images can be captured even without the user's knowledge and can therefore
be used for security-based applications like criminal detection, face tracking,
airport security, and forensic surveillance systems. Face recognition involves
capturing face images from a video or a surveillance camera and comparing them
with a stored database. The system is first trained on known images, which are
classified into known classes and stored in the database; when a test image
is given to the system, it is classified and compared with the stored database.
2.0 Scope of Work
In real time, the project extends its application to the Defence field, employee
attendance and the recognition of criminals through face recognition. In the
modern world, security is a major concern in the fields of Defence and crime
prevention, and this project addresses those concerns in an efficient manner.
The project uses computer vision to recognize known and unknown persons and
sends a fast intrusion alert to the user: the owner is informed through
WhatsApp of trespassing by an unknown person in a restricted area. MNCs and
government agencies are already using face recognition for attendance purposes.
Further, its real-time application extends to finding a criminal in a public
place using previous image records of the criminal.
In this advanced world, the project is applicable wherever a camera is
required, so by default it extends its application to a wide range of places.
4.0 Hardware Description
The Raspberry Pi is a low cost, credit-card sized computer that plugs into a computer
monitor or TV, and uses a standard keyboard and mouse. It is a capable little device
that enables people of all ages to explore computing, and to learn how to program in
languages like Scratch and Python. It’s capable of doing everything you’d expect a
desktop computer to do, from browsing the internet and playing high-definition video,
to making spreadsheets, word-processing, and playing games.
What’s more, the Raspberry Pi has the ability to interact with the outside world, and
has been used in a wide array of digital maker projects, from music machines and
parent detectors to weather stations and tweeting birdhouses with infra-red cameras.
We want to see the Raspberry Pi being used by kids all over the world to learn to
program and understand how computers work.
5.0 Software Description
➢ Python (programming language)
Rather than building all of its functionality into its core, Python was designed to be
highly extensible via modules. This compact modularity has made it particularly
popular as a means of adding programmable interfaces to existing applications. Van
Rossum's vision of a small core language with a large standard library and easily
extensible interpreter stemmed from his frustrations with ABC, which espoused the
opposite approach.
Python strives for a simpler, less-cluttered syntax and grammar while giving
developers a choice in their coding methodology. In contrast to Perl's "there is more
than one way to do it" motto, Python embraces a "there should be one— and preferably
only one—obvious way to do it" philosophy.
➢ PyCharm IDE
PyCharm is a dedicated Python Integrated Development Environment (IDE)
providing a wide range of essential tools for Python developers, tightly integrated to
create a convenient environment for productive Python, web, and data
science development.
6.0 Face Recognition Operations
Face recognition generally proceeds through four operations: face detection,
face analysis, conversion of the image to data, and match finding (Sections
6.1 to 6.4). In the face analysis step, the photo of the face is captured and
analyzed. Most facial recognition relies on 2D images rather than 3D because
2D images are more convenient to match against the database. Facial recognition
software analyzes features such as the distance between your eyes or the shape
of your cheekbones.
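As a rough illustration of this analysis step, the face_recognition library
(used later in this report) can return the landmark points of a detected face,
from which measurements such as the distance between the eyes can be computed.
This is only a sketch; the image file name and the derived measurement are
illustrative assumptions.

import face_recognition
import numpy as np

# "person.jpg" is a hypothetical sample image
image = face_recognition.load_image_file("person.jpg")
landmarks = face_recognition.face_landmarks(image)   # one dictionary of landmark points per face

for face in landmarks:
    # approximate the centre of each eye from its landmark points
    left_eye = np.mean(face["left_eye"], axis=0)
    right_eye = np.mean(face["right_eye"], axis=0)
    eye_distance = np.linalg.norm(left_eye - right_eye)
    print(f"Distance between eye centres: {eye_distance:.1f} pixels")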
6.1 Face Detection:
Face detection is a great feature for cameras. When the camera can automatically
pick out faces, it can make sure that all the faces are in focus before it takes the
picture.
To find faces in an image, we’ll start by making our image black and white
because we don’t need color data to find faces. Our goal is to figure out how dark
the current pixel is compared to the pixels directly surrounding it. Then we want
to draw an arrow showing in which direction the image is getting darker:
Fig 6 – HOG Face generated pattern
Using this technique, we can now easily find faces in any image.
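As a sketch of how this looks in practice, the face_recognition library exposes
the HOG-based detector described above through its face_locations() function;
the input file name below is an illustrative assumption.

import face_recognition

image = face_recognition.load_image_file("group.jpg")          # hypothetical input image
face_locations = face_recognition.face_locations(image, model="hog")

for top, right, bottom, left in face_locations:
    print(f"Face found at top={top}, right={right}, bottom={bottom}, left={left}")
print(f"Total faces detected: {len(face_locations)}")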
6.3 Image to Data Conversion:
What we need is a way to extract a few basic measurements from each face.
Then we could measure our unknown face the same way and find the known
face with the closest measurements. For example, we might measure the
size of each ear, the spacing between the eyes, the length of the nose, etc.
It turns out that the measurements that seem obvious to us humans (like eye
color) don’t really make sense to a computer looking at individual pixels in
an image. Researchers have discovered that the most accurate approach is to
let the computer figure out the measurements to collect itself. Deep learning
does a better job than humans at figuring out which parts of a face are
important to measure.
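A minimal sketch of this step using the face_recognition library, which returns
the 128 learned measurements (the "encoding") for each detected face; the image
file name is an illustrative assumption.

import face_recognition

image = face_recognition.load_image_file("known_person.jpg")    # hypothetical image
encodings = face_recognition.face_encodings(image)

if encodings:
    encoding = encodings[0]      # a 128-dimensional NumPy array
    print(encoding.shape)        # (128,)
    print(encoding[:5])          # the first few learned measurements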
Fig 10 – Generated Equations for the known face
6.4 Match Finding:
This last step is actually the easiest step in the whole process. All we have to do
is find the person in our database of known people who has the closest
measurements to our test image.
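A toy sketch of this matching step: pick the known face whose measurements are
closest to the test face. The three-number vectors and names below are
stand-ins for real 128-measurement encodings.

import numpy as np

known_encodings = np.array([[0.10, 0.40, 0.90],    # "Alice"
                            [0.80, 0.20, 0.30]])   # "Bob"
known_names = ["Alice", "Bob"]
test_encoding = np.array([0.12, 0.38, 0.88])

distances = np.linalg.norm(known_encodings - test_encoding, axis=1)   # distance to each known face
best_index = int(np.argmin(distances))
print(known_names[best_index], round(float(distances[best_index]), 3))   # the closest match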
7.0 Image Processing
Image processing by computers involves the process of Computer Vision. It deals
with the high-level understanding of digital images or videos. The requirement is
to automate tasks that the human visual system can do, so a computer should
be able to recognize objects such as a human face, a lamppost or even a statue.
Image reading
The computer reads any image as a range of values between 0 and 255. For any
color image there are three primary colors: red, green, and blue. A matrix is
formed for each primary color, and together these matrices provide the pixel
values for the individual R, G, B channels. Each element of a matrix gives the
intensity (brightness) of the corresponding pixel.
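A brief sketch of this with OpenCV, assuming a hypothetical image file
"sample.jpg":

import cv2

img = cv2.imread("sample.jpg")       # OpenCV loads a color image as B, G, R matrices
print(img.shape)                     # (height, width, 3): one matrix per color channel
print(img.dtype)                     # uint8, so every element lies between 0 and 255

blue, green, red = cv2.split(img)    # the three primary-color matrices
print(red[0, 0], green[0, 0], blue[0, 0])   # brightness of the top-left pixel in each channel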
OpenCV:
OpenCV (Open Source Computer Vision Library) is an open-source computer vision
and machine learning library; in Python it is used through the cv2 module and
provides functions for reading, writing, and processing images and video.
PIL/Pillow:
PIL stands for Python Imaging Library, and Pillow is the friendly PIL fork by
Alex Clark and Contributors. It is one of the most powerful image libraries and
supports a wide range of image formats like PPM, JPEG, TIFF, GIF, PNG, and BMP.
It can help you perform several operations on images like rotating, resizing,
cropping, and grayscaling; a short sketch of these operations appears after
this list of libraries.
NumPy:
With this library you can also perform simple image techniques, such as
flipping images, extracting features, and analyzing them.
Face Recognition:
Recognize and manipulate faces from Python or from the command line with
the world’s simplest face recognition library.
Mahotas:
It is a computer vision and image processing library with more than 100
functions. Many of its algorithms are implemented in C++ for speed, and Mahotas
is largely an independent module with minimal dependencies.
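As promised under PIL/Pillow above, here is a short sketch of those operations;
the input and output file names are illustrative assumptions.

from PIL import Image

img = Image.open("photo.jpg")              # hypothetical input image
rotated = img.rotate(90)                   # rotate by 90 degrees
resized = img.resize((200, 200))           # resize to 200 x 200 pixels
cropped = img.crop((0, 0, 100, 100))       # crop the box (left, upper, right, lower)
gray = img.convert("L")                    # convert to grayscale
gray.save("photo_gray.jpg")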
8.0 Machine learning
Every machine learning algorithm takes a dataset as input and learns from that
data: the algorithm is trained on the provided inputs and outputs, identifies
the patterns in the data, and produces the desired model.
For instance, to identify whose face is present in a given image, multiple things
can be looked at as a pattern:
• Height/width of the face.
• Height and width may not be reliable, since the image could be
rescaled to a smaller size. However, even after rescaling, what
remains unchanged are the ratios: the ratio of the height of the
face to the width of the face won't change.
• Color of the face.
• Width of other parts of the face like lips, nose, etc.
There is a pattern involved: different faces have different dimensions like the
ones above, and similar faces have similar dimensions. Machine learning
algorithms only understand numbers, so representing a face numerically is the
challenging part. This numerical representation of a "face" (or an element in
the training set) is termed a feature vector. A feature vector comprises
various numbers in a specific order.
As a simple example, we can map a “face” into a feature vector which can
comprise various features like:
• Height of face (cm)
• Width of the face (cm)
• Average color of face (R, G, B)
• Width of lips (cm)
• Height of nose (cm)
Essentially, given an image, we can convert it into a feature vector like:
Height of face (cm): 23.1
Width of the face (cm): 15.8
Average color of face (R, G, B): (255, 224, 189)
Width of lips (cm): 5.2
Height of nose (cm): 4.4
So, the image is now a vector that could be represented as (23.1, 15.8, 255,
224, 189, 5.2, 4.4). There could be countless other features that could be
derived from the image, for instance, hair color, facial hair, spectacles, etc.
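As a small sketch, the example face above can be written as a feature vector
and compared with a second, hypothetical face by a simple distance:

import numpy as np

face_a = np.array([23.1, 15.8, 255, 224, 189, 5.2, 4.4])   # the vector from the table above
face_b = np.array([22.9, 15.5, 250, 220, 185, 5.0, 4.5])   # another, hypothetical face

distance = np.linalg.norm(face_a - face_b)
print(f"Distance between the two faces: {distance:.2f}")    # smaller means more similar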
Machine learning performs two major functions in face recognition technology.
These are given below:
1. Deriving the feature vector: it is difficult to manually list down all of
the features because there are just so many. A machine learning
algorithm can intelligently learn many such features. For
instance, a complex feature could be the ratio of the height of the
nose to the width of the forehead.
2. Matching algorithms: Once the feature vectors have been obtained, a
Machine Learning algorithm needs to match a new image with the
set of feature vectors present in the corpus.
Students often confuse machine learning with artificial intelligence. Machine
learning, a fundamental concept of AI research since the field's inception, is
the study of computer algorithms that improve automatically through experience.
The mathematical analysis of machine learning algorithms and their performance
is a branch of theoretical computer science known as computational learning
theory.
9.0 About the project
Facial Recognition is a category of biometric software that maps an individual’s
facial features and stores the data as a face print. The software uses deep learning
algorithms to compare a live captured image to the stored face print to verify
one’s identity. Image processing and machine learning are the backbones of this
technology. Face recognition has received substantial attention from
researchers due to its use in various security applications such as airport
security, criminal detection, face tracking, forensics, etc. Compared to other
biometric traits like palm print, iris, and fingerprint, face biometrics can be
non-intrusive.
3. Face Recognition: built using dlib's state-of-the-art face recognition
with deep learning. The model has an accuracy of 99.38% on the Labelled
Faces in the Wild benchmark. To install this library, you need to run the
following code in terminal:
pip3 install face_recognition
4. Cmake: is used to control the software compilation process using simple
platform and compiler independent configuration files, and generate
native make files and workspaces that can be used in the compiler
environment of your choice. The suite of CMake tools were created by
Kitware in response to the need for a powerful, cross-platform build
environment for open-source projects such as ITK and VTK. To install
this library, you need to run the following code in terminal:
pip install cmake
5. Dlib: is a toolkit for making real world machine learning and data
analysis applications. To install this library, you need to run the
following code in terminal:
pip install dlib
6. OS: the os module in Python provides functions for interacting with the
operating system. It comes under Python's standard utility modules, so no
separate installation is required. The module provides a portable way of
using operating-system-dependent functionality; the os and os.path
modules include many functions to interact with the file system.
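A sketch of how the os module is typically used in this kind of project to get
the data (the known-face images) from a folder; the folder name
'ImagesAttendance' and the list names images/classNames are illustrative
assumptions.

import os
import cv2

path = 'ImagesAttendance'            # hypothetical folder of known-face images
images = []
classNames = []

for file_name in os.listdir(path):   # every file in the folder
    img = cv2.imread(os.path.join(path, file_name))
    if img is not None:
        images.append(img)
        classNames.append(os.path.splitext(file_name)[0])   # file name without its extension

print(classNames)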
9.1.3 Find Faces Locations and Encodings
In this step we will use the true functionality of the face recognition library.
First, we will find the faces in our images. This is done using HOG (Histogram
of Oriented Gradients) at the backend. Once we have the faces, they are warped
to remove unwanted rotations. Then the image is fed to a pretrained neural
network that outputs 128 measurements that are unique to that particular face.
The parts that the model measures are not explicitly known, as this is something
the model learned by itself when it was trained. Luckily for us, all this is
done in just two lines of code. Once we have the face locations and the
encodings, we can draw rectangles around our faces.
Now that we have a list of images, we can iterate through them and create a
corresponding encoded list for known faces. To do this we will create a
function. As earlier, we will first convert each image into RGB and then find
its encoding using the face_encodings() function. Then we will append each
encoding to our list.
import cv2
import face_recognition

def findEncodings(images):
    encodeList = []
    for img in images:
        # face_encodings() expects RGB images; OpenCV loads them as BGR
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        encode = face_recognition.face_encodings(img)[0]
        encodeList.append(encode)
    return encodeList
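A brief, hypothetical usage of the function above, encoding every known image
once before the camera loop starts (the images list is assumed to have been
loaded as sketched earlier):

encodeListKnown = findEncodings(images)
print('Encoding complete:', len(encodeListKnown), 'known faces')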
9.1.4 How to Find Matches
Once we have the list of face distances, we can find the minimum one, as this
would be the best match.
matchIndex = np.argmin(faceDis)
Now, based on the index value, we can determine the name of the person and
display it on the original image.
if faceDis[matchIndex] < 0.50:
    print(faceDis[matchIndex])
    name = className[matchIndex].upper()
else:
    name = "Unknown"
print(name)
# markAttendance(name)
sendmail(name)
read_email_from_gmail()

# scale the face location back up (the *4 undoes an earlier downscaling of the frame)
y1, x2, y2, x1 = faceLoc
y1, x2, y2, x1 = y1 * 4, x2 * 4, y2 * 4, x1 * 4
cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
cv2.rectangle(img, (x1, y2 - 35), (x2, y2), (0, 255, 0), cv2.FILLED)
cv2.putText(img, name, (x1 + 6, y2 - 6), cv2.FONT_HERSHEY_DUPLEX, 1.0,
            (255, 255, 255), 1)
All this does is check whether the distance to the closest known face is less
than 0.5. If it is not, the person is unknown, so we set the name to "Unknown"
and do not mark the attendance.
9.3 Marking Attendance
from datetime import datetime

def markAttendance(name):
    with open('Attendance.csv', 'r+') as f:
        myDataList = f.readlines()
        nameList = []
        for line in myDataList:
            entry = line.split(',')
            nameList.append(entry[0])
        # record each name only once, together with the current time
        if name not in nameList:
            now = datetime.now()
            dt_string = now.strftime("%H:%M:%S")
            f.writelines(f'{name},{dt_string}\n')
➢ ATTENDANCE RECORD
9.4 The main Code
import numpy as np
import face_recognition as fr
import cv2
import pywhatkit as pwt
import time

video_capture = cv2.VideoCapture(0)

# load the known face and compute its encoding once, before the loop
manasvi_image = fr.load_image_file("Manasvi.jpg")
manasvi_face_encoding = fr.face_encodings(manasvi_image)[0]

known_face_encodings = [manasvi_face_encoding]
known_face_names = ["Manasvi"]

while True:
    ret, frame = video_capture.read()

    # OpenCV gives BGR frames; reverse the channels to get RGB
    rgb_frame = frame[:, :, ::-1]

    face_locations = fr.face_locations(rgb_frame)
    face_encodings = fr.face_encodings(rgb_frame, face_locations)

    for (top, right, bottom, left), face_encoding in zip(face_locations, face_encodings):

        matches = fr.compare_faces(known_face_encodings, face_encoding)
        name = "Unknown"

        face_distances = fr.face_distance(known_face_encodings, face_encoding)

        best_match_index = np.argmin(face_distances)
        if matches[best_match_index]:
            name = known_face_names[best_match_index]
        else:
            # intrusion alert: schedule a WhatsApp message for the next minute
            pwt.sendwhatmsg("+919205254611",
                            "UNKNOWN PERSON ENTERED",
                            int(time.strftime("%H")),
                            int(time.strftime("%M")) + 1)
            print("msg sent")

        cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)
        cv2.rectangle(frame, (left, bottom - 35), (right, bottom), (0, 0, 255), cv2.FILLED)
        font = cv2.FONT_HERSHEY_SIMPLEX
        cv2.putText(frame, name, (left + 6, bottom - 6), font, 1.0, (255, 255, 255), 1)

    # display the annotated frame and stop the loop when 'q' is pressed
    cv2.imshow('Video', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

video_capture.release()
cv2.destroyAllWindows()
10.0 Conclusion
AI is at the center of a new enterprise to build computational models of
intelligence. The main assumption is that intelligence (human or otherwise)
can be represented in terms of symbol structures and symbolic operations
which can be programmed in a digital computer.
11.0 Future Scope
The scope of Artificial Intelligence in India is still at the adoption stage,
but slowly it is being used to find smart solutions to modern problems in
almost all the major sectors such as agriculture, healthcare, education and
infrastructure, transport, cyber security, banking, manufacturing, business,
hospitality, and entertainment.
Around the home, security systems are also turning to facial recognition both
to improve security in and around the home and to improve access and create a
more seamless experience, especially when deployed in smart home or building
developments.
Companies like Netatmo, Netgear, Honeywell and Ooma have home security
systems with facial recognition incorporated, helping to identify people when
they arrive at your home as well as detecting potential intruders when you are
away from the home.
Honeywell has partnered with Amazon’s Alexa to offer a great solution for those
looking to create a smarter home.
Taking things one step further, Google Nest Cam IQ watches over your property
24×7 and can detect people from 50m away. The system can recognize familiar
faces and you can pre-set actions for those visitors such as opening a gate or front
door.
While facial recognition is undoubtedly changing the world in which we live, that
world is also changing the way facial recognition is being deployed around the
world.
Regarding our project, an app-based smart monitoring system can be developed
based on our needs.
12.0 References
1. https://medium.com/@ageitgey/machine-learning-is-fun-part-4-modern-face-recognition-with-deep-learning-c3cffc121d78
2. https://www.careers360.com/courses-certifications/articles/scope-of-artificial-intelligence-in-india
3. https://www.w3schools.com/python/default.asp
4. https://www.computervision.zone/courses/face-attendance/
5. https://www.geeksforgeeks.org/
6. https://www.python.org/
7. https://stackoverflow.com/
8. https://github.com/EbenKouao/SmartCCTV-Camera