
Summer Vocational Training Report

on
Face Recognition & Intrusion Alert System

Submitted By
Manasvi Aggarwal
B. Tech (IVth Year)
Electronics and Communication Engineering
Bhagwan Parshuram Institute of Technology
Sec. 17 (opposite to Sec. 11), Rohini, New Delhi

Training Period
(18 July to 10 September 2022)

Under the Guidance of


Ms. Preeti Verma, Scientist ‘D’
Mr. Bhagwan Jee Mishra, Technical Officer ‘C’

Centre for Fire, Explosive and Environment Safety (CFEES)


Defence Research & Development Organization
Ministry of Defence, Government of India
Timarpur, Delhi, 110054

Assessment of guide
This is to certify that the project compiled by Ms. Manasvi Aggarwal, entitled
“Face Recognition & Intrusion Alert System”, is an organized work carried out by
her under our supervision and guidance during the period 18 July to 10
September 2022. This report of 30 pages does not contain any confidential
information pertaining to DRDO. We wish her a bright future.

Ms. Preeti Verma, Sc ‘D’ (Project Guide)
Mr. Bhagwan Jee Mishra, Technical Officer ‘C’ (Project Guide)

ACKNOWLEDGEMENT
Centre for Fire, Explosive and Environment Safety (CFEES) is one of the premier
establishments of the Defence Research & Development Organization (DRDO). The
establishment is committed to providing state-of-the-art products and services to
its customers in the areas of explosive, fire and environmental safety through
research and development, innovation and teamwork, followed up by continual
improvement based on user feedback.

I am highly obliged to Shri Rajiv Narang, Director, CFEES, Delhi, for allowing
me to associate with this esteemed establishment as a summer trainee in the
‘Aircraft Protection Lab’ during the period 18 July to 10 September 2022.

I extend my heartfelt gratitude to Dr. Meenakshi Gupta, Sc ‘G’ and Associate
Director (FSEG), for providing me the opportunity to work in this department and
learn things related to this project.

Most importantly, I express my sincere thanks to Mr. Hemant Shukla, Sc ‘F’, for
teaching me important concepts needed for the completion of this project.

I extend my heartfelt gratitude to Ms. Preeti Verma, Sc ‘D’, for her unflagging
guidance throughout the progress of this project as well as her valuable
contribution to the preparation and compilation of the text. I am thankful to all
those who have helped me in the successful completion of my training at
CFEES, Delhi.

Further, I extend my heartfelt gratitude to Mr. Bhagwan Jee Mishra,
Technical Officer ‘C’, for entrusting me with a project on “Computer
Vision”. Working on this project has been a valuable learning experience
and has greatly reinforced my knowledge of smart machines.

Manasvi Aggarwal
B. Tech (IVth Year)

INDEX

1.0 Introduction
2.0 Scope of Work
3.0 Methodology of Work
4.0 Hardware Description
5.0 Software Description
6.0 Face Recognition Operations
6.1 Face Detection
6.2 Face Analysis
6.3 Image to Data Conversion
6.4 Match Finding
7.0 Image Processing
8.0 Machine Learning
9.0 About the Project
9.1 Face Recognition Approach
9.1.1 Libraries Used
9.1.2 Getting the Data
9.1.3 Find Face Locations and Encodings
9.1.4 How to Find Matches
9.2 Sending Alert WhatsApp Message
9.2.1 Libraries Used
9.2.2 Sending the Message
9.3 Marking Attendance
9.4 The Main Code
10.0 Conclusion
11.0 Future Scope
12.0 References

ABOUT CFEES

Centre for Fire, Explosive and Environment Safety (CFEES) is one of the premier
establishments of the Defence Research & Development Organization (DRDO).
It comes under the SAM (Simulation, Analysis and Modelling) cluster of DRDO labs.

Its vision is to evolve into a Centre of Excellence in the field of Fire Science &
Engineering, Explosive and Environment Safety, and to provide integrated safety
advice and services to MoD establishments.

Its mandate covers R&D in Fire Science & Engineering, Explosive and Environment
Safety; acting as the Regulatory Authority for Fire, Explosive and Environment
Safety in MoD establishments; and serving as the nodal agency for the
implementation of Safety, Health and Environment (SHE) and Disaster Management
for DRDO.

CFEES strives to attain excellence in the fields of Fire, Explosive and
Environment Safety and to become a leading research laboratory by complying with
its Quality Management System and working for continual improvement.

The Defence Research and Development Organization (DRDO) is a premier
organization of the Government of India, responsible for the development of
technology for use by the three defence services of India. It was formed
in 1958 by the merger of the Technical Development Establishment and the
Directorate of Technical Development and Production with the Defence
Science Organization.

List of Figures

1. Methodology
2. Raspberry Pi Board
3. Python & GUI Symbol
4. PyCharm IDE
5. Face to be detected
6. HOG-generated face pattern
7. Sample photo of detected face
8. Photos for checking accuracy
9. Generation of 68 distinguishing face points
10. Generated equations for the known face
11. Result for known face recognition
12. Unrecognized face detection
13. Attendance record
14. Unknown person record
15. WhatsApp message to the user

1.0 Introduction
Modern technology continues to amaze people with innovations that make life not
only simpler but also more convenient. Face recognition has, over time, proven to
be one of the least intrusive and fastest forms of biometric verification.

Facial recognition is a category of biometric software that maps an individual’s
facial features and stores the data as a face print. The software uses deep learning
algorithms to compare a live captured image with the stored face print to verify a
person’s identity. Image processing and machine learning are the backbones of this
technology. Face recognition has received substantial attention from researchers
because it appears in many security applications such as airport screening,
criminal identification, face tracking and forensics. Compared to other biometric
traits like palm print, iris and fingerprint, face biometrics can be non-intrusive.

Face images can be captured even without the user’s knowledge and can therefore be
used in security applications such as criminal identification, face tracking,
airport security and forensic surveillance systems. Face recognition involves
capturing face images from a video or surveillance camera and comparing them with a
stored database. The system is first trained on known images, which are classified
into known classes and stored in the database. When a test image is given to the
system, it is classified and compared with the stored database.

2.0 Scope of Work
In real time, the project finds application in the defence field, in employee
attendance and in the recognition of criminals through face recognition. In the
modern world, where security is a major concern in defence and crime prevention,
this project overcomes those obstacles in an efficient manner.

The project uses computer vision to recognize known and unknown persons and sends a
fast intrusion alert to the user: the owner is informed through WhatsApp when an
unknown person trespasses into a restricted area. MNCs and government agencies are
already using face recognition for attendance purposes. Further, the system extends
to real-time applications such as finding a criminal in a public place using
previous image records of that criminal.

In today's world, this project is applicable wherever a camera is required, so by
default it extends to a wide range of places.

3.0 Methodology of Work

Fig 1 – Methodology
4.0 Hardware Description
The Raspberry Pi is a low cost, credit-card sized computer that plugs into a computer
monitor or TV, and uses a standard keyboard and mouse. It is a capable little device
that enables people of all ages to explore computing, and to learn how to program in
languages like Scratch and Python. It’s capable of doing everything you’d expect a
desktop computer to do, from browsing the internet and playing high-definition video,
to making spreadsheets, word-processing, and playing games.

What’s more, the Raspberry Pi can interact with the outside world and has been used
in a wide array of digital maker projects, from music machines and parent detectors
to weather stations and tweeting birdhouses with infra-red cameras. The Raspberry Pi
Foundation wants to see it being used by kids all over the world to learn to program
and understand how computers work.

5.0 Software Description
➢ Python (programming language)

Fig 3 – Python & GUI Symbol


Python is a high-level, interpreted, general-purpose programming language. Its design
philosophy emphasizes code readability through the use of significant indentation.

Python is dynamically typed and garbage-collected. It supports multiple programming
paradigms, including structured (particularly procedural), object-oriented and
functional programming. It is often described as a "batteries included" language due
to its comprehensive standard library. Guido van Rossum began working on Python
in the late 1980s as a successor to the ABC programming language and first released
it in 1991 as Python 0.9.0. Python 2.0 was released in 2000 and introduced new
features such as list comprehensions, cycle-detecting garbage collection (in addition
to reference counting) and Unicode support. Python 3.0, released in 2008, was a major
revision that is not completely backward-compatible with earlier versions. Python 2
was discontinued with version 2.7.18 in 2020.

Rather than building all of its functionality into its core, Python was designed to be
highly extensible via modules. This compact modularity has made it particularly
popular as a means of adding programmable interfaces to existing applications. Van
Rossum's vision of a small core language with a large standard library and easily
extensible interpreter stemmed from his frustrations with ABC, which espoused the
opposite approach.

Python strives for a simpler, less-cluttered syntax and grammar while giving
developers a choice in their coding methodology. In contrast to Perl's "there is more
than one way to do it" motto, Python embraces a "there should be one— and preferably
only one—obvious way to do it" philosophy.

➢ PyCharm IDE
PyCharm is a dedicated Python Integrated Development Environment (IDE)
providing a wide range of essential tools for Python developers, tightly integrated to
create a convenient environment for productive Python, web, and data
science development.

Fig 4 - PyCharm IDE


6.0 Face Recognition Operations
The technology may vary when it comes to facial recognition; different software
applies different methods and means to achieve it. The stepwise method is as follows:
1. Face Detection: To begin with, the camera detects and recognizes a face.
The face is detected best when the person is looking directly at the camera,
as this makes facial recognition easier. With advancements in technology,
faces can now also be detected with slight variations in pose relative to
the camera.

2. Face Analysis: The photo of the face is then captured and analyzed.
Most facial recognition relies on 2D images rather than 3D because 2D images
are more convenient to match against the database. The software analyzes
characteristics such as the distance between the eyes and the shape of the
cheekbones.

3. Image to Data Conversion: The analyzed face is converted into a mathematical
formula, and the facial features become numbers. This numerical code is known
as a face print. Just as every person has a unique fingerprint, every person
has a unique face print.

4. Match Finding: The code is then compared against a database of other face
prints. This database holds photos with identification that can be compared.
The technology identifies a match for the extracted features in the provided
database and returns the match along with attached information such as name
and address, depending on the information saved for that individual.

6.1 Face Detection:
Face detection is a great feature for cameras. When the camera can automatically
pick out faces, it can make sure that all the faces are in focus before it takes the
picture.

Fig 5 - Face to be detected


Face detection went mainstream in the early 2000s when Paul Viola and
Michael Jones invented a way to detect faces that was fast enough to run on
cheap cameras. However, much more reliable solutions exist now. We’re
going to use a method invented in 2005 called Histogram of Oriented
Gradients, or just HOG for short.

To find faces in an image, we start by making the image black and white,
because we don’t need color data to find faces. The goal is to figure out how dark
the current pixel is compared to the pixels directly surrounding it, and then to
draw an arrow showing the direction in which the image is getting darker:

Fig 6 – HOG Face generated pattern

Using this technique, we can now easily find faces in any image.

Fig 7 – Sample Photo of detected face
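The face_recognition library used later in this report exposes this HOG-based
detector (through dlib). A minimal sketch of finding and boxing faces with it;
the file names are assumptions:

import cv2
import face_recognition

img = face_recognition.load_image_file("sample.jpg")        # RGB image as a NumPy array
face_locations = face_recognition.face_locations(img, model="hog")

# Draw a rectangle around every detected face and save the result.
img_bgr = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
for top, right, bottom, left in face_locations:
    cv2.rectangle(img_bgr, (left, top), (right, bottom), (0, 255, 0), 2)
cv2.imwrite("detected.jpg", img_bgr)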


6.2 Face Analysis:
Now we have to deal with the problem that faces turned in different directions
look totally different to a computer:

Fig 8 – Photos For checking accuracy


To handle this, we are going to use an algorithm called face landmark
estimation. There are lots of ways to do this, but we are going to use the
approach invented in 2014 by Vahid Kazemi and Josephine Sullivan. The
basic idea is to identify 68 specific points (called landmarks) that exist on
every face: the top of the chin, the outside edge of each eye, the inner edge
of each eyebrow, and so on. A machine learning algorithm is then trained to
find these 68 specific points on any face:

Fig 9 – Generation of 68 distinguish face points
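A minimal sketch of face landmark estimation using the face_recognition library,
which wraps dlib's landmark predictor; the file name is an assumption:

import face_recognition

img = face_recognition.load_image_file("sample.jpg")
landmarks_list = face_recognition.face_landmarks(img)

for landmarks in landmarks_list:               # one dictionary per detected face
    for feature, points in landmarks.items():  # e.g. 'chin', 'left_eye', 'nose_tip', ...
        print(feature, len(points), "points")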

6.3 Image to Data Conversion:
What we need is a way to extract a few basic measurements from each face.
Then we could measure our unknown face the same way and find the known
face with the closest measurements. For example, we might measure the
size of each ear, the spacing between the eyes, the length of the nose, etc.
It turns out that the measurements that seem obvious to us humans (like eye
color) don’t really make sense to a computer looking at individual pixels in
an image. Researchers have discovered that the most accurate approach is to
let the computer figure out the measurements to collect itself. Deep learning
does a better job than humans at figuring out which parts of a face are
important to measure.

We are going to train a deep network to generate 128 measurements for each face.

Fig 10 – Generated Equations for the known face
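With the face_recognition library, the pretrained network that produces these
128 measurements is available through face_encodings(); a minimal sketch, with
the file name assumed:

import face_recognition

img = face_recognition.load_image_file("known_person.jpg")
encodings = face_recognition.face_encodings(img)   # one 128-number vector per detected face

print(len(encodings[0]))                           # prints 128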

6.4 Match Finding:
This last step is actually the easiest step in the whole process. All we have to do
is find the person in our database of known people who has the closest
measurements to our test image.

Fig 11 – Result for known face recognition


We can do that by using any basic machine learning classification algorithm.
We’ll use a simple linear SVM classifier, but lots of classification algorithms
could work. All we need to do is train a classifier that takes in the
measurements from a new test image and tells us which known person is the closest
match. The output of running this classifier is the name of the person.
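The project's own code (Section 9) compares encodings directly with face_recognition,
but as an illustration of this step, a linear SVM from scikit-learn could be trained
on the 128-number encodings. A hedged sketch, in which scikit-learn, the file names
and the second known person are assumptions:

import face_recognition
from sklearn import svm

# Encodings of the known people, one image each (file names assumed).
known_names = ["Manasvi", "OtherPerson"]
known_encodings = [
    face_recognition.face_encodings(face_recognition.load_image_file(f"{n}.jpg"))[0]
    for n in known_names
]

clf = svm.SVC(kernel="linear")           # simple linear SVM classifier
clf.fit(known_encodings, known_names)

test_img = face_recognition.load_image_file("test.jpg")
test_encoding = face_recognition.face_encodings(test_img)[0]
print(clf.predict([test_encoding])[0])   # name of the closest known person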

7.0 Image Processing
Image processing by computers involves computer vision, which deals with the
high-level understanding of digital images or videos. The requirement is to
automate tasks that the human visual system can do, so a computer should be able
to recognize objects such as a human face, a lamppost or even a statue.

Image processing tools

Image reading

The computer reads any image as a range of values between 0 and 255. For a color
image there are three primary colors: red, green and blue. A matrix is formed for
each primary color, and these matrices together provide the pixel value for the
individual R, G and B channels. Each element of the matrices provides data about
the intensity (brightness) of the corresponding pixel.
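A minimal sketch of reading an image and inspecting its color matrices with
OpenCV; the file name is an assumption:

import cv2

img = cv2.imread("sample.jpg")       # OpenCV loads images in B, G, R order
print(img.shape)                     # (height, width, 3) -> one matrix per color
print(img.dtype)                     # uint8 -> every value lies between 0 and 255

b, g, r = cv2.split(img)             # the three primary-color matrices
print(b[0, 0], g[0, 0], r[0, 0])     # intensity of the top-left pixel in each channel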

OpenCV:

OpenCV stands for Open-Source Computer Vision Library. This library contains more
than 2000 optimized algorithms that are useful for computer vision and machine
learning. There are several ways OpenCV can be used in image processing; a few are
listed below, followed by a short sketch:

• Converting images from one color space to another.


• Performing thresholding on images, like, simple thresholding, adaptive
thresholding etc.
• Smoothing of images, like, applying custom filters to images and blurring
of images.
• Performing morphological operations on images.
• Building image pyramids.
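A short sketch of a few of the operations listed above; the file names are
assumptions:

import cv2

img = cv2.imread("sample.jpg")

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)                  # color-space conversion
_, thresh = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)  # simple thresholding
blurred = cv2.GaussianBlur(img, (5, 5), 0)                    # smoothing / blurring
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
eroded = cv2.erode(thresh, kernel)                            # morphological operation
smaller = cv2.pyrDown(img)                                    # one level of an image pyramid

cv2.imwrite("threshold.jpg", thresh)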

PIL/Pillow:

PIL stands for Python Imaging Library, and Pillow is the friendly PIL fork
maintained by Alex Clark and contributors. It is one of the most powerful image
libraries and supports a wide range of image formats such as PPM, JPEG, TIFF, GIF,
PNG and BMP. It can help perform several operations on images, such as rotating,
resizing, cropping and grayscaling. Let’s go through some of those operations.
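A minimal sketch of a few Pillow operations; the file name is an assumption:

from PIL import Image

img = Image.open("sample.jpg")

rotated = img.rotate(90)              # rotate by 90 degrees
resized = img.resize((200, 200))      # resize to 200 x 200 pixels
cropped = img.crop((0, 0, 100, 100))  # crop the top-left 100 x 100 region
gray = img.convert("L")               # convert to grayscale

gray.save("sample_gray.jpg")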

NumPy:

With this library you can also perform simple image techniques, such as
flipping images, extracting features and analyzing them.

Images can be represented as NumPy multi-dimensional arrays, so their type is
ndarray. A color image is a NumPy array with three dimensions, and by slicing the
multi-dimensional array the RGB channels can be separated.
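A minimal sketch of representing an image as an ndarray and separating the
channels by slicing; the file name is an assumption:

import numpy as np
from PIL import Image

img = np.array(Image.open("sample.jpg"))   # shape: (height, width, 3), dtype uint8
print(type(img), img.shape)

r = img[:, :, 0]                           # red channel
g = img[:, :, 1]                           # green channel
b = img[:, :, 2]                           # blue channel

flipped = np.flipud(img)                   # simple technique: flip the image vertically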

Face Recognition:
Recognize and manipulate faces from Python or from the command line with
the world’s simplest face recognition library.

Mahotas:

Mahotas is a computer vision and image processing library with more than 100
functions. Many of its algorithms are implemented in C++ for speed, while the
Python interface operates on NumPy arrays. Mahotas is an independent module in
itself, i.e. it keeps its dependencies minimal.

8.0 Machine learning
Every machine learning algorithm takes a dataset as input and learns from that
data; in other words, the algorithm learns from the provided input and output
data. It identifies the patterns in the data and produces the desired model.
For instance, to identify whose face is present in a given image, multiple things
can be looked at as a pattern:
• Height/width of the face.
• Height and width may not be reliable since the image could be
rescaled to a smaller face or grid. However, even after rescaling,
what remains unchanged are the ratios – the ratio of the height of the
face to the width of the face won’t change.
• Color of the face.
• Width of other parts of the face like lips, nose, etc.

There is a pattern involved: different faces have different dimensions like the
ones above, while similar faces have similar dimensions. Machine learning
algorithms only understand numbers, so representing a face numerically is the
challenge. This numerical representation of a “face” (or an element in the
training set) is termed a feature vector. A feature vector comprises various
numbers in a specific order.
As a simple example, we can map a “face” into a feature vector which can
comprise various features like:
• Height of face (cm)
• Width of the face (cm)
• Average color of face (R, G, B)
• Width of lips (cm)
• Height of nose (cm)

Essentially, given an image, we can convert it into a feature vector like:

Height of face (cm): 23.1
Width of the face (cm): 15.8
Average color of face (R, G, B): (255, 224, 189)
Width of lips (cm): 5.2
Height of nose (cm): 4.4

So, the image is now a vector that could be represented as (23.1, 15.8, 255,
224, 189, 5.2, 4.4). There could be countless other features that could be
derived from the image, for instance, hair color, facial hair, spectacles, etc.
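As a small illustration, the example vector above can be stored as a NumPy array
and compared with another, hypothetical face vector using a Euclidean distance,
which is essentially what the matching step later does with 128-number encodings
(the second set of values is assumed):

import numpy as np

face_a = np.array([23.1, 15.8, 255, 224, 189, 5.2, 4.4])
face_b = np.array([22.8, 15.5, 250, 220, 185, 5.0, 4.6])   # hypothetical second face

distance = np.linalg.norm(face_a - face_b)   # small distance -> similar faces
print(distance)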

Machine Learning does two major functions in face recognition technology.
These are given below:
1. Deriving the feature vector: It is difficult to manually list all of the
features because there are so many. A machine learning algorithm can
intelligently derive many such features; for instance, a complex feature
could be the ratio of the height of the nose to the width of the forehead.
2. Matching algorithms: Once the feature vectors have been obtained, a
Machine Learning algorithm needs to match a new image with the
set of feature vectors present in the corpus.
Machine learning is often confused with artificial intelligence, but machine
learning, a fundamental concept of AI research since the field’s inception, is the
study of computer algorithms that improve automatically through experience. The
mathematical analysis of machine learning algorithms and their performance is a
branch of theoretical computer science known as computational learning theory.

9.0 About the Project
As introduced in Section 1.0, facial recognition maps an individual’s facial
features and stores the data as a face print, and deep learning algorithms compare
a live captured image with that face print to verify a person’s identity. This
project combines that capability with an intrusion alert: known faces are
recognized and their attendance is marked, while an unknown face triggers a
WhatsApp alert to the user.

9.1 Face Recognition Approach

9.1.1 Libraries Used
For this project we used various Python libraries. The libraries used are:
1. OpenCV: a huge open-source library for computer vision, machine learning and
image processing. OpenCV supports a wide variety of programming languages such as
Python, C++ and Java. It can process images and videos to identify objects, faces
or even the handwriting of a human. When it is integrated with other libraries,
such as NumPy (a highly optimized library for numerical operations), all the
operations available in NumPy can be combined with OpenCV. To install this
library, run the following command in a terminal:
pip install opencv-python

2. NumPy: a general-purpose array-processing package. It provides a
high-performance multidimensional array object and tools for working with these
arrays. It is the fundamental package for scientific computing with Python and is
open-source software. To install this library, run the following command in a
terminal:
pip install numpy

3. face_recognition: recognize and manipulate faces from Python or from the
command line with the world's simplest face recognition library, built using
dlib's state-of-the-art face recognition with deep learning. The model has an
accuracy of 99.38% on the Labeled Faces in the Wild benchmark. To install this
library, run the following command in a terminal:
pip3 install face_recognition
4. CMake: used to control the software compilation process using simple
platform- and compiler-independent configuration files, and to generate native
makefiles and workspaces that can be used in the compiler environment of your
choice. The suite of CMake tools was created by Kitware in response to the need
for a powerful, cross-platform build environment for open-source projects such as
ITK and VTK. To install it, run the following command in a terminal:
pip install cmake
5. dlib: a toolkit for making real-world machine learning and data analysis
applications. To install this library, run the following command in a terminal:
pip install dlib
6. os: a module in Python that provides functions for interacting with the
operating system. os comes under Python’s standard utility modules and provides a
portable way of using operating-system-dependent functionality; the os and
os.path modules include many functions to interact with the file system. Because
it is part of the standard library, no separate installation is required.

9.1.2 Getting the Data

I made a folder called ImagesAttendance, to which you can add all the images that
you want in the dataset. From this folder we extract the number of dataset classes
available to us.
*CODE*
import os
import cv2

path = 'ImagesAttendance'
images = []     # list containing all the images
className = []  # list containing the corresponding class names (file names without extension)
myList = os.listdir(path)
print("Total Classes Detected:", len(myList))

for cl in myList:
    curImg = cv2.imread(f'{path}/{cl}')   # read each image from the folder
    images.append(curImg)
    className.append(os.path.splitext(cl)[0])

9.1.3 Find Face Locations and Encodings
In this step we use the true functionality of the face recognition library. First,
we find the faces in our images; this is done using HOG (Histogram of Oriented
Gradients) at the back end. Once we have the faces, they are warped to remove
unwanted rotations. Then the image is fed to a pretrained neural network that
outputs 128 measurements unique to that particular face. The parts of the face
that the model measures are not known; this is what the model learned by itself
when it was trained. Luckily for us, all of this is done in just two lines of
code. Once we have the face locations and the encodings, we can draw rectangles
around our faces.
Now that we have a list of images, we can iterate through them and create a
corresponding list of encodings for the known faces. To do this we create a
function. As before, we first convert each image to RGB and then find its encoding
using the face_encodings() function, appending each encoding to our list.
*CODE*
import face_recognition

def findEncodings(images):
    encodeList = []
    for img in images:
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)        # OpenCV loads BGR; convert to RGB
        encode = face_recognition.face_encodings(img)[0]  # 128-d encoding of the first face found
        encodeList.append(encode)
    return encodeList

encodeListKnown = findEncodings(images)   # encodings of all known faces (used below)

9.1.4 How to Find Matches

Now we can match the current face encodings against our list of known face
encodings to find the matches. We also compute the distance, which is used to find
the best match in case more than one face is detected at a time.
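The variables facesCurFrame and encodesCurFrame used below hold the face locations
and encodings found in the current webcam frame. A minimal sketch of how they can
be obtained; the quarter-size resize is an assumption that matches the ×4 rescaling
of the coordinates further down:

import cv2
import face_recognition

cap = cv2.VideoCapture(0)                            # webcam (assumed source of frames)
success, img = cap.read()                            # one frame of the video stream
imgS = cv2.resize(img, (0, 0), None, 0.25, 0.25)     # quarter-size copy for speed (assumption)
imgS = cv2.cvtColor(imgS, cv2.COLOR_BGR2RGB)         # face_recognition expects RGB images

facesCurFrame = face_recognition.face_locations(imgS)
encodesCurFrame = face_recognition.face_encodings(imgS, facesCurFrame)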
for encodeFace, faceLoc in zip(encodesCurFrame, facesCurFrame):
    matches = face_recognition.compare_faces(encodeListKnown, encodeFace)  # True/False per known face
    faceDis = face_recognition.face_distance(encodeListKnown, encodeFace)  # distance to each known face

Once we have the list of face distances, we can find the minimum one, as this
would be the best match.
matchIndex = np.argmin(faceDis)

Now, based on the index value, we can determine the name of the person and
display it on the original image.

if faceDis[matchIndex] < 0.50:
    print(faceDis[matchIndex])
    name = className[matchIndex].upper()     # known person: use the stored class name
else:
    name = "Unknown"                         # no close enough match
print(name)
# markAttendance(name)
sendmail(name)                               # helper functions not reproduced in this report
read_email_from_gmail()
y1, x2, y2, x1 = faceLoc
y1, x2, y2, x1 = y1 * 4, x2 * 4, y2 * 4, x1 * 4   # scale coordinates back up (locations were found on a reduced-size frame; see the sketch above)
cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
cv2.rectangle(img, (x1, y2 - 35), (x2, y2), (0, 255, 0), cv2.FILLED)
cv2.putText(img, name, (x1 + 6, y2 - 6), cv2.FONT_HERSHEY_DUPLEX, 1.0,
            (255, 255, 255), 1)

All this does is check whether the distance to the closest known face is less
than 0.5. If it is not, the person is unknown, so we set the name to “Unknown”
and do not mark attendance.

Fig 12 – Unrecognized Face Detection

9.2 Sending Alert WhatsApp Message
9.2.1 Libraries Used

1. pywhatkit: a Python library for sending WhatsApp messages at a given time;
it has several other features too.
Some features of the pywhatkit module are:
1. Send WhatsApp messages.
2. Play a YouTube video.
3. Perform a Google search.
4. Get information on a particular topic.

9.2.2 Sending the Message

For this we use the pywhatkit library, which is imported in the main program.
else:
    pwt.sendwhatmsg("+919205254611", "UNKNOWN PERSON ENTERED",
                    int(time.strftime("%H")), int(time.strftime("%M")) + 1)
    print("msg sent")
If the face belongs to an unknown person, a WhatsApp message is sent to the user.
Message confirmation: see Fig 15 (WhatsApp message to the user).

9.3 Marking Attendance

Next we add the automated attendance code. We start by writing a function that
requires only one input: the name of the user. First, we open our attendance file,
which is in CSV format. Then we read all the lines and iterate through each line
using a for loop, splitting on the comma ‘,’. This gives us the first element,
which is the name of the user. If the user in front of the camera already has an
entry in the file, nothing happens; if the user is new, the name of the user along
with the current timestamp is stored. We use the datetime class from the datetime
package to get the current time.

from datetime import datetime

def markAttendance(name):
    with open('Attendance.csv', 'r+') as f:
        myDataList = f.readlines()            # existing attendance entries
        nameList = []
        for line in myDataList:
            entry = line.split(',')
            nameList.append(entry[0])         # first field of each line is the name
        if name not in nameList:              # only record a person once
            now = datetime.now()
            dt_string = now.strftime("%H:%M:%S")
            f.writelines(f'{name},{dt_string}\n')

➢ Attendance Record (Fig 13)

➢ Unknown Person Record (Fig 14)

➢ WhatsApp Message Received by the User (Fig 15)

9.4 The Main Code

*CODE*
import numpy as np
import face_recognition as fr
import cv2
import pywhatkit as pwt
import time

video_capture = cv2.VideoCapture(0)              # open the default webcam

manasvi_image = fr.load_image_file("Manasvi.jpg")
manasvi_face_encoding = fr.face_encodings(manasvi_image)[0]

known_face_encodings = [manasvi_face_encoding]   # encodings of all known people
known_face_names = ["Manasvi"]

while True:
    ret, frame = video_capture.read()

    rgb_frame = frame[:, :, ::-1]                # convert BGR (OpenCV) to RGB (face_recognition)

    face_locations = fr.face_locations(rgb_frame)
    face_encodings = fr.face_encodings(rgb_frame, face_locations)

    for (top, right, bottom, left), face_encoding in zip(face_locations, face_encodings):

        matches = fr.compare_faces(known_face_encodings, face_encoding)
        name = "Unknown"

        face_distances = fr.face_distance(known_face_encodings, face_encoding)

        best_match_index = np.argmin(face_distances)
        if matches[best_match_index]:
            name = known_face_names[best_match_index]
        else:
            # unknown person: send an intrusion alert over WhatsApp, scheduled for the next minute
            pwt.sendwhatmsg("+919205254611", "UNKNOWN PERSON ENTERED",
                            int(time.strftime("%H")), int(time.strftime("%M")) + 1)
            print("msg sent")

        # draw a box and label around the detected face
        cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)
        cv2.rectangle(frame, (left, bottom - 35), (right, bottom), (0, 0, 255), cv2.FILLED)
        font = cv2.FONT_HERSHEY_SIMPLEX
        cv2.putText(frame, name, (left + 6, bottom - 6), font, 1.0, (255, 255, 255), 1)
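The remaining lines of the original listing (numbered 49 to 67 in the PDF) are not
reproduced in the report. A typical ending for such a capture loop, given here only
as an assumption and not as the report's own code, would display the annotated
frame inside the while loop and release the camera afterwards:

        # inside the while loop: show the annotated frame and allow quitting with 'q'
        cv2.imshow("Face Recognition & Intrusion Alert", frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

# after the loop: release the webcam and close the window
video_capture.release()
cv2.destroyAllWindows()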

10.0 Conclusion
AI is at the center of a new enterprise to build computational models of
intelligence. The main assumption is that intelligence (human or otherwise)
can be represented in terms of symbol structures and symbolic operations
which can be programmed in a digital computer.

There is much debate as to whether such an appropriately programmed computer would
be a mind or would merely simulate one, but AI researchers need not wait for the
conclusion of that debate, nor for the hypothetical computer that could model all
of human intelligence. Aspects of intelligent behavior, such as solving problems,
making inferences, learning and understanding language, have already been coded as
computer programs, and within very limited domains, such as identifying diseases
of soybean plants, AI programs can outperform human experts.

Now the great challenge of AI is to find ways of representing the common-sense
knowledge and experience that enable people to carry out everyday activities such
as holding a wide-ranging conversation or finding their way along a busy street.
Conventional digital computers may be capable of running such programs, or we may
need to develop new machines that can support the complexity of human thought.

With the use of AI, solutions to many complex problems can be found.

Python, a high-level, interpreted, general-purpose programming language, was used
to implement the solution to our problem.

11.0 Future Scope
The scope of artificial intelligence in India is still in the adoption stage, but
slowly it is being used to find smart solutions to modern problems in almost all
the major sectors, such as agriculture, healthcare, education, infrastructure,
transport, cyber security, banking, manufacturing, business, hospitality and
entertainment.

Facial recognition solutions are expected to be present in 1.3 billion devices by
2024. Powered by AI, facial recognition software in mobile phones is already being
used by companies like iProov and Mastercard to authenticate payments and other
high-end authentication tasks. Such uses will increase as we move into 2022 and
beyond.

Around the home, security systems are also turning to facial recognition, both to
improve security in and around the home and to improve access and create a more
seamless experience, especially when deployed in smart home or building
developments.

Companies like Netatmo, Netgear, Honeywell and Ooma have home security
systems with facial recognition incorporated, helping to identify people when
they arrive at your home as well as detecting potential intruders when you are
away from the home.

Honeywell has partnered with Amazon’s Alexa to offer a great solution for those
looking to create a smarter home.

Taking things one step further, Google Nest Cam IQ watches over your property
24×7 and can detect people from 50m away. The system can recognize familiar
faces and you can pre-set actions for those visitors such as opening a gate or front
door.

While facial recognition is undoubtedly changing the world in which we live, that
world is also changing the way facial recognition is being deployed.

Regarding our project, an app-based smart monitoring system can be developed based
on future needs.

12.0 References
1. https://medium.com/@ageitgey/machine-learning-is-fun-part-4-modern-face-recognition-with-deep-learning-c3cffc121d78

2. https://www.careers360.com/courses-certifications/articles/scope-of-artificial-intelligence-in-india

3. https://www.w3schools.com/python/default.asp

4. https://www.computervision.zone/courses/face-attendance/

5. https://www.geeksforgeeks.org/

6. https://www.python.org/

7. https://stackoverflow.com/

8. https://github.com/EbenKouao/SmartCCTV-Camera
