Team 11 Project
PROJECT REPORT
ON
ANTI-SLEEP ALARMS FOR DRIVERS
Submitted in partial fulfilment of the requirement for the award
of the degree of
BACHELOR OF TECHNOLOGY
IN
INFORMATION TECHNOLOGY
BY
Ch.Akhil Reddy(22p61a1226)
B.Shashi VishwaMedha(22p61a1213)
Mr. P. Shivakumar
This is to certify that the project entitled “ANTI-SLEEP ALARMS FOR DRIVERS” is being submitted by Ch. Akhil Reddy (22P61A1226), B. Shashi VishwaMedha (22P61A1213), and T. Pavan Sai (23P65A1205) in partial fulfilment of the requirement for the award of the degree of Bachelor of Technology in Information Technology, and is a record of bonafide work carried out by them under my guidance and supervision during the academic year 2023-2024. The results embodied in this project report have not been submitted to any other University for the award of any degree or diploma.
Internal Guide: Mr. P. Shivakumar, Asst. Professor, Department of IT
Head of the Department: Dr. K. Kalaivani, Department of IT
Project Coordinator: Mr. MD. Imtiaz Ali, Asst. Professor, Department of IT
External Examiner:
Name:
College:
AUSHAPUR (V), GHATKESAR (M), MEDCHAL.DIST-501 301
This is a record of bonafide work carried out by us and the results embodied in this project
have not been reproduced or copied from any source. The results embodied in this project
report have not been submitted to any other university or institute for the award of any other
degree or diploma.
CH.AKHIL REDDY(22P61A1226)
B.SHASHI VISHWAMEDHA (22P61A1213)
T. PAVAN SAI(23P61A1205)
ACKNOWLEDGEMENT
First and foremost, we wish to express our gratitude towards the institution “Vignana Bharathi Institute of Technology” for fulfilling our most cherished goal of pursuing a Bachelor of Technology.
It is a great pleasure to express our deep sense of gratitude to our internal guide, Mr. P. Shivakumar, Asst. Professor, Department of Information Technology, for his valuable guidance and the freedom he gave us.
We also express our sincere thanks to Mr. MD. Imtiaz Ali, B.Tech Coordinator, for his encouragement and support throughout the project.
Our utmost thanks also go to all the FACULTY MEMBERS and NON-TEACHING STAFF of the Department of Information Technology for their support throughout our project work.
CH.AKHIL REDDY(22P61A1226)
T. PAVAN SAI(23P61A1205)
VIGNANA BHARATHI INSTITUTE OF TECHNOLOGY
COURSE OUTCOMES
AY: 2023-2024
Course Outcomes
C424.2 Analyze the existing system and outline the proposed methodology for an effective solution. (Analyze)
C424.3 Use various modern tools for designing applications based on specified requirements. (Apply)
C424.4 Develop applications with adequate features and evaluate the application to ensure quality. (Create)
C424.5 Prepare the document of the project as per the guidelines. (Create)
5. Modern tool usage: Create, select, and apply appropriate techniques, resources, and
modern engineering and IT tools including prediction and modeling to complex engineering
activities with an understanding of the limitations.
6. The engineer and society: Apply reasoning informed by the contextual knowledge to
assess societal, health, safety, legal and cultural issues and the consequent responsibilities
relevant to the professional engineering practice.
8. Ethics: Apply ethical principles and commit to professional ethics and responsibilities
and norms of the engineering practice.
11. Project management and finance: Demonstrate knowledge and understanding of the
engineering and management principles and apply these to one’s own work, as a member and
leader in a team, to manage projects and in multidisciplinary environments.
12. Life-long learning: Recognize the need for, and have the preparation and ability to
engage in independent and life-long learning in the broadest context of technological change.
PROGRAM SPECIFIC OUTCOMES (PSO’s)
PSO1 Simulate computer hardware and apply software engineering principles and techniques
to develop various IT applications
PSO2 Analyze various networking concepts and be aware of how security policies, standards, and practices are used for troubleshooting.
PSO3 Design and maintain database for providing back-end support to software projects.
PSO4 Apply algorithms and programming paradigms to produce IT based solutions for the
real-world problems.
CO     PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12 PSO1 PSO2 PSO3 PSO4
C424.1  3   3   2   3   3   2   -   -   3   3    3    3    2    -    -    3
C424.2  2   2   3   2   3   2   -   -   3   3    3    3    2    -    -    3
C424.3  2   2   3   2   3   2   -   -   3   3    3    3    3    -    -    3
C424.4  2   2   3   2   3   2   -   -   3   3    3    3    2    -    -    3
C424.5  2   2   2   2   3   2   -   -   3   3    3    3    2    -    -    2
Mapped POs:
PO 1 Able to attain basic knowledge and engineering fundamentals to identify and state the problem.
PO 2 Able to analyze complex problems to develop solutions for detecting Noisy data in
satellite images.
PO 3 Able to design solutions for complex problem and design software components,
process to meet specifications.
PO 4 Able to analyze complex problems which are faced in detecting Noisy data and
developing an application which reduces the complexity and improves the efficiency
and reliability.
PO 5 Able to develop web applications by using Integrated Modern Tools using Flask
PO 6 Able to develop a web application which helps and minimizes the problems faced by the image detection specialist.
PO 7 -----
PO 8 -----
PO 10 Able to work effectively as they communicate with each other while developing their project.
PO 11 Able to apply principles and techniques which are used in our application to integrate into new applications.
PO 12 Able to engage by learning emerging technologies which help in developing a user-friendly application.
Mapped PSOs:
PSO1 Able to apply software engineering principles and techniques to develop the
web application of automate quality certification of remote sensing satellite
images.
PSO2 -----
PSO 3 -----
Supervisor Signature
ABSTRACT
The Anti-Sleep Alarm for Drivers is a crucial safety tool for preventing drowsy-driving accidents. The device uses advanced sensors to monitor a driver's condition, detecting early signs of fatigue through changes in head position and eye movement. In modern times, owing to hectic schedules, it becomes very difficult to remain alert all the time. Imagine a person driving home from work, dead tired after facing all the challenges of the day. The hands are on the wheel and the foot is on the pedal, but the driver suddenly starts feeling drowsy; the eyes start shutting, the vision blurs, and before they know it, they fall asleep. Falling asleep at the wheel can lead to serious consequences: accidents may occur and people may even lose their lives. This situation is all too common, and hence it is very important to counter this problem. To address this issue, the project Anti-Sleep Alarm for Drivers is introduced.
INDEX
CONTENT PAGE NUMBER
Certification
Acknowledgement
Abstract
List of Figures
1. INTRODUCTION 1
1.1 Motivation 2
1.1.1 Overview of Existing System 2
1.1.2 Overview of Proposed System 2
1.1.3 System Features 3
1.2 Problem definition 3
1.3 Objective of Project 3
1.4 Scope of Project 4
2. LITERATURE SURVEY 5
3. SYSTEM ANALYSIS 10
3.1 System architecture 10
3.1.1 Architecture Diagram 10
3.2 Description of components 11
3.2.1 Convolutional Neural Network (CNN) 11
3.2.2 Data Augmentation 12
3.2.3 Vgg16 Architecture 13
3.2.4 Flask Web Application 14
3.3 Operating Requirements 15
4. SYSTEM DESIGN 16
4.1 UML diagrams 16
5. IMPLEMENTATION 24
5.1 Sample code 24
5.1.1 CNN Model Creation 24
5.1.2 Backend (flask) 26
6. OUTPUT SCREENS 29
7. TESTING & DEBUGGING 33
8. CONCLUSIONS 39
9. FUTURE ENHANCEMENTS 40
10. REFERENCES 42
LIST OF FIGURES
Fig 3.1.1: Architecture Diagram 10
Fig 4.1: Use case diagram 17
Fig 4.2: Class diagram 18
Fig 4.3: Sequence diagram 19
Fig 4.4: Activity diagram 20
Fig 4.5: Collaboration diagram 21
Fig 4.6: Component diagram 22
Fig 4.7: Deployment diagram 24
Fig 6.1: Home page 29
Fig 6.2: Selecting Image 30
Fig 6.3: Stripe Noise Image Uploaded 30
Fig 6.4: Salt and Pepper Noise Image Uploaded 31
Fig 6.5: Data Loss Noise Image Uploaded 31
Fig 6.6: Stripe Noise detected 32
Fig 6.7: No Noise detected 32
LIST OF TABLES
Table 7.1: Test cases for system 37
CHAPTER 01
INTRODUCTION
1 INTRODUCTION
The system uses an IR sensor to detect a driver's eye blinks and a microcontroller to process the sensor data. If no eye blinks are detected for a period of time, indicating potential drowsiness, the system will stop the vehicle and trigger an alarm to prevent accidents. Through this exploration, we aim to highlight the critical role of technology in promoting safer driving practices and fostering a culture of responsibility behind the wheel. By understanding the benefits and limitations of anti-sleep alarms, we can work towards creating a safer environment for all road users.
1.1 MOTIVATION
• Safety
• Reduced Risk
• Adjustable Sensitivity
1.1.3 SYSTEM FEATURES
• Drowsiness detection
• Low Power consumption
• Real-time Monitoring
The Anti-Sleep Alarm for Drivers is a crucial safety tool for preventing drowsy-driving accidents. The device uses advanced sensors to monitor a driver's condition, detecting early signs of fatigue through changes in head position and eye movement. The purpose of the drowsiness detection system is to aid in the prevention of accidents in passenger and commercial vehicles. The system detects the early symptoms of drowsiness before the driver has fully lost all attentiveness and warns the driver that they are no longer capable of operating the vehicle safely. The system alerts a person who falls asleep at the wheel, thereby avoiding accidents and saving lives. It is useful especially for people who travel long distances and people who drive late at night. The circuit is built using an Arduino Nano, a switch, a piezo buzzer, a micro vibration motor, and an eye blink sensor. Whenever the driver feels sleepy and falls asleep, the eye blink sensor detects it and the buzzer turns ON with an intermittent beep. When the driver returns to a normal state, the eye blink sensor senses this and the buzzer turns OFF.
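The alarm behaviour described above (buzzer ON after prolonged eye closure, OFF once the driver recovers) amounts to a small state machine. The sketch below is an illustrative Python simulation of that logic, not the Arduino firmware itself; the polling interval and the 2-second closure threshold are assumed values, not figures from the report.

```python
# Illustrative simulation of the eye-blink alarm logic described above.
# On the real hardware, `eye_closed` would come from the IR eye-blink
# sensor and `buzzer_on` would drive the piezo buzzer via a digital pin.

CLOSED_THRESHOLD = 2.0  # assumed: seconds of continuous closure before alarm

def update_alarm(eye_closed: bool, closed_time: float, dt: float):
    """Return (new_closed_time, buzzer_on) after one sensor poll of dt seconds."""
    if eye_closed:
        closed_time += dt
    else:
        closed_time = 0.0          # driver back to normal state: reset timer
    return closed_time, closed_time >= CLOSED_THRESHOLD

# Simulate a driver whose eyes stay shut for 3 seconds, then reopen.
closed_time = 0.0
states = []
for eye_closed in [True] * 6 + [False] * 2:   # polled every 0.5 s
    closed_time, buzzer_on = update_alarm(eye_closed, closed_time, dt=0.5)
    states.append(buzzer_on)

print(states)  # buzzer switches ON once closure exceeds the threshold, then OFF
```

Keeping the threshold in one named constant mirrors the report's "adjustable sensitivity" feature: tuning the alarm only requires changing that value.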
1.3 OBJECTIVE OF PROJECT
• To alert drowsy drivers and stop the vehicle to prevent accidents caused by driver fatigue.
• To create a responsive and informative user interface to display certification results, enabling users to make informed decisions based on the image quality assessment.
1.4 SCOPE OF PROJECT
The scope of the problem of driver fatigue can be quite broad, as it can affect drivers of all types of vehicles in a variety of settings. One of the key factors that can contribute to driver fatigue is the length of time spent driving: the longer a person drives, the more likely they are to experience fatigue. This is especially true if the trip involves long stretches of monotonous driving or if the driver has been awake for an extended period of time.
The scope of a driver anti-sleep alarm system project would depend on the
specific goals and objectives of the project. Some projects may focus on
addressing one or more of the factors listed above, while others may take a more
comprehensive approach to addressing driver fatigue. The scope of the project
could also vary based on the target audience, such as whether it is designed for
commercial truck drivers, long-haul drivers, or everyday commuters.
CHAPTER 02
LITERATURE SURVEY
2 LITERATURE SURVEY
SURVEY 1:
SURVEY 2:
sleep alarms. The paper also discusses the challenges and future
SURVEY 3:
SURVEY 4:
systems. It examines the pros and cons of each method and discusses
SURVEY 5:
CHAPTER 03
SYSTEM ANALYSIS
3 SYSTEM ANALYSIS
3.1 SYSTEM ARCHITECTURE
An anti-sleep alarm system for drivers is designed to detect signs of drowsiness and alert the driver to prevent accidents. The architecture of such a system typically comprises several key components, each playing a crucial role in ensuring the system's effectiveness and reliability.
The foundation of the system lies in various types of sensors. Physiological sensors, such as
heart rate monitors and EEG (electroencephalogram) devices, monitor the driver’s vital signs
to detect early signs of fatigue. Behavioral sensors, including cameras, track eye movements,
head position, and facial expressions to identify signs of drowsiness. Additionally, vehicle-
based sensors monitor driving patterns, such as steering behavior, lane deviations, and speed
fluctuations, which can indicate a loss of concentration or fatigue.
3.2 DESCRIPTION OF COMPONENTS
3.2.1 CONVOLUTIONAL NEURAL NETWORK (CNN)
Using Convolutional Neural Networks (CNNs) for an anti-sleep alarm system for drivers
involves a sophisticated approach to detect drowsiness based on visual and sensor data.
CNNs are particularly well-suited for this application due to their powerful ability to analyze
image data, making them ideal for processing video streams from in-car cameras to monitor
the driver’s face and eye movements.
The system architecture can be broken down into several stages. Firstly, the sensor module
includes a camera positioned to capture the driver’s facial features, focusing particularly on
the eyes. This camera continuously records video footage, which serves as the primary input
for the CNN.
In the data acquisition stage, video frames are extracted and preprocessed. Preprocessing
involves converting the frames into grayscale to reduce computational complexity,
normalizing pixel values, and possibly performing face and eye detection to focus the
analysis on relevant regions of interest.
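The grayscale conversion and normalization steps described above can be sketched directly on a pixel array. This is a minimal illustrative sketch, not the project's actual code: the luminance weights are a standard assumption, and in practice the resizing and face/eye detection steps would use a library such as OpenCV.

```python
import numpy as np

def preprocess_frame(frame: np.ndarray) -> np.ndarray:
    """Convert one RGB video frame to a normalized grayscale image in [0, 1]."""
    # Standard luminance weights (an assumed choice); cv2.cvtColor with
    # COLOR_BGR2GRAY would do the equivalent in a real pipeline.
    gray = frame @ np.array([0.299, 0.587, 0.114])
    return gray / 255.0   # normalize pixel values to reduce numeric range

# Dummy 480x640 frame standing in for one frame of the camera stream.
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
processed = preprocess_frame(frame)
print(processed.shape, float(processed.min()), float(processed.max()))
```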
The preprocessed frames are then fed into the CNN. The CNN typically consists of several
convolutional layers, which apply various filters to the input images to detect features such as
edges, corners, and textures. These layers are followed by pooling layers that reduce the
spatial dimensions of the data, preserving important features while reducing the
computational load. Multiple convolutional and pooling layers are stacked to create a deep
network capable of learning complex patterns associated with drowsiness.
After the convolutional and pooling layers, the data is passed through fully connected layers,
which interpret the high-level features extracted by the convolutional layers. These layers
output a prediction of the driver’s state—whether they are alert or drowsy. The network is
trained using a labeled dataset containing examples of both alert and drowsy drivers, allowing
it to learn the distinguishing features of each state.
To enhance the accuracy of the system, the CNN can be complemented with additional data
from other sensors, such as heart rate monitors and steering behavior sensors. This
multimodal approach allows the system to cross-reference visual cues with physiological and
behavioral data, improving the reliability of drowsiness detection.
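The convolution and pooling operations these layers perform can be illustrated with a toy numpy sketch. The kernel, image size, and pooling window below are illustrative assumptions, chosen only to show what a convolutional layer's filters and a pooling layer's down-sampling actually compute.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D convolution (cross-correlation, as in CNN layers)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(image, size=2):
    """Reduce spatial dimensions by keeping the max of each size x size block."""
    h, w = image.shape
    return image[:h - h % size, :w - w % size] \
        .reshape(h // size, size, w // size, size).max(axis=(1, 3))

frame = np.arange(36, dtype=float).reshape(6, 6)   # toy "frame"
edge_kernel = np.array([[1.0, -1.0]])              # horizontal edge detector
features = conv2d(frame, edge_kernel)              # feature map, shape (6, 5)
pooled = max_pool(features)                        # pooled map, shape (3, 2)
print(features.shape, pooled.shape)
```

Stacking many such filter and pooling stages, with learned kernels instead of this hand-written one, is what lets the network pick up the drowsiness-related patterns described above.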
3.2.2 DATA AUGMENTATION
In the context of anti-sleep alarms, data augmentation primarily involves manipulating video
frames of the driver’s face. Techniques such as rotating, translating, scaling, and flipping
images introduce variability in facial orientations and positions. Adjusting brightness and
contrast helps simulate different lighting conditions, while adding noise makes the model
more resilient to visual disturbances.
Synthetic data generation, such as using Generative Adversarial Networks (GANs), creates
realistic variations of drowsy and alert faces, especially useful when the original dataset is
limited. Temporal augmentation manipulates sequences of frames by dropping frames, time
warping, or shuffling frames within a small temporal window, which helps the model learn
temporal patterns associated with drowsiness.
Synthetic pose generation creates images of the driver’s face from different angles and poses,
simulating various head movements and orientations. Additionally, if the system uses other
sensors like heart rate monitors, augmenting data from these sources by adding noise or
shifting the timing of signals can improve robustness to sensor inaccuracies.
Overall, data augmentation enhances the training dataset’s diversity, helping the model to
better recognize drowsiness in various real-world conditions.
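Several of the image-level augmentations mentioned above (flipping, brightness adjustment, added noise, translation) can be sketched as plain array operations. This is a minimal sketch under the assumption of grayscale images normalized to [0, 1]; in practice a library such as Keras' ImageDataGenerator would apply such transforms on the fly during training.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image: np.ndarray) -> list:
    """Produce simple augmented variants of one grayscale face image in [0, 1]."""
    flipped = image[:, ::-1]                          # horizontal flip
    brighter = np.clip(image * 1.3, 0.0, 1.0)         # brightness adjustment
    noisy = np.clip(image + rng.normal(0, 0.05, image.shape), 0.0, 1.0)
    shifted = np.roll(image, shift=4, axis=1)         # crude translation
    return [flipped, brighter, noisy, shifted]

image = rng.random((64, 64))    # dummy stand-in for a cropped face image
variants = augment(image)
print(len(variants), all(v.shape == image.shape for v in variants))
```

Each variant keeps the original label (alert or drowsy), so one recorded frame yields several training examples with varied pose, lighting, and noise, which is exactly the diversity the section above argues for.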
3.3 OPERATING REQUIREMENTS
HARDWARE REQUIREMENTS
1. Arduino Microcontroller Board: The Arduino serves as the central processing unit of the system. It receives input from the eye blink sensor, processes it, and controls the alarm system.
2. Eye Blink Sensor: This sensor detects the driver's eye blinks. An IR sensor or a camera-based system can be used to monitor eye movements and blinks. When the sensor detects prolonged eye closure or frequent blinks, it indicates drowsiness.
3. DC Motor: The DC motor can be used to create a vibration mechanism. When triggered by the Arduino, it can vibrate the seat or the steering wheel to alert the driver.
4. Relay (5V): The relay acts as a switch controlled by the Arduino. It is used to turn the DC motor on or off based on the input from the eye blink sensor.
5. Switch: A switch can be used to manually activate or deactivate the anti-sleep alarm system, allowing the driver to control the system based on their preferences.
SOFTWARE REQUIREMENTS
CHAPTER 04
SYSTEM DESIGN
4 SYSTEM DESIGN
The goal is for UML to become a common language for creating models of object-oriented
computer software. In its current form UML is comprised of two major components: a Meta-
model and a notation. In the future, some form of method or process may also be added to; or
associated with, UML.
The UML represents a collection of best engineering practices that have proven successful in
the modelling of large and complex systems.
The UML is a very important part of developing object-oriented software and the software development process. The UML uses mostly graphical notations to express the design of software projects.
GOALS
The primary goals in the design of the UML are as follows:
i. Provide users a ready-to-use, expressive visual modelling language so that they can develop and exchange meaningful models.
ii. Provide extendibility and specialization mechanisms to extend the core concepts.
iii. Be independent of particular programming languages and development processes.
iv. Provide a formal basis for understanding the modelling language.
v. Encourage the growth of the OO tools market.
vi. Support higher-level development concepts such as collaborations, frameworks, patterns, and components.
TYPES OF UML DIAGRAM
Each UML diagram is designed to let developers and customers view a software system from
a different perspective and in varying degrees of abstraction. UML diagrams commonly
created in visual modelling tools include:
A. USECASE DIAGRAM
A use case diagram is a graphical depiction that showcases the dynamic relationships between
actors, which are external entities like users or other systems, and use cases, representing
specific functionalities or scenarios within a software system. Actors trigger and engage in use
cases, elucidating how users or external systems interact with the system's features. This
visual representation offers a high-level, yet comprehensive, view of the system's functionality
and its external interfaces. By presenting these interactions visually, use case diagrams
promote a shared understanding among stakeholders, fostering effective communication, and
aiding in the design and development of the software system.
B. CLASS DIAGRAM
Class diagrams are widely used to describe the types of objects in a system and their
relationships. Class diagrams model class structure and contents using design elements such
as classes, packages and objects. Class diagrams describe three different perspectives when
designing a system, conceptual, specification, and implementation. These perspectives
become evident as the diagram is created and help solidify the design.
One of the core purposes of class diagrams is to help conceptualize, specify, and implement
the design of a system. They serve as a bridge between abstract, high-level design concepts
and the tangible implementation details. Class diagrams are instrumental in conveying the
system's blueprint and ensuring that all parties involved have a shared understanding of its
architecture.
The essential components of a class within a class diagram consist of its "name," "attributes,"
and "operations." The "name" serves as a unique identifier for the class and often reflects the
real-world entity it models. "Attributes" define the properties or characteristics of objects
belonging to the class, encapsulating their state. "Operations" specify the methods or functions
that can be invoked on objects of the class, effectively defining their behaviors and
functionality.
C. SEQUENCE DIAGRAM
Sequence diagrams not only feature objects but also often include actors, which represent
external entities such as users or external systems interacting with the software. These actors,
typically portrayed as stick figures, are essential for illustrating how the system responds to
external stimuli and user actions.
Fig 4.3: Sequence diagram
D. ACTIVITY DIAGRAM
Activity diagrams describe how activities are coordinated to provide a service, which can be modelled at different levels of abstraction. Typically, an event needs to be achieved by some operations, particularly where an operation is intended to achieve a number of different things that require coordination. They also show how the events in a single use case relate to one another, in particular in use cases where activities may overlap and require coordination.
Fig 4.4: Activity diagram
E. COLLABORATION DIAGRAM
These diagrams are invaluable for visualizing the dynamic behavior of a system, shedding
light on the intricate sequence of messages exchanged between objects during runtime. By
emphasizing communication and interaction patterns, collaboration diagrams greatly aid
stakeholders in gaining a comprehensive understanding of a system's runtime behavior,
facilitating effective system design, analysis, and communication among development teams
and project stakeholders.
Fig 4.5: Collaboration Diagram
F. COMPONENT DIAGRAM
By graphically depicting how these elements collaborate and communicate to fulfill specific
functionalities, the component diagram offers valuable insights for design and architectural
discussions. It plays a pivotal role in identifying dependencies, both logical and physical,
between different parts of the system. This understanding of the system's overall structure aids
in effective communication among development teams and project stakeholders. Additionally,
it serves as a crucial tool for software architects and designers in creating robust and
well-organized systems.
G. DEPLOYMENT DIAGRAM
The deployment diagram visualizes the physical hardware on which the software will be
deployed. It portrays the static deployment view of a system. It involves the nodes and their
relationships.
It ascertains how software is deployed on the hardware. It maps the software architecture
created in design to the physical system architecture, where the software will be executed as a
node. Since it involves many nodes, the relationship is shown by utilizing communication
paths.
CHAPTER 05
IMPLEMENTATION
5 IMPLEMENTATION
5.1 SAMPLE CODE
5.1.1 CNN Model Creation

from tensorflow.keras.applications import VGG16
from tensorflow.keras.preprocessing.image import ImageDataGenerator

validation_datagen = ImageDataGenerator(rescale=1.0 / 255)
validation_generator = validation_datagen.flow_from_directory(
    validation_dir, target_size=(224, 224), batch_size=30,
    class_mode='categorical')

# Load pre-trained VGG16 model and remove the top layers
base_model = VGG16(weights='imagenet', include_top=False,
                   input_shape=(224, 224, 3))
5.1.2 Backend (Flask)

from flask import Flask, request, jsonify, render_template
import os
import numpy as np
import cv2
from tensorflow import keras

app = Flask(__name__, static_folder='static')

current_dir = os.path.dirname(os.path.abspath(__file__))
h5_file_path = os.path.join(current_dir, "vgg16_model.h5")
model = keras.models.load_model(h5_file_path)

@app.route('/', methods=['GET', 'POST'])
def index():
    if request.method == 'POST' and 'file' in request.files:
        uploaded_file = request.files['file']
        np_arr = np.frombuffer(uploaded_file.read(), np.uint8)
        image = cv2.imdecode(np_arr, cv2.IMREAD_COLOR)
        image = cv2.resize(image, (224, 224))  # adjust the size to match the model's input
        image = image / 255.0                  # normalize the image pixel values
        predictions = model.predict(np.expand_dims(image, axis=0))
        predicted_class = int(np.argmax(predictions))
        res = {"result": f"Predicted class: {predicted_class}"}
        return jsonify(res)
    return render_template("index.html")

@app.route('/close_modal', methods=['POST'])
def close_modal():
    return ''

if __name__ == '__main__':
    app.run(debug=True, host="0.0.0.0", port=5000)
CHAPTER 06
OUTPUT SCREENS
6 OUTPUT SCREENS
The output of the automated quality certification of remote sensing satellite images is a website where the user can upload remote sensing satellite images and the CNN model predicts whether the image contains noisy data or not.
HOME PAGE
The image below shows the user interface of the website for detecting noisy data in an image. The website provides an option to upload an image, and upon clicking the “Detect” button the result is displayed.
Fig 6.1: Home page
UPLOADING IMAGE
By clicking on “Choose File”, a file manager window appears from which the user can upload a remote sensing satellite image. Upon selecting the image, the file name and a preview of the image are displayed on the home page.
Fig 6.2: Selecting Image
In the image below, an image with ‘Stripe Noise’ is selected for detection.
Fig 6.3: Stripe Noise Image Uploaded
In the image below, an image with ‘Salt and Pepper Noise’ is selected for detection.
Fig 6.4: Salt and Pepper Noise Image Uploaded
In the image below, an image with ‘Data Loss Noise’ is selected for detection.
Fig 6.5: Data Loss Noise Image Uploaded
GENERATING RESULT
By clicking the “Detect” button, the result is generated and displayed on the screen. If noise is detected, the name of the noise type is displayed along with its accuracy.
In the image below, a satellite image with stripe noise is uploaded and the result “Stripe Noise Detected with Accuracy: 99%” is displayed.
Fig 6.6: Stripe Noise detected
In the image below, a satellite image with no noise is uploaded and the result “No Noise Detected with Accuracy: 99%” is displayed.
Fig 6.7: No Noise detected
CHAPTER 07
TESTING AND DEBUGGING
The ultimate aim of testing is to ensure that the software system aligns perfectly with its
predefined requirements and user expectations. This alignment is critical in preventing
catastrophic failures or any form of unacceptable behaviour in real-world scenarios. To cater
to the diverse needs of software quality assurance and validation, various types of testing are
employed, providing a multifaceted approach to uncovering issues.
In essence, testing acts as a crucial safeguard against potential software shortcomings and
deficiencies that could impact the user experience or even the overall integrity of the software
system. It helps developers identify and rectify problems, ensuring that the final product
meets the highest standards of quality and reliability.
Unit testing involves the design of test cases that validate that the internal program logic is functioning properly and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application and is done after the completion of an individual unit, before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at the component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.
Integration testing is a crucial phase in software testing that assesses the interactions and
interoperability of various software components when integrated into a unified system. Its
primary objective is to ensure that these components, which may have been individually
tested and validated (in unit testing), function cohesively as a whole. Integration tests
examine how different parts of the software communicate, share data, and collaborate to
deliver intended functionality. This testing approach identifies issues such as data flow
problems, communication errors, and inconsistencies in the software's behaviour during
component integration. It helps uncover defects that might not be apparent in isolation but
can surface when components interact, potentially causing system failures or unexpected
behaviour. Integration testing can take several forms, including top-down, bottom-up, and
incremental approaches, each focusing on different aspects of component integration.
Ultimately, successful integration testing ensures that the software system operates
seamlessly and meets its functional requirements when all components are combined.
Functional testing is a vital phase in software testing that evaluates a software application's
functionality to ensure that it operates according to its specified requirements and design.
This testing approach focuses on verifying that the software's features and functions perform
as intended, meeting user expectations and business needs. During functional testing, testers
create test scenarios and inputs to assess different aspects of the application, such as user
interfaces, APIs, databases, and more. The goal is to validate that the software behaves
correctly in response to various inputs and conditions, identifying deviations from expected
behavior, including defects, inconsistencies, or missing features. Functional testing
encompasses various techniques, including smoke testing, sanity testing, regression testing,
and user acceptance testing, each addressing different aspects of functionality. Successful
functional testing ensures that the software functions reliably, delivers accurate results, and
aligns with its specified requirements, providing confidence in its overall quality and
reliability.
System testing is a crucial phase in the software testing process that evaluates the behavior of
an entire software system as a cohesive unit. It assesses whether the fully integrated software,
consisting of various components and modules, meets its specified requirements and
functions as expected in a real-world environment. Unlike unit testing or integration testing,
which focus on individual components or their interactions, system testing examines the
complete system's performance, functionality, and compliance with design and user
requirements. It encompasses various testing types, such as functional, performance, security,
and usability testing, to ensure that the software operates reliably and without critical issues.
System testing aims to uncover defects, inconsistencies, and potential failures that may
emerge when different components interact, providing stakeholders with confidence in the
software's overall quality and readiness for deployment.
White box testing, also known as clear box testing or structural testing, is a software testing
method that examines the internal structure, code, and logic of an application. In white box
testing, the tester has knowledge of the application's source code, algorithms, and design.
This enables them to create test cases based on the software's internal workings, including
decision branches, loops, and data flows. The primary objective of white box testing is to
ensure that all code paths are tested for correctness, identifying logic errors, coding mistakes,
and vulnerabilities. It complements other testing methods like black box testing, which
focuses on external behavior. White box testing is valuable for improving code quality,
security, and overall software reliability.
Black box testing is testing the software without any knowledge of the inner workings, structure, or language of the module being tested. Black box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is testing in which the software under test is treated as a black box: one cannot “see” into it. The test provides inputs and responds to outputs without considering how the software works.
• Verify that the file entries are of the correct format
Software integration testing is the incremental integration testing of two or more integrated software components on a single platform to produce failures caused by interface defects. The task of the integration test is to check that components or software applications, e.g., components in a software system or, one step up, software applications at the company level, interact without error.
Test Results: Most of the test cases mentioned above passed successfully. A few defects were encountered.
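The integration testing described above can be sketched as follows: two small components (a hypothetical image loader and predictor, not the project's real functions) are wired together, and the test exercises the interface between them rather than each unit in isolation:

```python
# Integration-style sketch: the test feeds real loader output into the
# predictor, so it fails if the interface between the two components
# does not match. Both functions are illustrative stand-ins.

def load_image(filename):
    # Stand-in for reading pixel data from disk.
    return {"name": filename, "pixels": [[0, 1], [1, 0]]}

def predict(image):
    # Stand-in for the CNN; expects the dict produced by load_image.
    return "noisy" if any(1 in row for row in image["pixels"]) else "clean"

# The integration test checks the two components working together.
assert predict(load_image("scene.png")) == "noisy"
```

A unit test of `predict` alone could pass while this integration test fails, which is precisely the class of interface defect integration testing is meant to surface.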
User Acceptance Testing is a critical phase of any project and requires significant
participation by the end user. It also ensures that the system meets the functional
requirements.
Test Results: Most of the test cases mentioned above passed successfully. A few defects were encountered.
TC-03
  Description: Upload an image with salt & pepper noise data.
  Steps: Select a file for upload and click "Detect".
  Expected Result: "Salt and Pepper Noise Detected" with accuracy.
  Actual Result: "Salt and Pepper Noise Detected" with corresponding accuracy.
  Status: Pass

TC-04
  Description: Upload an image with data loss noise data.
  Steps: Select a file for upload and click "Detect".
  Expected Result: "Data Loss Noise Detected" with accuracy.
  Actual Result: "Data Loss Noise Detected" with corresponding accuracy.
  Status: Pass

TC-05
  Description: Upload an image with a combination of multiple noise types.
  Steps: Select a file for upload and click "Detect".
  Expected Result: The noise data with high accuracy is displayed with accuracy.
  Actual Result: The noise data with high accuracy is displayed with corresponding accuracy.
  Status: Pass

TC-06
  Description: Verify response to an invalid file format upload.
  Steps: Upload a file with an invalid format (e.g., .txt) and click "Detect".
  Expected Result: "Invalid File Format"
  Actual Result: No output.
  Status: Fail

TC-07
  Description: Verify response to a normal image upload (not a remote sensing satellite image).
  Steps: Upload a normal image (e.g., .jpg or .png) and click "Detect".
  Expected Result: "Not a remote sensing satellite image."
  Actual Result: Treats it as a satellite image and gives a predicted result.
  Status: Fail

TC-08
  Description: Verify response to a high-resolution image upload.
  Steps: Upload a high-resolution image (>4000x4000 pixels) and click "Detect".
  Expected Result: Predicts whether the image contains noisy data, with accuracy.
  Actual Result: Predicts whether the image contains noisy data, with corresponding accuracy.
  Status: Pass

TC-09
  Description: Verify response to an excessively large image file upload.
  Steps: Upload an image file exceeding the maximum size limit (>50 MB) and click "Detect".
  Expected Result: Predicts whether the image contains noisy data, with accuracy.
  Actual Result: Predicts whether the image contains noisy data, with corresponding accuracy.
  Status: Pass

TC-10
  Description: Verify response to a grayscale image upload (only applicable for salt & pepper noise data).
  Steps: Upload a grayscale (black-and-white) image and click "Detect".
  Expected Result: Predicts whether the image contains noisy data, with accuracy.
  Actual Result: Predicts whether the image contains noisy data, with corresponding accuracy.
  Status: Pass
Table 7.1: Test cases for system
CHAPTER 08
CONCLUSION
8 CONCLUSION
The project successfully automated the quality certification process of remote sensing satellite images through the implementation of a Convolutional Neural Network (CNN) model utilizing the VGG16 architecture. This was achieved by creating a Flask-based web application that enables users to upload remote sensing satellite images and receive predictions regarding the presence of various types of noise within the images.

Key aspects of the project included the development of the CNN model, which was trained with data augmentation techniques to bolster its robustness and performance. Additionally, the integration of the trained CNN model into the Flask backend facilitated seamless image processing and prediction handling, and a user-friendly web interface was crafted to support image uploads and display prediction outcomes. Rigorous testing protocols were enforced throughout the project lifecycle to ensure the system's reliability and accuracy across diverse scenarios, encompassing valid and invalid file uploads as well as different types of noise detection.

The project showcased the potential benefits of automating quality certification processes for remote sensing satellite images, including enhanced efficiency and consistency in image analysis. Future endeavours may involve expanding the model's capabilities to detect additional types of noise and integrating advanced image processing techniques to further refine prediction accuracy. Overall, the project represents a significant stride in leveraging machine learning technologies to streamline quality assessment tasks in remote sensing applications, thus contributing to advancements in satellite imagery analysis and interpretation.
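As an illustration of the data augmentation idea mentioned above, the sketch below applies two simple transformations (horizontal flip and 90-degree rotation) to a toy image represented as a list of rows. A real training pipeline would use a framework utility such as Keras' ImageDataGenerator rather than hand-rolled functions:

```python
# Minimal augmentation sketch on a toy 2x2 "image" (list of rows).
# Illustrative only; not the project's actual augmentation code.

def horizontal_flip(image):
    """Mirror each row left-to-right."""
    return [row[::-1] for row in image]

def rotate_90(image):
    """Rotate the image 90 degrees clockwise."""
    return [list(col) for col in zip(*image[::-1])]

img = [[1, 2],
       [3, 4]]

assert horizontal_flip(img) == [[2, 1], [4, 3]]
assert rotate_90(img) == [[3, 1], [4, 2]]
```

Each transformation yields a new training sample with the same noise label, which is what makes the model more robust to orientation and mirroring in the input imagery.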
CHAPTER 09
FUTURE ENHANCEMENTS
9 FUTURE ENHANCEMENTS
In considering future enhancements for the project, several avenues can be explored to
augment its capabilities and address emerging needs in the field of automated quality
certification for remote sensing satellite images. Firstly, the implementation of batch
processing functionality would greatly enhance the scalability and efficiency of the system.
Allowing users to upload and process multiple images simultaneously would facilitate the
handling of large datasets, thereby streamlining analysis workflows and accommodating users
with diverse requirements.
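A minimal sketch of the batch-processing enhancement, assuming a hypothetical `process_one` stand-in for the project's single-image prediction step (load image, run the CNN, return a label):

```python
# Batch-processing sketch: run the single-image pipeline over a list of
# filenames in one pass. process_one is a hypothetical placeholder for
# "load image -> run CNN -> return label".

def process_one(filename):
    return f"{filename}: no noisy data detected"

def process_batch(filenames):
    """Apply the single-image pipeline to every file, collecting results."""
    return [process_one(name) for name in filenames]

results = process_batch(["a.png", "b.png"])
assert len(results) == 2
assert results[0].startswith("a.png")
```

In the web application this would correspond to accepting multiple files in one upload request and returning one prediction per file, which is what makes large-dataset workflows practical.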
Secondly, continuous refinement of the CNN model is essential for improving noise detection
accuracy. This could involve fine-tuning the pre-trained VGG16 model or experimenting with
alternative CNN architectures. Adjusting hyperparameters, exploring different optimization
algorithms, and incorporating transfer learning from related domains are strategies that could
be pursued to enhance the model's performance.
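The hyperparameter-adjustment idea can be sketched as a simple grid search. Here `evaluate` is a toy stand-in for "train the CNN with this configuration and return validation accuracy"; the learning rates, batch sizes, and scoring rule are illustrative assumptions:

```python
# Grid-search sketch over two hyperparameters. evaluate() is a toy
# stand-in for a full train-and-validate run of the CNN.
import itertools

learning_rates = [1e-2, 1e-3, 1e-4]
batch_sizes = [16, 32]

def evaluate(lr, batch):
    # Toy scoring rule that peaks at lr=1e-3, batch=32; a real run
    # would train the model and return measured validation accuracy.
    return 1.0 - abs(lr - 1e-3) * 100 - abs(batch - 32) / 1000

best = max(itertools.product(learning_rates, batch_sizes),
           key=lambda cfg: evaluate(*cfg))
assert best == (1e-3, 32)
```

The same loop structure applies unchanged when `evaluate` is replaced by an actual training run, or when further axes (optimizer choice, augmentation strength) are added to the product.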
Expanding the scope of noise detection to encompass a broader range of noise types
commonly encountered in remote sensing satellite images is another avenue for enhancement.
By incorporating the ability to detect rare or less prevalent noise artifacts, the system can
provide more comprehensive and nuanced analysis, thereby enhancing its utility in real-world
scenarios.
Deploying the web application and CNN model on cloud infrastructure platforms would
provide scalability, reliability, and accessibility to users worldwide. Leveraging cloud-based
resources would enable seamless deployment, maintenance, and scalability, thereby ensuring
optimal performance and availability of the system.
Exploring methods for real-time processing of satellite images as they are captured by remote
sensing satellites would enable timely analysis and response to evolving situations on the
ground. Integrating the system with satellite data feeds and leveraging edge computing
technologies would facilitate real-time analysis, thereby enhancing the system's utility for
applications requiring rapid decision-making.
Finally, introducing collaborative features that enable users to share datasets, collaborate on
analysis tasks, and contribute to a collective knowledge base would foster collaboration
among researchers, analysts, and domain experts in the field of remote sensing. By
facilitating knowledge sharing and collaboration, the system can leverage collective expertise
to tackle complex challenges and drive innovation in the field.
CHAPTER 10
REFERENCES
10 REFERENCES
• Algazi, V.R.; Ford, G.E. Radiometric equalization of non-periodic striping in satellite
data. Comput. Graph. Image Process 1981, 16, 287–295.
• Ahern, F.J.; Brown, R.J.; Cihlar, J.; Gauthier, R.; Murphy, J.; Neville, R.A.; Teillet,
P.M. Review article: Radiometric correction of visible and infrared remote sensing
data at the Canada centre for remote sensing. Int. J. Remote Sens. 1987, 8, 1349–
1376.
• Bernstein, R.; Lotspiech, J.B. LANDSAT-4 Radiometric and Geometric Correction
and Image Enhancement Results. 1984; Volume 1984, pp. 108–115. Available online:
https://ntrs.nasa.gov/citations/19840022301 (accessed on 3 September 2021).
• Chen, J.S.; Shao, Y.; Zhu, B.Q. Destriping CMODIS Based on FIR Method. J.
Remote Sens. 2004, 8, 233–237.
• Xiu, J.H.; Zhai, L.P.; Liu, H. Method of removing striping noise in CCD image.
Dianzi Qijian/J. Electron Devices 2005, 28, 719–721.
• Wang, R.; Zeng, C.; Jiang, W.; Li, P. Terra MODIS band 5th stripe noise detection and
correction using MAP-based algorithm. Hongwai yu Jiguang Gongcheng/Infrared
Laser Eng. 2013, 42, 273–277. Available online:
https://ieeexplore.ieee.org/abstract/document/5964181/ (accessed on 3 September 2021).
• Qu, Y.; Zhang, X.; Wang, Q.; Li, C. Extremely sparse stripe noise removal from
nonremote-sensing images by straight line detection and neighborhood grayscale
weighted replacement. IEEE Access 2018, 6, 76924–76934.
• Sun, Y.-J.; Huang, T.-Z.; Ma, T.-H.; Chen, Y. Remote Sensing Image Stripe Detecting
and Destriping Using the Joint Sparsity Constraint with Iterative Support Detection.
Remote Sens. 2019, 11, 608.
• Wang, Q.; Ma, J.; Yu, S.; Tan, L. Noise detection and image denoising based on
fractional calculus. Chaos Solitons Fractals 2020, 131, 109463.
• Hao, Z. Deep learning review and discussion of its future development. MATEC Web
Conf. 2019, 277, 02035.