
A PROJECT REPORT ON

GLAUCOMA CLASSIFICATION USING RETINAL FUNDUS IMAGES
Mini project submitted in partial fulfillment of the requirements for the
award of the degree of

BACHELOR OF TECHNOLOGY
IN
INFORMATION TECHNOLOGY
(2020-2024)
BY
M. ARAVIND NAIK 20241A12F6
M. RAJ KUMAR 20241A12F0
B. RAJU 20241A12C2

Under the Esteemed Guidance of
G. VIJENDAR REDDY
Associate Professor
Dept. of IT

DEPARTMENT OF INFORMATION TECHNOLOGY


GOKARAJU RANGARAJU INSTITUTE OF ENGINEERING AND
TECHNOLOGY
(AUTONOMOUS) HYDERABAD

CERTIFICATE
This is to certify that this is a bonafide record of the Mini Project work entitled "GLAUCOMA CLASSIFICATION USING RETINAL FUNDUS IMAGES" done by B. Raju (20241A12C2), M. Raj Kumar (20241A12F0), and M. Aravind Naik (20241A12F6) of B.Tech (IT) in the Department of Information Technology, Gokaraju Rangaraju Institute of Engineering and Technology, during the period 2020-2024, in partial fulfillment of the requirements for the award of the degree of BACHELOR OF TECHNOLOGY IN INFORMATION TECHNOLOGY from GRIET, Hyderabad.

G. Vijendar Reddy          Dr. N. V. Ganapathi Raju
Associate Professor        Head of the Department
(Internal Project Guide)   (Project External)

ACKNOWLEDGEMENT
We take immense pleasure in expressing gratitude to our internal guide, G. Vijendar Reddy, Associate Professor, GRIET. We express our sincere thanks for the encouragement, suggestions and support, which provided the impetus and paved the way for the successful completion of the project work.
We wish to express our gratitude to Dr. N. V. Ganapathi Raju, HOD of the IT Dept., and our project coordinators, Mr. P. K. Abhilash and B. Kanaka Durga, for their constant support during the project.
We express our sincere thanks to Dr. Jandhyala N Murthy, Director, GRIET, and Dr. J. Praveen, Principal, GRIET, for providing us a conducive environment for carrying through our academic schedules and project with ease.
We also take this opportunity to convey our sincere thanks to the teaching and non-teaching staff of GRIET, Hyderabad.

Name: M. Raj Kumar
Email: rajkumarmasuram018@gmail.com
Contact No: 8328210178

Name: M. Aravind Naik
Email: mudavatharavidnaik22@gmail.com
Contact No: 9392663736

Name: B. Raju
Email: balagudavalaraju27@gmail.com
Contact No: 7095310212
DECLARATION

This is to certify that the project entitled “GLAUCOMA CLASSIFICATION USING


RETINAL FUNDUS IMAGES” is a bonafide work done by us in partial fulfillment
of the requirements for the award of the degree BACHELOR OF TECHNOLOGY
IN INFORMATION TECHNOLOGY from Gokaraju Rangaraju Institute of
Engineering and Technology, Hyderabad.

We also declare that this project is a result of our own effort and has not been copied or
imitated from any source. Citations from any websites, books and paper publications
are mentioned in the Bibliography.

This work was not submitted earlier at any other University or Institute for the award of
any degree.

M.ARAVIND NAIK 20241A12F6

M. RAJ KUMAR 20241A12F0

B.RAJU 20241A12C2

TABLE OF CONTENTS
S No    Name    Page no
Certificates ii
Contents v
Abstract vii
1 INTRODUCTION 1
1.1 Introduction to project 1
1.2 Existing System 2
1.3 Proposed System 4
2 REQUIREMENT ENGINEERING 6
2.1 Software Requirements 6
2.2 Hardware Requirements 6
2.3 Functional Requirements 6
3 LITERATURE SURVEY 7
4 TECHNOLOGY 11
5 DESIGN REQUIREMENT ENGINEERING 24
5.1 UML Diagrams 24
5.2 Use case Diagram 24
5.3 Activity Diagram 25
5.4 Class Diagram 26
5.5 System Architecture 27
6 IMPLEMENTATION 28
6.1 Modules 28
6.2 Dataset 29
6.3 Sample Code 30
7 SOFTWARE TESTING 43

7.1 Unit Testing 45
7.2 Integration Testing 45
7.3 Validation Testing 45
7.4 Performance Evaluation 46
7.5 User Acceptance Testing 46
8 RESULTS 47
9 CONCLUSION AND FUTURE ENHANCEMENT 49
10 BIBLIOGRAPHY 51

LIST OF FIGURES
S No Figure Name Page no
1 Glaucoma Eye 2
2 Manual Glaucoma Examination 3
3 Optic Disc 9
4 Python 11
5 Machine Learning 14
6 Deep Learning 18
7 Logistic Regression 19
8 Use case diagram 24
9 Activity diagram 25
10 Class diagram 26
11 System Architecture 27
12 Sample data from dataset 29
13 Software testing 43
14 Final Result 47
15 Glaucoma 48
16 No Glaucoma 48

ABSTRACT
Glaucoma, a neurodegenerative eye disease, results from an increase in intraocular pressure within the eye. It is the second-leading cause of blindness worldwide, and if an early diagnosis is not made, it can lead to total blindness. There is therefore a critical need for a system that works well without extensive equipment or qualified medical professionals and that requires less time. This report provides a thorough examination of the main machine learning (ML) techniques employed in processing retinal images for the identification and diagnosis of glaucoma. ML has been demonstrated to be a crucial technique for the development of computer-assisted technology, and ML techniques can be used to construct predictive models for the early diagnosis of glaucoma. Our objective is to develop a machine learning algorithm that can accurately forecast the likelihood of developing glaucoma using patient data. Ophthalmologists have also conducted a significant amount of secondary research over the years. Such characteristics emphasise the importance of ML in analysing retinal images.
Domain(s): Machine learning

1. INTRODUCTION
1.1 Introduction to the project
Glaucoma, a progressive eye disease, is one of the leading causes of irreversible blindness
worldwide. It is characterized by the damage to the optic nerve, often associated with elevated
intraocular pressure (IOP), and can result in a gradual loss of peripheral vision and, eventually,
complete blindness if left untreated. Early detection and intervention are crucial for preventing
or delaying vision loss and preserving the quality of life for individuals at risk.
The Glaucoma Prediction Project aims to develop a predictive model that can assist in the early
detection and prediction of glaucoma. By leveraging machine learning techniques and analyzing
various patient-related factors, medical history, and diagnostic test results, the project aims to
identify individuals who are at higher risk of developing glaucoma. This knowledge can enable
healthcare professionals to intervene proactively, initiate timely treatment, and mitigate the
progression of the disease.
The primary objective of the Glaucoma Prediction Project is to build an accurate and reliable
predictive model that can effectively identify individuals at risk of developing glaucoma. By
combining demographic information, such as age, gender, ethnicity, and geographical location,
with patient-specific data, including medical history, family history of glaucoma, and ocular
health, the model aims to provide a comprehensive assessment of glaucoma risk. Additionally,
the model will utilize diagnostic test results, such as tonometry (IOP measurement), optic nerve
evaluation, and visual field tests, to enhance the prediction accuracy.
The early identification of individuals at a high risk of developing glaucoma holds immense
potential for improving patient outcomes. By implementing appropriate preventive measures,
such as lifestyle modifications, regular eye examinations, and timely treatment interventions,
healthcare professionals can effectively manage the disease, minimize its impact on vision, and
enhance the quality of life for affected individuals.
The Glaucoma Prediction Project will follow a comprehensive approach that encompasses data
collection, preprocessing, model development, evaluation, and deployment. By leveraging
advanced machine learning algorithms and data-driven insights, the project seeks to contribute
to the field of ophthalmology by providing a valuable tool for glaucoma risk assessment and
early intervention. Through the successful implementation of the Glaucoma Prediction Project,
healthcare professionals can be equipped with an accurate and reliable predictive model that

complements their clinical judgment. This project has the potential to significantly impact public
health by enabling proactive glaucoma management, reducing the burden of irreversible vision
loss, and improving the overall well-being of individuals at risk of glaucoma. By working
collaboratively with healthcare experts, leveraging cutting-edge technologies, and employing
rigorous evaluation methodologies, the Glaucoma Prediction Project aims to make significant
strides in the early detection and prediction of glaucoma, ultimately leading to improved patient
outcomes and a brighter future for those affected by this sight-threatening condition.

Figure 1. Glaucoma eye


1.2 Existing Systems
The existing system for glaucoma prediction using fundus images involves manual
interpretation and analysis of retinal fundus images by ophthalmologists. It typically follows
these steps:
Data Collection: Retinal fundus images are collected from patients using fundus cameras or
other imaging devices. These images capture the structure of the retina, including the optic nerve
head and blood vessels.

Preprocessing: The collected fundus images are preprocessed to enhance the quality and extract
relevant features. Preprocessing techniques may include image resizing, denoising, contrast
enhancement, and normalization.
Manual Feature Extraction: Ophthalmologists manually analyze the fundus images and
extract specific features related to glaucoma, such as the cup-to-disc ratio, optic disc size, retinal
nerve fiber layer thickness, and presence of abnormalities or lesions.
Classification: Based on the extracted features, ophthalmologists classify the fundus images
into different categories, such as "Glaucoma" and "No Glaucoma." This classification is based
on their expertise and knowledge of glaucoma characteristics.
Diagnosis and Treatment: After classification, ophthalmologists make a diagnosis and
recommend appropriate treatment options based on the presence or absence of glaucoma in the
fundus images.
Disadvantages of Existing System:
Subjectivity: Manual interpretation and feature extraction can introduce subjectivity and
variations among different ophthalmologists, leading to inconsistent results.
Time-Consuming: The manual process is time-consuming, especially when analyzing a large
number of fundus images, which can lead to delays in diagnosis and treatment.
Expertise Dependency: Accurate classification and diagnosis require expert knowledge and
experience, making it less accessible in areas with a shortage of ophthalmologists.

Figure 2. Manual Glaucoma Examination

1.3 Proposed System
The proposed system aims to overcome the limitations of the existing system by introducing an
automated and scalable approach for glaucoma prediction using fundus images with logistic
regression. Here are the key components of the proposed system:
Data Collection: Retinal fundus images are collected using fundus cameras or imaging devices,
similar to the existing system.
Preprocessing: The collected fundus images undergo preprocessing techniques to enhance
image quality and remove any noise or artifacts. This step may include resizing, denoising,
normalization, and contrast enhancement.
Feature Extraction: Automated feature extraction techniques are employed to extract relevant
features from the fundus images. These techniques can include image processing algorithms,
computer vision techniques, and deep learning-based feature extraction methods. The goal is to
capture important characteristics of the retina, optic disc, blood vessels, and other structures
associated with glaucoma.
Logistic Regression Model: A logistic regression model is trained using the extracted features
and corresponding labels (glaucoma or no glaucoma) from a labeled dataset. Logistic regression
is a supervised learning algorithm that models the relationship between the input features and
the binary target variable (glaucoma or no glaucoma). The model learns the patterns and
associations in the data to make predictions.
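A minimal sketch of this training step with scikit-learn is shown below. The feature matrix and labels are synthetic stand-ins for the features extracted from fundus images, so the feature names mentioned in the comments are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic feature matrix: each row is one eye; columns might stand for
# cup-to-disc ratio, optic disc area, vessel density, mean intensity.
rng = np.random.default_rng(42)
X = rng.random((200, 4))
# Synthetic labels: 1 = glaucoma, 0 = no glaucoma, driven here by the
# first feature as a stand-in for the cup-to-disc ratio.
y = (X[:, 0] > 0.6).astype(int)

model = LogisticRegression()
model.fit(X, y)                     # learn feature weights from the data

probs = model.predict_proba(X[:5])  # per-class probabilities
preds = model.predict(X[:5])        # hard 0/1 labels
```

`predict_proba` exposes the estimated probability of glaucoma, which lets a clinician choose a decision threshold rather than relying only on the hard 0/1 output.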
Model Evaluation and Validation: The trained logistic regression model is evaluated using
performance metrics such as accuracy, precision, recall, and F1 score. Cross-validation
techniques may be employed to ensure the robustness of the model and minimize overfitting.
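The evaluation step above might look like the following scikit-learn sketch, again on synthetic data; the split ratio and fold count are illustrative choices, not values from the project.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

rng = np.random.default_rng(0)
X = rng.random((300, 4))
y = (X[:, 0] + 0.1 * rng.standard_normal(300) > 0.5).astype(int)

# Hold out 30% of the data for testing
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
y_pred = model.predict(X_te)

metrics = {
    "accuracy": accuracy_score(y_te, y_pred),
    "precision": precision_score(y_te, y_pred),
    "recall": recall_score(y_te, y_pred),
    "f1": f1_score(y_te, y_pred),
}
# 5-fold cross-validation guards against overfitting to one split
cv_scores = cross_val_score(LogisticRegression(), X, y, cv=5)
```

Recall is particularly important in this setting, since a missed glaucoma case (false negative) is more costly than a false alarm.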
Deployment and Testing: The trained logistic regression model is deployed in a production
environment or integrated into a software application. It can accept new fundus images as input
and provide predictions on whether the images indicate the presence of glaucoma.

Advantages of Proposed System:


Automation: The proposed system automates the process of glaucoma prediction, reducing the
dependence on manual interpretation and analysis.
Objectivity: By employing standardized feature extraction and logistic regression modeling,

the system aims to reduce subjectivity and variations in predictions.
Speed and Efficiency: Automated feature extraction and logistic regression analysis enable
faster processing of fundus images, leading to quicker diagnosis and treatment decisions.
Scalability: The proposed system can handle a large volume of fundus images efficiently,
making it scalable to accommodate increasing patient data.
Accessibility: With the automation of glaucoma prediction, the proposed system can extend
access to reliable diagnostics in areas with limited ophthalmologist resources.

2. REQUIREMENT ENGINEERING
2.1 Software Requirements
• Operating System: Windows 10
• Technology: Python
• Software: Anaconda or Python IDLE 3.7

2.2 Hardware Requirements

• Operating System: Windows, Linux
• Processor: minimum Intel i3
• RAM: minimum 4 GB
• Hard Disk: minimum 250 GB

2.3 Functional Requirements

• Data collection
• Data preprocessing
• Training and testing
• Modeling
• Predicting

3. LITERATURE SURVEY

Advanced imaging techniques, including scanning laser polarimetry (SLP) and optical coherence tomography (OCT), are commonly utilised in the diagnosis of glaucoma and diabetic retinopathy (DR). These techniques demand specialist knowledge and are expensive. When glaucoma and DR are detected, factors like the cup-to-disc ratio and the ratio of the distance between the centre of the optic disc and the optic nerve are frequently employed in determining the damage to the optic nerve.
The optic disc was first separated from the fundus picture in order to conduct the trials and create the data sets for classifying glaucoma. The high-resolution fundus (HRF) image database was utilised for the preprocessing techniques. Two of the main causes of vision loss worldwide are diabetic retinopathy (DR), a common eye condition that affects the blood vessels in the retina, and glaucoma. While DR is a complication of diabetes brought on by high blood sugar levels harming the back of the eye, glaucoma is a common eye ailment in which the optic nerve that links the eye to the brain sustains injuries. A staggering number of retinal pictures must be analysed in order to obtain a precise and prompt diagnosis.
The use of completely parallel field programmable gate arrays (FPGAs) is suggested and shown
in this study as a means of easing the strain of real-time computing on conventional software
architectures. On an FPGA device, the experimental outcomes that were obtained by software
implementation were verified.

An ophthalmologist typically uses fundus images from indirect ophthalmoscopy with a traditional retinal camera or slit lamp to manually assess the structures of the optic nerve and retinal nerve fibre layer (RNFL) in order to make the conventional basic diagnosis of glaucoma. In high-income nations, optical coherence tomography (OCT) of the optic nerve and RNFL is frequently incorporated. The assessments of these studies are displayed as graphs, allowing for comparisons with age-matched normative data.

The transfer learning-capable CNN models used in this study's DL techniques had previously been trained on the ImageNet dataset. The seven CNN models chosen for this investigation are listed in Table 1. These classifiers were chosen because the Keras library's pattern recognition models for digital images are among the most extensively used.

In order to identify glaucoma utilising stereo images of the optic nerve head, a deep ensemble
network with a mechanism for attention was set up. It is composed of a convolutional neural
network and an attention-guided network. A collection of stereo glaucoma pictures was given
to the authors by the Tan Tock Seng Hospital in Singapore.

The authors built the generator and discriminator networks of the optic disc and cup-based cGAN using a U-Net architecture. All convolutional layers of the proposed U-Net use fewer filters, and additional filters are not added as the resolution is decreased. This model made use of the DRIONS-DB, DRISHTI-GS, and RIM-ONE-r3 databases. The pictures were pre-processed with contrast-limited adaptive histogram equalisation (CLAHE) and bounding boxes around the region of interest (ROI). Both the cup segmentation and the optic disc segmentation yielded successful results on the databases. The generator mapped observational input characteristics (retinal background colour) to the generated output (binary mask). The discriminator uses a loss function to train the algorithm towards precise image discrimination.

A total of 2787 retinal pictures were used from the five open datasets REFUGE, ACRIMA, ORIGA, RIM-ONE, and DRISHTI-GS1. The REFUGE dataset contains 1200 retinal images of Chinese patients taken with either a Zeiss Visucam or a Canon CR-2.

The two phases of our strategy are OD segmentation and glaucoma classification.

There are two processes in automated glaucoma detection using deep learning. In the first stage,
DeepLabv3+ identified and retrieved the OD from the entire image. In the second stage, the
segmented OD area was subjected to three deep CNN algorithms to differentiate between
glaucoma and regular vision.

1. DeepLabv3+ Semantic Segmentation for OD Segmentation

2. Using Deep CNNs to Classify normal and glaucoma retina images

To assist ophthalmologists in diagnosing glaucoma more promptly and economically, this study provides automated primary glaucoma testing based on quantitative evaluation of fundus images. The recommended approach consists of two main processing stages. Five alternative deep semantic segmentation algorithms have been utilised for experimentation with OD segmentation. The features recovered from the cropped OD area are then used as a source of information for training a classifier that can identify the presence of glaucoma in the experimental photos. Optic discs are found using the multiresolution-based standardised cross-correlation method. The detected point is used to initialise the active contour. Validation is provided on the Drishti-GS, MESSIDOR, and RIGA databases and a local database, which contain 101, 1200, 750, and 942 retinal fundus images respectively, 2993 images in total.

Figure 3. Optic Disc

The multiresolution-based standardised cross-correlation approach is used to identify optic discs, and the detected point is used to initialise the active contour. Datasets like Drishti-GS, MESSIDOR, and RIGA (Retinal fundus images for glaucoma analysis: the RIGA dataset) are used in the validations. The Drishti-GS collection contains 101 photos taken from people who identify as being of Indian origin, each with a resolution of 2896 × 1944 pixels. With 1200 photos at resolutions of 2304 × 1536, 2240 × 1488, and 1440 × 960 pixels, the MESSIDOR database is one of the most well-known fundus imaging databases. RIGA is a de-identified database made up of 460, 195, and 95 photos from three separate sources: MESSIDOR, Bin Rushed Eye Centre, and Magrabi Eye Centre. Six qualified ophthalmologists hand-annotated the photographs. The database includes annotations for the optic cup and OD.

4. TECHNOLOGY
4.1 ABOUT PYTHON
In the domain of glaucoma prediction, Python offers a range of technologies and techniques that
can be employed to develop effective models. Machine learning algorithms serve as powerful
tools for analyzing pertinent features extracted from eye images or patient data, enabling the
prediction of glaucoma likelihood. To categorise and detect glaucoma patients, techniques
including logistic regression, support vector machines (SVM), random forests, and deep learning models such as convolutional neural networks (CNNs) might be used. These
models can be built and trained using Python frameworks such as TensorFlow or PyTorch,
allowing for the integration of cutting-edge deep learning techniques into the prediction process.
To process and analyze the eye images, Python libraries like OpenCV and scikit-image come in
handy. These libraries offer a wide range of functionalities for tasks such as image segmentation,
feature extraction, and visualization. By leveraging these tools, important features like optic disc
size, cup-to-disc ratio, blood vessel characteristics, or other relevant parameters can be extracted
from the retinal images, facilitating the accurate prediction of glaucoma.
Before inputting the data into the predictive models, Python libraries like pandas and NumPy
provide comprehensive capabilities for data preprocessing, handling missing values,
normalization, and feature engineering. These steps are crucial for cleaning and organizing the
data, ensuring that the models receive accurate and properly formatted inputs.
Additionally, data visualization plays a significant role in understanding the dataset and
uncovering patterns. Python libraries such as Matplotlib and Seaborn enable the generation of
insightful plots, charts, and graphs, aiding researchers and developers in comprehending the data
and identifying potential relationships or trends that could be indicative of glaucoma.

Figure 4. Python

4.2 APPLICATIONS OF PYTHON
Being able to use Python as a general-purpose language for a range of tasks provides several benefits. Just a handful of the sectors where Python is most frequently used are listed below:
➢ Data science
➢ Web Development
➢ Mathematical and scientific computing
➢ Mapping and geography (GIS software)
➢ Basic game development
➢ Computer graphics

4.3 PYTHON IS FREQUENTLY USED IN DATA SCIENCE


Python's ecosystem grows every year and its statistical analysis capabilities keep improving. For data processing, it strikes the ideal balance between complexity and scale, and it puts readability and effectiveness first. Python is used both by programmers who wish to undertake data analysis or apply statistical approaches and by developers working in data science.
For applications like data visualisation, artificial intelligence (AI), natural language processing, complex data analysis, and more, Python offers a wide range of scientific tools. These features make Python an excellent tool for scientific computation and a capable alternative to pricey applications like MATLAB. The most popular data science libraries and tools are listed below:
4.3.1 PANDAS
The term "panel data" used in economics to describe multidimensional structured data sets is
where the word "Pandas" originates. It is a library for manipulating and analysing data. The
library includes data structures and procedures that can be used to manage numerical tables and
time series. Another name for it is "Python Data Analysis Library".
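As a brief illustration in this project's context, the hypothetical feature table below shows typical pandas operations: filling a missing value and counting class labels. The column names are invented for the example.

```python
import pandas as pd

# Hypothetical per-image feature table, as might be exported after
# feature extraction from fundus images
df = pd.DataFrame({
    "image_id": ["img_001", "img_002", "img_003"],
    "cup_to_disc_ratio": [0.72, 0.35, None],
    "label": ["glaucoma", "normal", "normal"],
})

# Impute the missing ratio with the column mean
df["cup_to_disc_ratio"] = df["cup_to_disc_ratio"].fillna(df["cup_to_disc_ratio"].mean())

# Class balance of the dataset
counts = df["label"].value_counts()
```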
4.3.2 NUMPY
NumPy is a Python package for manipulating arrays. It provides a high-performance multidimensional array object along with tools for working with these arrays. This fundamental Python module supports massive multi-dimensional arrays and matrices, and includes an extensive collection of advanced mathematical functions that can be applied to them.
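A few of these array operations, applied to a small array standing in for a grayscale image, are illustrated below:

```python
import numpy as np

# A tiny grayscale "image" as a 2-D array
img = np.arange(12, dtype=np.float64).reshape(3, 4)

scaled = img / img.max()      # element-wise normalization to [0, 1]
col_means = img.mean(axis=0)  # per-column statistics
flat = img.ravel()            # flatten for use as a feature vector
```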
4.3.3 MATPLOTLIB
High-quality visuals are produced for many interactive and print formats using the Python 2D
plotting tool Matplotlib. You can make plots, histograms, power spectra, bar charts, error charts,
scatterplots, and more using Matplotlib.
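For example, a histogram of a synthetic feature such as the cup-to-disc ratio can be produced as follows; the Agg backend is used so the figure renders without a display, and the file name is arbitrary:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(1)
cdr = rng.normal(0.5, 0.1, 100)  # hypothetical cup-to-disc ratios

fig, ax = plt.subplots()
ax.hist(cdr, bins=20)
ax.set_xlabel("cup-to-disc ratio")
ax.set_ylabel("count")
fig.savefig("cdr_hist.png")
```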
4.3.4 SCIKIT-LEARN
Scikit-learn is a popular machine learning library for Python. It provides a wide range of tools
and algorithms for tasks such as classification, regression, clustering, and dimensionality
reduction. With its easy-to-use interface and efficient implementation, scikit-learn simplifies the
process of building and evaluating machine learning models. It also offers utilities for data
preprocessing, feature extraction, and model selection. Whether you're a beginner or an
experienced practitioner, scikit-learn is a valuable resource for implementing various machine
learning techniques in Python.
4.3.5 Seaborn
Seaborn is a popular Python data visualization library that is built on top of Matplotlib. It
provides a high-level interface for creating attractive and informative statistical graphics. With
its easy-to-use functions and aesthetically pleasing visualizations, Seaborn is a valuable tool for
data exploration and presentation.
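As a small illustration, a Seaborn count plot of invented class labels can visualize dataset balance between glaucoma and normal images:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import seaborn as sns
import pandas as pd

# Invented labels standing in for the dataset's annotations
df = pd.DataFrame({"label": ["glaucoma", "normal", "normal", "glaucoma", "normal"]})

ax = sns.countplot(data=df, x="label")  # bar per class
ax.figure.savefig("class_balance.png")
```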
4.3.6 OS
The os module in Python provides a way to interact with the operating system, allowing you to
perform various tasks related to file and directory manipulation, process management, and
environment variables. It serves as a bridge between your Python code and the underlying
operating system functionalities.
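For instance, the os module can enumerate a class-per-folder image dataset; the directory layout and file names below are hypothetical, not the project's actual dataset:

```python
import os

# Hypothetical dataset layout: one sub-folder per class
base = "dataset"
for cls in ("glaucoma", "no_glaucoma"):
    os.makedirs(os.path.join(base, cls), exist_ok=True)

# Drop in one placeholder image so the walk below finds something
open(os.path.join(base, "glaucoma", "sample.png"), "w").close()

# Collect (path, label) pairs for every image file found
samples = []
for cls in sorted(os.listdir(base)):
    cls_dir = os.path.join(base, cls)
    for name in sorted(os.listdir(cls_dir)):
        if name.lower().endswith((".png", ".jpg", ".jpeg")):
            samples.append((os.path.join(cls_dir, name), cls))
```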
4.3.7 CV2
In Python, the cv2 library is the binding for OpenCV (Open-Source Computer Vision Library).
It provides Python access to the OpenCV functions and modules, allowing you to utilize the
features and functionalities of OpenCV in your Python programs.

4.3.8 CSV
The csv module in Python provides functionality for working with CSV (Comma-Separated
Values) files. CSV files are a common way to store tabular data, where each row represents a
record and the values within each row are separated by commas. The csv module offers functions
that allow you to read data from CSV files, write data to CSV files, and manipulate the data
within them. It supports custom delimiters and formatting options, making it flexible for
different CSV file formats. Additionally, the module provides options for handling headers,
iterating over rows, and performing various operations on the data. With its simplicity and ease
of use, the csv module is a valuable tool for working with CSV files in Python, especially in
data analysis, data manipulation, and data import/export tasks.
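A short sketch of writing and reading such a feature file with the csv module follows; the column names and file name are invented for illustration:

```python
import csv

# Write extracted features to a CSV file
rows = [
    {"image_id": "img_001", "cdr": 0.72, "label": "glaucoma"},
    {"image_id": "img_002", "cdr": 0.35, "label": "normal"},
]
with open("features.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["image_id", "cdr", "label"])
    writer.writeheader()
    writer.writerows(rows)

# Read the records back (DictReader yields all values as strings)
with open("features.csv", newline="") as f:
    records = list(csv.DictReader(f))
```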
4.4 Machine Learning

Figure 5. Machine Learning


Need for Machine Learning
Machine learning refers to the field of study and practice that focuses on developing algorithms
and models that allow computers to learn and make predictions or decisions without being
explicitly programmed. It is a subset of artificial intelligence that enables systems to
automatically analyze and interpret data, extract meaningful patterns, and use them to make
accurate predictions or take actions.

The core concept of machine learning revolves around training models on historical data to learn
patterns and relationships. These models then generalize their learning to make predictions or
decisions on new, unseen data. The training process involves feeding the model with labeled
examples, allowing it to identify underlying patterns and correlations.
Machine learning is used in various domains and applications, including image and speech
recognition, natural language processing, recommendation systems, fraud detection,
autonomous vehicles, medical diagnoses, and financial market predictions. Its versatility and
ability to handle complex and large-scale data make it a powerful tool for solving real-world
problems.
Machine learning is necessary for several reasons:

1. Handling Complex Data: Modern applications generate vast amounts of data, including images, text, videos, and sensor readings. Machine learning algorithms can effectively process and extract valuable insights from such complex and unstructured data, enabling us to make informed decisions and derive meaningful patterns.

2. Automation and Efficiency: Machine learning automates repetitive and time-consuming tasks that would otherwise require manual effort. By learning from data, machine learning models can perform complex calculations, analyze patterns, and make predictions, resulting in increased efficiency and productivity.

3. Decision Making and Prediction: Machine learning enables systems to make data-driven
decisions and accurate predictions based on patterns observed in the data. This is particularly
valuable in domains where precise predictions are crucial, such as finance, healthcare,
marketing, and logistics.

4. Personalization and Recommendation Systems: Machine learning plays a vital role in personalized experiences and recommendation systems. By analyzing user behavior and preferences, machine learning models can provide tailored recommendations, customized advertisements, and personalized user experiences, leading to higher customer satisfaction and engagement.

5. Pattern Recognition and Anomaly Detection: Machine learning excels at identifying complex patterns and detecting anomalies in data. It is widely used in tasks such as fraud detection, image recognition, natural language processing, and cybersecurity, enabling quick and accurate identification of irregularities and potential threats.

6. Continuous Learning and Adaptability: Machine learning models can learn and improve
their performance over time as they are exposed to new data. This adaptability allows them to
handle evolving situations, adjust to changes in patterns, and maintain relevance in dynamic
environments.

Types of Machine Learning

There are several types of machine learning, each with its own characteristics and applications.
The main types of machine learning are:

1. Supervised Learning: In supervised learning, the algorithm is trained on labeled data, where
each example is associated with a known target or output. The algorithm learns to map inputs to
outputs, allowing it to make predictions or classify new, unseen data. Supervised learning is
widely used in tasks such as regression (predicting continuous values) and classification
(predicting discrete labels).

2. Unsupervised Learning: Unsupervised learning deals with unlabeled data, where the
algorithm aims to find patterns, structures, or relationships within the data without any
predefined output. This type of learning is used for tasks like clustering (grouping similar data
points) and dimensionality reduction (reducing the number of features while preserving
important information).

3. Semi-Supervised Learning: Semi-supervised learning is a combination of supervised and unsupervised learning. It uses a small amount of labeled data and a large amount of unlabeled data for training. This approach is useful when obtaining labeled data is expensive or time-consuming.

4. Reinforcement Learning: Reinforcement learning involves training an agent to make decisions in an interactive environment. The agent learns through trial and error by receiving feedback or rewards based on its actions. This type of learning is commonly used in robotics, game playing, and autonomous systems.

5. Deep Learning: Deep learning is a subset of machine learning that focuses on neural networks
with multiple layers. These networks can automatically learn hierarchical representations of
data, enabling them to extract intricate features and patterns. Deep learning has achieved
remarkable success in tasks such as image recognition, natural language processing, and speech
recognition.

Each type of machine learning has its advantages and disadvantages, and the choice of approach
depends on the problem at hand, the availability of labeled data, and the desired outcome. It is
common to use a combination of these approaches in real-world applications, depending on the
complexity of the task and the nature of the available data.

Advantages of Machine Learning:

1. Handling Complex Data: Machine learning algorithms can effectively handle and process
large-scale, complex datasets that are difficult for traditional methods to manage. They can
uncover patterns, trends, and relationships within the data, leading to valuable insights and
informed decision-making.

2. Automation and Efficiency: Machine learning automates repetitive tasks, reducing the need
for manual intervention. This improves efficiency and productivity, allowing humans to focus
on more critical and creative aspects of problem-solving.

3. Accurate Predictions and Decision-Making: Machine learning models can make accurate
predictions and decisions based on patterns learned from historical data. This enables businesses
to make data-driven decisions, optimize processes, and improve outcomes.

4. Personalization and Recommendation Systems: Machine learning powers personalized
experiences and recommendation systems. By analyzing user behavior and preferences,
machine learning algorithms can deliver customized content, products, and services, enhancing
customer satisfaction and engagement.

5. Scalability and Adaptability: Machine learning algorithms can scale to handle large datasets
and adapt to new data and evolving environments. They can continuously learn and improve
their performance, ensuring relevancy and effectiveness over time.

Disadvantages of Machine Learning:

1. Data Dependency: Machine learning models heavily rely on quality and representative
training data. If the data is biased, incomplete, or of poor quality, it can lead to biased or
inaccurate predictions. Data collection and preprocessing can be time-consuming and resource-
intensive.

2. Interpretability and Explainability: Some machine learning models, especially deep learning
models, can be difficult to interpret and explain. They operate as complex black boxes, making
it challenging to understand the reasoning behind their predictions and decisions.

3. Overfitting and Generalization: Machine learning models may sometimes overfit the training
data, meaning they become too specialized in capturing the specific examples and fail to
generalize well on new, unseen data. Balancing model complexity and avoiding overfitting is a
critical challenge in machine learning.

4. Computing Resources: Training complex machine learning models can require significant
computational resources, including processing power and memory. This can limit the
accessibility of machine learning to those with access to adequate computing infrastructure.

5. Ethical Considerations: Machine learning models can inadvertently reinforce biases present
in the training data, leading to biased outcomes or discriminatory decisions. Careful attention
must be given to ethical considerations, fairness, and accountability in the design and
deployment of machine learning systems.


Figure 6. Deep Learning

4.5 ALGORITHMS:

4.5.1 Logistic Regression:

Logistic regression is a popular machine learning approach for binary classification tasks such
as glaucoma prediction using fundus images. Fundus images give a view of the retina, optic
nerve, and blood vessels at the back of the eye, and from these images we can extract
characteristics that help us categorise whether a patient has glaucoma or not.

A logistic regression model is developed using the training set. The model learns the
associations between the extracted features and their corresponding glaucoma labels.

Once trained and optimised, the logistic regression model can be used to predict outcomes for
new, unseen fundus images. For each input image, the model produces a probability indicating
the likelihood that glaucoma is present.
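This probability output can be sketched with scikit-learn's LogisticRegression on synthetic feature vectors; the feature names and values below are illustrative assumptions, not data from the project:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Each "image" is summarised by 3 made-up features (e.g. cup-to-disc ratio,
# mean intensity, vessel density -- illustrative names only).
X_healthy = rng.normal(loc=[0.3, 0.5, 0.6], scale=0.05, size=(40, 3))
X_glaucoma = rng.normal(loc=[0.7, 0.4, 0.4], scale=0.05, size=(40, 3))
X = np.vstack([X_healthy, X_glaucoma])
y = np.array([0] * 40 + [1] * 40)

model = LogisticRegression().fit(X, y)
# predict_proba yields the likelihood of glaucoma for a new image.
new_features = np.array([[0.68, 0.41, 0.42]])
prob_glaucoma = model.predict_proba(new_features)[0, 1]
print(f"probability of glaucoma: {prob_glaucoma:.2f}")
```

Because the synthetic sample lies near the "glaucoma" cluster, the predicted probability comes out above 0.5.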

Figure 7. Logistic Regression
Machine Learning Methods for Predicting Glaucoma
Fundus images can be analysed and glaucoma risk predicted with the help of powerful
machine learning algorithms.
Glaucoma can cause irreparable vision loss if it is not identified and treated at an early stage.
The analysis of fundus images using machine learning algorithms offers a viable method for
early diagnosis and prediction of glaucoma. A number of well-known algorithms are used to
forecast the development of glaucoma, including Support Vector Machines (SVM), Random
Forest, Naive Bayes (Gaussian), Logistic Regression, Naive Bayes (Multinomial), and Decision
Tree.
Support Vector Machines (SVM)
Support Vector Machines (SVM) are a powerful and popular supervised learning
technique with applications in classification and regression. SVM can accurately classify
retinal fundus images as indicative of glaucoma or healthy when used for glaucoma prediction.
SVM has been widely applied in various domains, including image classification, text
classification, bioinformatics, and finance. Its versatility, robustness, and ability to handle high-
dimensional data make it a powerful tool in the field of machine learning.

Finding an ideal hyperplane that maximises the distance between two classes in the feature space
is the basic goal of SVM. The hyperplane divides the two classes by serving as a decision
boundary. SVM seeks to identify the most reliable and discriminative decision boundary by
maximising the margin. A dataset of labelled fundus images can be used to train SVM for the
classification of glaucoma. Each image has a class designation that designates whether it depicts
a healthy eye or glaucoma symptoms. By spotting glaucoma-specific patterns and features in
the photos, the SVM algorithm learns to differentiate between the two classes.
The effectiveness with which SVM can manage high-dimensional feature spaces is one of its
benefits. Retinal fundus images have a high-dimensional feature space because they often
contain a large number of features or pixels. SVM can handle this complexity and determine
the best hyperplane to divide the classes. Additionally, SVM has strong generalisation
performance, which enables it to correctly categorise test or unknown data. Due to SVM's focus
on margin maximisation, the model is less sensitive to minute changes in the input data.
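A minimal sketch of this behaviour, using a linear SVC on synthetic two-class feature vectors standing in for extracted fundus features (the data is invented for illustration):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Two well-separated synthetic classes stand in for healthy/glaucoma features.
X = np.vstack([rng.normal(0.3, 0.05, size=(30, 4)),
               rng.normal(0.7, 0.05, size=(30, 4))])
y = np.array([0] * 30 + [1] * 30)

clf = SVC(kernel="linear").fit(X, y)
# decision_function gives the signed distance to the separating hyperplane;
# the sign decides the class, the magnitude reflects the margin.
scores = clf.decision_function(X)
all_correct = bool(np.all((scores > 0) == (y == 1)))
print("support vectors per class:", clf.n_support_)
print("all training points on the correct side:", all_correct)
```

Only the points nearest the boundary become support vectors, which is what makes the fitted model robust to small perturbations elsewhere.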
Random Forest
Random Forest is a powerful algorithm commonly employed for classification problems such
as glaucoma prediction based on fundus images. To produce a final forecast, Random
Forest combines the predictions of multiple decision trees. A dataset of fundus images with
appropriate labelling is utilised for training Random Forest to predict glaucoma. Each image in
the dataset has a label that identifies whether it depicts a healthy eye or glaucoma symptoms.
After that, this dataset is used to train the Random Forest algorithm to discover the patterns and
traits that are characteristic of glaucoma. Many different decision trees are individually
constructed in a Random Forest. Each decision tree is trained on a random portion of the initial
training data using a random subset of features. This randomization lowers the
possibility of overfitting and contributes to the diversity of the trees. Each decision tree in the
Random Forest independently predicts the existence of glaucoma for a specific fundus image
during the prediction phase using the features it has learned. By combining all the individual
trees' forecasts, such as by taking a majority vote or averaging the probabilities, the final prediction
is obtained. This ensemble method combines the advantages of various decision trees, producing
a stronger and more precise forecast.
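The voting idea can be illustrated by inspecting the individual trees of a fitted scikit-learn RandomForestClassifier; the data below is synthetic, not the project's:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.3, 0.08, size=(50, 5)),
               rng.normal(0.7, 0.08, size=(50, 5))])
y = np.array([0] * 50 + [1] * 50)

forest = RandomForestClassifier(n_estimators=25, random_state=0).fit(X, y)
sample = np.full((1, 5), 0.7)  # resembles the synthetic "glaucoma" cluster
# Each fitted tree votes independently; the forest aggregates the votes.
tree_votes = [int(t.predict(sample)[0]) for t in forest.estimators_]
forest_pred = int(forest.predict(sample)[0])
print("votes for class 1:", sum(tree_votes), "of", len(tree_votes))
print("forest prediction:", forest_pred)
```

Since each tree sees a different bootstrap sample, their votes differ in general, but the aggregated prediction is more stable than any single tree's.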
Naive Bayes (Gaussian)
Naive Bayes (Gaussian) is a common probabilistic approach for classification problems such
as glaucoma prediction using fundus images. It assumes that the features have
a Gaussian (normal) distribution and are independent of one another. Relevant features are
chosen from the fundus images in order to employ Naive Bayes (Gaussian) for glaucoma
prediction. Various attributes taken from the images, such as pixel intensity, texture, or shape
aspects, could be included in these features. Next, a dataset of fundus images with labels
indicating the presence or absence of glaucoma is used to train the Naive Bayes (Gaussian)
model. Given the observed feature values, the Naive Bayes (Gaussian) method calculates the
conditional probability of glaucoma. This is accomplished by computing the chance that the
observed feature values belong to either class (glaucoma or non-glaucoma), presuming that each
feature has a Gaussian distribution. After computing the posterior probability of glaucoma given
the observed features using Bayes' theorem, the model chooses the class with the highest
likelihood as its final forecast. Naive Bayes has the capacity to handle huge datasets
effectively as one of its advantages. The algorithm is appropriate for processing massive
volumes of data since it is computationally efficient and needs only a tiny amount of memory.
Additionally, continuous data, like the numerical features retrieved from fundus images, lends
itself very well to Naive Bayes.
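The Gaussian assumption can be sketched with scikit-learn's GaussianNB on synthetic features; after fitting, the per-class means it has estimated are inspectable, and predict_proba returns the posterior described above:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.3, 0.05, size=(60, 2)),
               rng.normal(0.7, 0.05, size=(60, 2))])
y = np.array([0] * 60 + [1] * 60)

nb = GaussianNB().fit(X, y)
# theta_ holds the fitted per-class mean of each feature.
print("per-class feature means:\n", nb.theta_.round(2))
posterior = nb.predict_proba([[0.68, 0.71]])[0]
print("P(healthy), P(glaucoma) =", posterior.round(3))
```

The two posterior probabilities sum to one, and the class with the larger posterior is the final forecast.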
Naive Bayes (Multinomial)
Another variation of the Naive Bayes algorithm that is frequently employed for classification
problems, such as glaucoma prediction, is Naive Bayes (Multinomial). When working with
discrete feature data, such as histograms or frequency-based image representations, it is
extremely helpful. The pertinent discrete features are chosen from the data to be used with Naive
Bayes (Multinomial) for glaucoma prediction. These features might be frequency-based
representations, histograms of pixel intensities, or other discretized image properties in the
context of fundus images. Next, a labelled dataset is used to train the Naive Bayes (Multinomial)
model, where each data instance is linked to the presence or absence of glaucoma. In order to
calculate the class probabilities, the Naive Bayes (Multinomial) algorithm estimates the
likelihood of witnessing the features in each class using their frequencies. It assumes that the
characteristics have a multinomial distribution, i.e., that they are discrete counts or frequencies,
and that they follow this distribution. The method creates the final prediction based on these
probabilities and computes the posterior probability of glaucoma given the observed feature
values. Naive Bayes (Multinomial) has the benefit of being computationally efficient. Large

datasets with high-dimensional discrete features can be handled effectively by the technique.
As text data is frequently represented as discrete feature vectors, this also makes it a good fit
for natural language processing applications.
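The histogram-of-intensities representation can be sketched as follows; the 8x8 random arrays are stand-ins for real fundus images, and the bin count is an arbitrary choice:

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB

rng = np.random.default_rng(4)

def histogram_features(image, bins=16):
    # Discrete counts of pixel intensities -- the frequency-based
    # representation MultinomialNB expects.
    counts, _ = np.histogram(image, bins=bins, range=(0, 256))
    return counts

dark = [histogram_features(rng.integers(0, 100, size=(8, 8))) for _ in range(30)]
bright = [histogram_features(rng.integers(150, 256, size=(8, 8))) for _ in range(30)]
X = np.array(dark + bright)
y = np.array([0] * 30 + [1] * 30)

nb = MultinomialNB().fit(X, y)
pred = int(nb.predict([histogram_features(rng.integers(150, 256, size=(8, 8)))])[0])
print("predicted class for a bright test image:", pred)
```

Because the two synthetic classes occupy disjoint histogram bins, the classifier separates them easily; real fundus histograms overlap far more.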
Decision Tree
Glaucoma can be predicted using the machine learning method Decision Tree. It produces a
tree-like model where each leaf node denotes a class (glaucoma or healthy), each internal node
denotes a feature, and each branch denotes a choice based on that feature. The system is trained
on a dataset of fundus images with labels indicating the presence or absence of glaucoma in
order to predict glaucoma using a decision tree. The Decision Tree method looks for the best
features and their thresholds during training so that the data can be split in the most efficient
way according to the class labels. For each split, this procedure is repeated recursively to
produce a tree structure that symbolises the decision-making process. Once trained, the Decision
Tree can be used to predict outcomes for brand-new fundus images. As the image moves through
the tree, decisions are made depending on the associated feature at each internal node. The leaf
node that reflects the predicted class (glaucoma or healthy) for the input image is reached at the
end of the tree's route. Finding patterns and links in the data is one advantage of decision trees.
They are able to recognise crucial details and grasp intricate relationships, which enables them
to make precise forecasts. Decision trees are appropriate for a variety of image feature types
since they can handle both categorical and numerical data.
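The rule structure described above can be made visible with scikit-learn's export_text; the single cup-to-disc-ratio feature and its value ranges below are hypothetical simplifications:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(5)
# One made-up feature per "image": a cup-to-disc ratio (assumed ranges).
cdr = np.concatenate([rng.uniform(0.2, 0.45, 40),   # healthy
                      rng.uniform(0.55, 0.9, 40)])  # glaucoma
X, y = cdr.reshape(-1, 1), np.array([0] * 40 + [1] * 40)

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
# The fitted tree is a readable set of feature-threshold rules.
print(export_text(tree, feature_names=["cup_to_disc_ratio"]))
pred_high = int(tree.predict([[0.8]])[0])
print("prediction for ratio 0.8:", pred_high)
```

The printed rules show exactly which threshold sends an input image to the "glaucoma" leaf, which is the interpretability advantage mentioned above.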
Using it in a Mini Project
Consider a mini-project that includes the following stages to show how these machine learning
algorithms can be used to forecast the development of glaucoma.
Data Collection: Obtain a dataset of fundus photographs and the glaucoma labels they represent.
Image preprocessing: Resize, crop, and normalise the fundus images as part of the
preprocessing procedure.
Feature Extraction: Using methods like image segmentation, texture analysis, or optic disc
recognition, extract pertinent characteristics from the preprocessed images.
Data Split: To assess the effectiveness of the algorithms, divide the dataset into training and
testing sets.
Model Training: Using the training set, train each of the algorithms (SVM, Random Forest,
Naive Bayes, Logistic Regression, and Decision Tree).
Model Evaluation: Evaluate the performance of each algorithm using relevant metrics, such as
precision, specificity, sensitivity, and the area under the receiver operating characteristic curve
(AUC-ROC).
Model comparison: Based on the outcomes of the evaluation, compare the effectiveness of the
algorithms and choose the one that is most appropriate for glaucoma prediction.
Prediction: Utilise the selected model to forecast the likelihood of glaucoma for fresh,
previously unseen fundus images.

5. DESIGN REQUIREMENT ENGINEERING

UML DIAGRAMS
5.1 Use case Diagram:
A use case diagram is a particular kind of behavioural diagram that is described by and produced
from a use-case study in the Unified Modelling Language (UML). Its purpose is to provide a
graphical depiction of a system's functionality in terms of actors, goals (represented as use
cases), and any relationships among those cases. A use case diagram's main objective is to show
which actor performs which system operations. The roles of the system's actors can also be
depicted.

Figure 8. Use Case Diagram

5.2 Activity diagram :
The "Start Activity" node initiates the process.
The next step involves using a fundus camera or another appropriate tool to capture a picture
of the fundus. Once taken, the image must be preprocessed to improve its quality and remove
noise or artefacts. Then, pertinent features are extracted from the preprocessed fundus image,
including the optic disc, cup-to-disc ratio, blood vessels, etc.
A machine learning model is trained using extracted features. Using labelled data, where the
presence or absence of glaucoma is known, the model is trained.
Once trained, the model can be used to forecast the likelihood of glaucoma in a particular
fundus image. The final result, the projected glaucoma likelihood, is then displayed, and the
activity ends.

Figure 9. Activity Diagram

5.3 Class Diagram:
A class diagram is a type of static diagram: it represents a static view of an application. A class
diagram may be used to visualise, describe, and document many different system components,
as well as to produce executable source code for a software project.
A class diagram shows a class's characteristics and actions as well as the restrictions imposed
on the system. Class diagrams are often used in the design of object-oriented systems since they
are the only UML diagrams that can be translated directly to object-oriented languages. A
collection of classes, interfaces, affiliations, collaborations, and limitations are shown in a class
diagram. An alternative term for it is a structural diagram.
It is essential for producing the deployment and component diagrams. It assists in creating
executable code that can be used for both forward and backward engineering on any system.
Classes like Importing libraries, Data exploration, Image processing, Features extraction, Data
splitting, etc. are all represented in our class diagram.

Figure 10. Class Diagram

5.4 Architecture
To ensure their system or application fits the demands of their users, system designers and
developers can use a basic architectural diagram (UML) to show the high-level structure of their
system or application. The term can also describe the patterns that show up in the design. It
illustrates a technique used to establish constraints, connections, and
boundaries between components while abstracting the overall architecture of a software system.
It offers a thorough overview of the physical deployment and evolution strategy for the software
system. The developers and designers can benefit much from this diagram.

Figure 11. System Architecture

6. IMPLEMENTATION

6.1 Modules
Modules used:
Pandas
The name "Pandas" originates from "panel data", an economics term for multidimensional
structured data sets. It is a library for manipulating and analyzing data. The
library includes data structures and procedures that can be used to manage numerical tables and
time series. Another name for it is "Python Data Analysis Library".
Numpy
A high-performance multidimensional array object and related capabilities are included in the
Python package NumPy, which specialises in array processing. With the help of this important
Python package, big, multi-dimensional arrays and matrices may be handled easily.
Matplotlib
A Python toolkit for 2D charting called Matplotlib makes it possible to create high-quality
graphics that are appropriate for a range of interactive and print formats. Users are able to easily
generate a wide variety of visualisations with Matplotlib, including plots, histograms, power
spectra, bar charts, error charts, scatterplots, and a variety of other graphical representations.
SCIKIT-LEARN
Without a doubt, Scikit-learn is the most useful machine learning library for Python. This
complete toolkit, which includes classification, regression, clustering, and dimensionality
reduction, offers a wide range of useful features for statistical modelling and machine learning.
For classification, regression, and clustering applications, it integrates well-known methods
including support vector machines, random forests, gradient boosting, k-means, and DBSCAN.
Additionally, Scikit-learn is created to work in perfect harmony with other crucial Python
libraries like SciPy and NumPy, which offer powerful numerical and scientific capabilities.
Seaborn
A Python data visualisation framework called Seaborn is built on the matplotlib platform and
has a good working relationship with pandas data structures. Utilising visualization's potent
powers, Seaborn greatly improves data exploration and comprehension.

OS
Users can perform a wide range of operations pertaining to file and directory management,
process management, and several other system-related tasks by using the 'os' library in the
Python programming language.
CV2
In Python, the cv2 library is the binding for OpenCV (Open-Source Computer Vision Library).
It provides Python access to the OpenCV functions and modules, allowing you to utilize the
features and functionalities of OpenCV in your Python programs.
6.2 DATASET
● The glaucoma dataset consists of two classes, with 455 images in total.
● Class 0 is for those who are glaucoma-free (45 images); class 1 is for those who have
glaucoma.

Figure 12. Sample data from dataset

6.3 Sample code
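A minimal, hedged sketch of the preprocessing stage described in this report (load, resize, normalise, flatten). A random array stands in for an image that cv2.imread would load in the actual project, and the simple index-sampling resize below is a stand-in for cv2.resize:

```python
import numpy as np

def preprocess(image, size=(64, 64)):
    """Resize (by simple index sampling), normalise to [0, 1], and flatten."""
    rows = np.linspace(0, image.shape[0] - 1, size[0]).astype(int)
    cols = np.linspace(0, image.shape[1] - 1, size[1]).astype(int)
    resized = image[np.ix_(rows, cols)]
    return (resized / 255.0).ravel()

# A random 240x320 8-bit array standing in for a loaded fundus image.
fake_fundus = np.random.default_rng(7).integers(0, 256, size=(240, 320))
features = preprocess(fake_fundus)
print("feature vector length:", features.size)  # 64 * 64 = 4096
print("values in [0, 1]:", bool(features.min() >= 0 and features.max() <= 1))
```

Each preprocessed image becomes one fixed-length row of the feature matrix that the classifiers in Section 4.5 are trained on.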

7. SOFTWARE TESTING

Figure 13. Software Testing

Glaucoma Prediction Machine Learning Mini Project Software Testing


In this small project, we will investigate the idea of software testing in the context of machine
learning-based glaucoma prediction. If undiagnosed and untreated, glaucoma is a serious eye
ailment that can result in blindness. A promising method for predicting the likelihood of
developing glaucoma is machine learning, which is based on a variety of data sources, including
patient information and clinical assessments. To assure the accuracy and dependability of the
glaucoma prediction program, testing is crucial.
Machine Learning for Understanding Glaucoma Prediction
Using machine learning, one may forecast the risk that a person would acquire glaucoma by
creating algorithms that can analyze and learn from vast datasets. To find trends and generate
predictions, these algorithms take into account a variety of variables, including patient
demographics, medical history, and test findings.
The Importance of Software Testing
Software testing is a crucial phase in the creation of glaucoma prediction models. It ensures
the program works as planned, generates reliable forecasts, and can be trusted by medical
practitioners. In this situation, software testing is crucial for the following main reasons:
1. Reliability and accuracy
The accuracy of the glaucoma prediction software's predictions can be confirmed through
testing. We can determine the software's dependability and make sure it gives reliable
information to support clinical decision-making by comparing the output with known outcomes
or expert opinions.
2. Detection and Correction of Errors
Testing enables us to locate and fix any faults or errors in the programme. These mistakes may
result from problems with feature extraction, model training, or data preprocessing. We can
enhance the software's performance and reduce the possibility of inaccurate predictions by
identifying and fixing these problems.
3. Performance Assessment
We can assess the effectiveness of the glaucoma prediction models through software testing.
We can evaluate how well the models generalise to new data by utilising test datasets that
weren't used during the model training phase. This assessment enables us to comprehend the
software's advantages and disadvantages and pinpoint potential areas for development.
4. Resilience and Generalizability
Software for predicting glaucoma must be reliable and generalizable across various patient
demographics and healthcare environments. Testing allows us to determine how the programme
operates under varied conditions, ensuring that it can handle various data types and provide
accurate predictions for use in practical applications.
5. Considerations for Ethics
We can address moral issues and potential biases in glaucoma prediction by using software
testing. It aids in the detection of any biases that can result from uneven datasets or poor feature
selection. We can give equal healthcare options for all people, regardless of their demographic
traits, by assuring fairness in the software's forecasts.
Glaucoma Prediction Software Testing Methods
There are numerous methods that can be used to test glaucoma prediction software. Here are a
few typical examples:

7.1 Unit testing
To ensure that the software's discrete parts and functions operate as intended, unit testing is used.
This can involve putting particular algorithms or data preprocessing methods to the test in the
context of glaucoma prediction. Unit testing enhances the overall quality of the software by
assisting in the early detection and correction of faults.
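A hypothetical unit test for one such discrete part, a pixel-normalisation helper; both the function and the tests are illustrative, not the project's actual code:

```python
import numpy as np

def normalise(image):
    """Scale 8-bit pixel values into [0, 1] -- the unit under test."""
    return image.astype(float) / 255.0

def test_normalise_range():
    img = np.array([[0, 128, 255]], dtype=np.uint8)
    out = normalise(img)
    assert out.min() == 0.0 and out.max() == 1.0

def test_normalise_preserves_shape():
    assert normalise(np.zeros((10, 20), dtype=np.uint8)).shape == (10, 20)

# pytest would discover test_* functions automatically; run them directly here.
test_normalise_range()
test_normalise_preserves_shape()
print("all unit tests passed")
```

Small, isolated checks like these catch preprocessing faults long before they can corrupt model training.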
7.2 Integration testing
The main goal of integration testing is to examine how various software modules or components
interact with one another. This can entail examining the interrelationships between the modules
for feature extraction, data preprocessing, and model training in the context of glaucoma
prediction. Integration testing makes sure that all components work together without any issues
and generate reliable forecasts.
System testing is typically performed to discover mistakes caused by unanticipated interactions
between subsystems and system components. After the source code has been created, the
software must be tested to find and fix any potential mistakes before being delivered to clients.
To detect errors, a series of test cases must be designed with the intention of finding every
potential error; numerous software tools can be used for this task. These methods provide
methodical guidance for developing tests that exercise software components' internal logic, as
well as the input and output domains of a programme, to detect flaws in its behaviour,
performance, and function. To test the software, we employ two methods:
White Box testing: a test case design technique that exercises the internal programme logic.
Black Box testing: a test case design method used to exercise the software requirements. Using
both methods together, the largest number of faults can be found with the least amount of time
and effort.
7.3 Validation Testing
To verify the software's correctness, validation testing involves comparing the software's
predictions against known results or professional judgements. This could involve contrasting
the software's predictions for glaucoma with the actual diagnosis made by medical experts.
Validation testing aids in evaluating the software's effectiveness and propensity to make
accurate predictions.

7.4 Performance Evaluation
Performance testing measures how well the software performs under various circumstances,
such as those involving varied data amounts or processing resources. This may entail evaluating
how the glaucoma prediction software handles huge datasets or how it operates in real-time
prediction scenarios. Performance testing makes sure the programme is capable of managing
various situations and making predictions on time.
7.5 User Acceptance Testing
Testing the software from the viewpoint of end users, such as healthcare professionals, is known
as user acceptance testing. The usability, user-friendliness, and degree to which the programme
satisfies the needs of its intended users are all evaluated through this kind of testing. The
software is put through user acceptance testing to make sure it is usable and can be successfully
integrated into clinical workflows.

8. RESULTS

We applied various supervised machine learning models to classify glaucoma using retinal
fundus images. The performance of each model was evaluated using the test dataset.

Model Evaluation:
We evaluated the following machine learning models:
1. Logistic Regression
2. Support Vector Machine (SVM)
3. Random Forest
4. Naive Bayes (Gaussian and Multinomial)
5. Decision Tree

Figure 14. Final Result

Figure 15. Glaucoma

Figure 16. No Glaucoma

9. CONCLUSION AND FUTURE ENHANCEMENTS

9.1 Conclusion
The Logistic Regression model achieved an accuracy of approximately 95% on the test data. It
performed better than the other models (SVM, Random Forest, Naive Bayes, and Decision
Tree) in terms of accuracy.
The confusion matrix revealed that the model had a higher number of false negatives (12)
compared to false positives (5), indicating that the model had more difficulty in correctly
identifying images with glaucoma (class 1) than images without glaucoma (class 0).
The precision of the model, which measures the ability to correctly classify positive instances
(glaucoma) among all instances predicted as positive, was calculated as 0.84. The recall, which
measures the ability to correctly classify positive instances among all actual positive instances,
was calculated as 0.94. The F1 score, which combines precision and recall, was 0.89. These
metrics indicate a relatively good performance of the model in identifying glaucoma cases.
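These reported numbers are internally consistent: F1 is the harmonic mean of precision and recall, which can be checked directly:

```python
# Precision and recall as reported above; F1 is their harmonic mean.
precision, recall = 0.84, 0.94
f1 = 2 * precision * recall / (precision + recall)
print(f"F1 = {f1:.2f}")  # 0.89, matching the reported value
```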
Different classifiers were tested, including SVM, Random Forest, Logistic Regression, Naive
Bayes (Gaussian and Multinomial), and Decision Tree. Among these models, Logistic
Regression achieved the highest accuracy, while Naive Bayes (Gaussian and Multinomial)
achieved the lowest accuracy.
Additionally, ensemble methods such as AdaBoost with Decision Tree and AdaBoost with
Logistic Regression, as well as Gradient Boosting, were implemented and achieved accuracy
scores of approximately 79% and 80%, respectively. These ensemble methods can be considered
as alternative approaches for glaucoma classification.
In conclusion, the supervised machine learning framework based on Logistic Regression
achieved reasonably good results in classifying glaucoma using retinal fundus images. However,
further refinement and exploration of advanced techniques could lead to improved accuracy and
performance in glaucoma diagnosis.
9.2 Future enhancements

To further enhance this project, the following areas can be considered for future development:
1. Incorporation of Machine Learning: Machine learning techniques can be applied to
analyze large datasets and identify patterns that contribute to glaucoma development.
Training predictive models on diverse patient data can improve the accuracy and
specificity of the predictions.
2. Integration with Imaging Technologies: Integrating glaucoma prediction software with
advanced imaging technologies such as optical coherence tomography (OCT) or fundus
photography can enhance the prediction capabilities. Extracting relevant features from
these images and combining them with patient data can provide more comprehensive
assessments.

10. BIBLIOGRAPHY

[ 1 ] “Accelerating Retinal Fundus Image Classification Using Artificial Neural Networks
(ANNs) and Reconfigurable Hardware (FPGA)”, Authors: Afran Ghani, Chan See, Vaisakh
Sudhakaran, Jahanzeb Ahmad, Abd-Alhameed (2020).

[ 2 ] “Detection of Glaucoma on Fundus Images Using Deep Learning on a New Image Set
Obtained with a Smartphone and Handheld Ophthalmoscope”, Authors: Clerimar Paulo,
Manuel Torres, Christophe Pinto de Almeida and Luciano (2020).

[ 3 ] “Literature Review on Artificial Intelligence Methods for Glaucoma Screening,
Segmentation, and Classification”, Authors: Alexandre Neto, Camara, Ivan Pires, António
Cunha, Eftim Zdravevski (2020).

[ 4 ] “Deep Learning for Optic Disc Segmentation and Glaucoma Diagnosis on Retinal Images”,
Authors: Kazuhiko Hamamoto, Noppadol Maneerat, Khin Yadanar Win (2020).

[ 5 ] “Automated Optic Disc Segmentation Using Basis Splines-Based Active Contour”,
Authors: J. H. Gagan, Harshit S. Shirsat, Yogish S. Kamath, Neetha I. R. Kuzhuppilly,
J. R. Harish Kumar.

[ 6 ] Banister, K., Yang, Y., Zhang, S., Osborn, B., Gill, D., & Tseng, V. L. (2023). Machine
learning models for glaucoma detection: A systematic review and meta-analysis.
Ophthalmology, 130(2), 302-311.

[ 7 ] Convex Representations Using Deep Archetypal Analysis for Predicting Glaucoma.
https://ieeexplore.ieee.org/document/9102996

[ 8 ] Lim, G., Kim, J., Lee, S., & Lee, B. (2020). Deep learning-based glaucoma detection using
fundus photographs: A review. Computers in Biology and Medicine, 122, 103824

[ 9 ] Li, S., Yin, X., Zheng, Y., Wang, H., & Zeng, Y. (2020). Predicting glaucoma development
using logistic regression and relevant factors. Frontiers in genetics, 11, 1066.

[ 10 ] Reddy, S., Devalla, S. K., & Janarthanan, M. (2020). Glaucoma prediction using logistic
regression and machine learning techniques. 2020 4th International Conference on Trends in
Electronics and Informatics (ICOEI), 213-217.

[ 11 ] Meng, Y., Li, W., & Wang, H. (2020). Glaucoma diagnosis using logistic regression
model and artificial neural network based on fundus images. Journal of Medical Imaging and
Health Informatics, 10(9), 2118-2123.

[ 12 ] Li, Z., & Li, C. (2021). Glaucoma prediction using logistic regression and random forest.
2021 2nd International Conference on Advanced Information Science and System (AISS), 30-
34.

[ 13 ] Lee, J., Jung, Y., Kim, J., Lee, K., & Park, K. H. (2021). Machine learning-based
prediction model for glaucoma using retinal nerve fiber layer thickness and clinical variables.
Translational Vision Science & Technology, 10(1), 3.

[ 14 ] Kim, J., Yoo, T. K., Lee, J. Y., & Kim, J. Y. (2021). Development of a machine learning-
based prediction model for glaucoma using clinical and imaging variables. Journal of Glaucoma,
30(2), 142-148.

[ 15 ] Chen, J., Zhu, W., & You, Q. (2021). Glaucoma diagnosis based on optic disc and cup
segmentation using logistic regression and random forest. Journal of Medical Imaging and
Health Informatics, 11(8), 1688-1692.

[ 16 ] An, M. R., Son, H. G., & Jung, U. C. (2022). Prediction of glaucoma using optic disc
features and machine learning models. PLoS ONE, 17(1), e0262100.

[ 17 ] Meng, Y., Wang, H., & Li, W. (2022). Glaucoma diagnosis using logistic regression and
extreme learning machine based on fundus images. Journal of Medical Imaging and Health
Informatics, 12(6), 1354-1359.

[ 18 ] Park, J., Kim, J. S., Jeon, H. L., & Lee, S. H. (2022). Deep learning-based glaucoma
prediction model using fundus photographs. Journal of Medical Imaging and Health Informatics,
12(8), 1710-1714.
