Article
DSCC_Net: Multi-Classification Deep Learning Models for
Diagnosing of Skin Cancer Using Dermoscopic Images
Maryam Tahir 1,† , Ahmad Naeem 2,† , Hassaan Malik 1,2 , Jawad Tanveer 3 , Rizwan Ali Naqvi 4, *,†
and Seung-Won Lee 5, *
1 Department of Computer Science, National College of Business Administration & Economics Lahore,
Multan Sub Campus, Multan 60000, Pakistan
2 Department of Computer Science, University of Management and Technology, Lahore 54000, Pakistan
3 Department of Computer Science and Engineering, Sejong University, Seoul 05006, Republic of Korea
4 Department of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Republic of Korea
5 School of Medicine, Sungkyunkwan University, Suwon 16419, Republic of Korea
* Correspondence: rizwanali@sejong.ac.kr (R.A.N.); swleemd@g.skku.edu (S.-W.L.)
† These authors contributed equally to this work.
Simple Summary: This paper proposes a deep learning-based skin cancer classification network
(DSCC_Net) that is based on a convolutional neural network (CNN) and implemented on three
publicly available benchmark datasets (ISIC 2020, HAM10000, and DermIS). The proposed DSCC_Net
obtained a 99.43% AUC, along with a 94.17% accuracy, a recall of 93.76%, a precision of 94.28%,
and an F1-score of 93.93% in categorizing the four distinct types of skin cancer. The
accuracies of ResNet-152, Vgg-19, MobileNet, Vgg-16, EfficientNet-B0, and Inception-V3 are
89.68%, 92.51%, 91.46%, 89.12%, 89.46%, and 91.82%, respectively. The results showed that the
proposed DSCC_Net model performs better as compared to baseline models, thus offering significant
support to dermatologists and health experts to diagnose skin cancer.
Abstract: Skin cancer is one of the most lethal kinds of human illness. In the present state of the health care system, skin cancer identification is a time-consuming procedure and, if it is not diagnosed initially, it can be threatening to human life. To attain a high prospect of complete recovery, early detection of skin cancer is crucial. In the last several years, the application of deep learning (DL) algorithms for the detection of skin cancer has grown in popularity. Based on a DL model, this work intended to build a multi-classification technique for diagnosing skin cancers such as melanoma (MEL), basal cell carcinoma (BCC), squamous cell carcinoma (SCC), and melanocytic nevi (MN). In this paper, we have proposed a novel model, a deep learning-based skin cancer classification network (DSCC_Net) that is based on a convolutional neural network (CNN), and evaluated it on three publicly available benchmark datasets (i.e., ISIC 2020, HAM10000, and DermIS). For skin cancer diagnosis, the classification performance of the proposed DSCC_Net model is compared with six baseline deep networks, including ResNet-152, Vgg-16, Vgg-19, Inception-V3, EfficientNet-B0, and MobileNet. In addition, we used SMOTE Tomek to handle the minority class issue that exists in this dataset. The proposed DSCC_Net obtained a 99.43% AUC, along with 94.17% accuracy, a recall of 93.76%, a precision of 94.28%, and an F1-score of 93.93% in categorizing the four distinct types of skin cancer. The rates of accuracy for ResNet-152, Vgg-19, MobileNet, Vgg-16, EfficientNet-B0, and Inception-V3 are 89.32%, 91.68%, 92.51%, 91.12%, 89.46%, and 91.82%, respectively. The results showed that our proposed DSCC_Net model performs better as compared to baseline models, thus offering significant support to dermatologists and health experts to diagnose skin cancer.

Keywords: skin cancer; melanoma; deep learning; transfer learning; CNN; dermoscopic images

Citation: Tahir, M.; Naeem, A.; Malik, H.; Tanveer, J.; Naqvi, R.A.; Lee, S.-W. DSCC_Net: Multi-Classification Deep Learning Models for Diagnosing of Skin Cancer Using Dermoscopic Images. Cancers 2023, 15, 2179. https://doi.org/10.3390/cancers15072179

Academic Editors: Mario Mascalchi and Stefano Diciotti

Received: 7 March 2023; Revised: 4 April 2023; Accepted: 4 April 2023; Published: 6 April 2023

Copyright: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
1. Introduction
The skin is the largest organ in the body, protecting it from infection, heat, and UV light; cancer, however, poses a serious threat to human life. The human body may harbor various kinds of cancer, and skin cancer is one of the deadliest and most rapidly growing tumors. One
in every three cancers diagnosed is skin cancer and, according to Skin Cancer Foundation
Statistics, one in every five Americans will develop skin cancer in their lifetime [1–4]. In the
USA, there are more than 3.5 million new cases that appear every year, and that number of
cases is continuously increasing [3].
Many skin cancers begin in the upper layer of the skin. Skin cancer occurs when
skin cells divide and expand in an uncontrolled way. New skin cells usually develop
when old ones die or are damaged. When this process does not work correctly, cells grow quickly in an unordered way, forming a mass of tissue known as a tumor [5,6]. Skin cancer is caused by several factors, such as drinking alcohol,
smoking, allergies, viruses, changing environments, and ultraviolet (UV) light exposure.
Furthermore, skin cancer can also appear due to abnormal swellings on the body.
There are four different types of skin cancer: melanoma (MEL), melanocytic nevi (MN),
basal cell carcinoma (BCC), and squamous cell carcinoma (SCC). The most dangerous
category of cancer is MEL, because it spreads quickly to other organs. It arises from skin cells called melanocytes. On the skin, melanocytes create dark pigments, and
these are mostly black and brown, while some are red, purple, and pink [7]. A melanoma
cell frequently spreads to another organ, such as the brain, liver, or lungs [8,9]. Due to
melanoma cancer, 10,000 deaths occur annually in the United States [10]. If it is identified
early, then melanoma can be treated as soon as possible. It is less common than other kinds of skin cancer. Melanocytic nevi (MN) appear as pigmented moles that vary across a range of skin tones. They mostly occur throughout childhood and the early years of adult life, because the number of moles on one’s body increases until 30 to 40 years of age. Basal cell carcinoma (BCC) is the most common type of skin cancer. These are
round cells that are created in the lower portion of the epidermis and normally grow slowly.
Almost all BCCs develop on areas of the body that have a lot of sun exposure,
including the face, neck, head, ears, back, and shoulders. Rarely, this type of skin cancer
migrates to other body areas, and forms due to the abnormal and uncontrolled growth of
cells. It may occur as a small, flesh-colored, or white tumor that may bleed. Squamous cell
carcinoma (SCC) comprises flat cells found in the upper portion of the epidermis. These
cancer cells can arise when cells grow uncontrollably. It may occur as a hard red mark
or open sore that may bleed easily. Although this type of skin cancer is not normally
dangerous, SCC can be found in numerous areas because it is usually generated by sun
exposure. Additionally, it may also develop on skin that has already been burned or
harmed by chemicals.
Skin cancer detection is a challenging process, and there are many different ways in
which doctors can find skin cancer. An experienced dermatologist uses a sequence of steps
to make a diagnosis, beginning with the naked eye detection of abnormal tumors, followed
by dermoscopy, which uses a magnifying lens to conduct an in-depth analysis of lesion pat-
terns, and the final step is biopsy [11,12]. Before the development of dermoscopic pictures,
most skilled dermatologists had a rate of success of only 60 percent in diagnosing skin can-
cer, but dermoscopic images raised success rates to between 75 percent and 84 percent [13].
Additionally, correct identification is unique and largely dependent on the skills of the
clinician [14]. The manual diagnosis of skin disorders is extremely difficult and stressful
for the patient [15]. Computer-aided detection systems support health professionals in evaluating data garnered from the dermoscopy method in situations where there is a shortage of professional availability or diagnostic expertise [16,17].
Skin cancer is a huge problem that needs to be investigated as soon as possible. The
majority of people do not visit their dermatologist on a regular basis, which causes a fatally
delayed diagnosis. The diagnosis is a manual process that takes a lot of time and money.
However, diagnosis improved due to machine learning, and this can be useful in various
ways. Skin cancer classification has been worked out using machine learning techniques,
such as the support vector machine (SVM) [18], the Naïve Bayes (NB) classifier [19], and
decision trees (DT) [20]. Convolutional neural networks (CNN) have gained popularity in
recent years due to their ability to perform automatic feature extraction [21–24], as well as
their broad use in research [25–28]. They are used to detect cancerous cells more rapidly
and effectively.
The mortality rates are rising to alarming levels, yet if patients are detected and treated
promptly, their chances of surviving are better than 95% [29–34]. Thus, this motivates us to
develop a model for the early diagnosis of skin cancer to save human lives. In this paper,
we present a novel multi-classification model, called the deep learning-based skin cancer
classification network (DSCC_Net), based on the CNN, that identifies the four types of
skin cancer, MEL, MN, BCC, and SCC, from dermoscopic images. Most of the research
studies [29–33] have indicated great performance in binary classification, i.e., differentiating
between benign and malignant skin cancer. However, no prior work has been found that uses DL models for the classification of the skin cancers MEL, BCC, MN, and SCC.
Additionally, DSCC_Net was also compared with six baseline classifiers: Vgg-19, Vgg-16,
ResNet-152, EfficientNet-B0, Inception-V3, and MobileNet. The major contributions of this
study are presented below:
• The novel proposed DSCC_Net model is designed to identify four different types of
skin cancer. The proposed model has the capability of extracting dominant features
from dermoscopy images that can assist in the accurate identification of the disease.
• In this study, we reduce the complexity of the model by decreasing the number of
trainable parameters to obtain a significant classifier.
• The CNN model’s accuracy is compromised as a result of the problem of class imbalance in medical datasets. We overcome this issue by using an up-sampling technique, SMOTE Tomek, to generate synthetic samples for each class and gain enhanced accuracy.
• The Grad-CAM heat-map technique is utilized to illustrate the visible features of skin
cancer disease classification approaches.
• The proposed model achieved superior results, as compared to six baseline classifiers,
Vgg-19, ResNet-152, Vgg-16, MobileNet, Inception-V3, and EfficientNet-B0, in terms of
many evaluation metrics, i.e., accuracy, area under the curve (AUC), precision, recall,
loss, and F1 score.
• Additionally, the proposed model also produced significant results as compared to the
recent state-of-the-art classifiers.
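The up-sampling step listed above relies on SMOTE Tomek, which combines SMOTE's synthetic interpolation with Tomek-link cleaning. As a minimal numpy sketch of the SMOTE half of that idea (actual experiments would typically use a library implementation such as imbalanced-learn's `SMOTETomek`; the function name and toy data below are illustrative, not from this study):

```python
import numpy as np

def smote_like_oversample(X_min, n_new, k=3, seed=0):
    """Generate n_new synthetic minority samples by interpolating between
    a random minority sample and one of its k nearest minority neighbors
    (the core idea of SMOTE, in simplified form)."""
    rng = np.random.default_rng(seed)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        # distances from sample i to all other minority samples
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        d[i] = np.inf                      # exclude the sample itself
        neighbors = np.argsort(d)[:k]      # indices of k nearest neighbors
        j = rng.choice(neighbors)
        gap = rng.random()                 # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.vstack(synthetic)

# Toy minority class: 5 flattened "images" with 4 features each
X_min = np.arange(20, dtype=float).reshape(5, 4)
X_new = smote_like_oversample(X_min, n_new=10)
print(X_new.shape)  # (10, 4)
```

Because each synthetic point lies on a segment between two real minority points, every generated feature stays inside the range of the original minority class.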
This study is divided into the following sections: Section 2 presents the literature review.
Materials and methods are discussed in Section 3. The experimental results and discussion
are presented in Section 4. This study is concluded in Section 5.
2. Literature Review
Extensive research has been conducted on the diagnosis of skin cancer to better assist
medical professionals in the process of detecting the disease at an earlier stage. Recent
research, on the other hand, has been focused on developing different artificial intelligence
algorithms to automate the diagnosis of several types of skin cancer. Table 1 presents the
summary of recent literature on skin cancer diagnosis using DL models.
Table 1. Summary of the existing research studies for the diagnosis of skin cancer, using different
machine learning and DL models.
Kaur et al. [29] suggested an automatic melanoma classifier that was
based on a deep CNN. The main goal was to suggest a lightweight and less-complicated
deep CNN than other techniques, in order to efficiently identify melanoma skin tumors.
The dermoscopic pictures for this study were obtained from several ISIC datasets, namely ISIC 2016, ISIC 2017, and ISIC 2020. On the ISIC 2016, 2017, and 2020 datasets, the suggested deep CNN classifier acquired accuracy rates of 81.41%, 88.23%, and 90.42%, respectively.
Alwakid et al. [38] employed the CNN model and modified ResNet-50, which was
applied to a HAM10000 dataset. This analysis used an uneven sample of skin cancer.
Initially, the image’s quality was improved using ESRGAN, then the next step taken to
tackle the problem of class imbalance was the use of augmenting data. They achieved
the result by using the CNN and ResNet-50 models, which were 86% and 85.3% accurate,
respectively. Aljohani et al. [39] used CNN to perform binary classification for the detection
of melanoma skin tumors. They used the ISIC 2019 dataset to test various CNN architectures
for this purpose. The results of the experiment showed that GoogleNet achieved the
maximum level of performance on both the training and testing data, in which they
obtained 74.91% and 76.08% accuracies. Rashid et al. [30] used MobileNet-V2 to present a
deep transfer learning network for the classification of melanoma. The MobileNet-V2 was a
deep CNN that distinguished between malignant and benign skin lesions. The performance
of the suggested DL model had been analyzed using the dataset of ISIC 2020. To solve the
class imbalance problem, different data augmentation strategies were used. Ali et al. [40] applied EfficientNets B0–B7 models to the HAM10000 dataset of dermatoscopic images. The dataset contained 10,015 images associated with seven different types of skin lesion: actinic keratosis (AKIEC), dermatofibroma (DF), melanocytic nevus (NV), BCC, MEL, benign keratosis (BKL), and vascular skin lesions (VASC). Among the eight models, EfficientNet-B4 achieved the greatest Top-1 and Top-2 accuracies. In this experiment, the EfficientNet-B4 model achieved an 87% F1 score and 87.91% Top-1 accuracy.
Shahin-Ali et al. [31] applied a deep CNN model to the HAM10000 dataset. This dataset contained 6705 benign images, 1113 malignant images, and 2197 unknown images of lesions. The proposed model attained the highest training and testing accuracies, at 93.16% and 91.93%, respectively. Furthermore, they balanced the dataset for both classes,
which increased the accuracy of categorization. On the same dataset, they also trained
several transfer learning models, but the results were not better than their proposed model.
Le et al. [44] introduced a transfer learning model that comprised ResNet-50 without the
use of a preprocessing stage or manual selection of features. All layers of the pre-trained
ResNet-50 were used for the training in Google Colab. Global average pooling and dropout
layers were employed to reduce overfitting. The images of the dataset were divided into
seven different categories and the proposed model attained 93% accuracy. Bajwa et al. [41]
created an ensemble model through the use of ResNet-152, SE-ResNeXt-101, DenseNet-161,
and NASNet, to classify seven types of skin cancer with 93% accuracy. The ensemble
was a technique of ML that merges the results of various distinctive learners to improve
classification performance. Nugroho et al. [42] used the HAM10000 dataset to create a
custom CNN for skin cancer identification. They used a scaled image with a resolution of
90 × 120 pixels. They achieved an 80% accuracy for training and 78% accuracy for testing.
Bassi et al. [45] used a DL technique that included transfer learning and fine-tuning.
They resized the dataset images with the resolution of 224 × 224 and used a fine-tuned
Vgg-16 model. They attained an accuracy of 82.8%. Moldovan et al. [43] used a technique
that was based on DL and transfer learning, in which they applied the HAM10000 dataset.
The classification model was created in Python, utilizing the PyTorch library and a two-step
process for classifying images of skin cancer. The first step’s prediction model was 85.0%
accurate, and the second step’s prediction model was 75.0% accurate. Using dermoscopic
images, Çevik et al. [46] employed the VGGNET model, which contained a powerful CNN, to identify seven different kinds of disease. Images that were 600 × 450 pixels in size were analyzed and resized to 400 × 300 pixels. The Sklearn, Tensorflow, and Keras machine learning packages were all used in this Python-coded application. They obtained an accuracy of 85.62%. Hasan et al. [47] developed a CNN-based detecting system that used feature extraction techniques to extract features from dermoscopic pictures. During the testing phase, they obtained a detection accuracy of 89.5%. However, the detection accuracy was insufficient and needed to be improved. Furthermore, there was overfitting between the testing and training stages, which was a flaw in that study. Saba
et al. [31] suggested a deep CNN that used three phases to detect skin lesions: first, the
color modification was used to improve contrast; second, a CNN approach was applied to
extract the borders of the lesion; third, transfer learning was applied to remove the deep
features. While the strategy produced good results for some datasets, the outcomes varied
depending on the dataset.
Using the dataset of ISIC 2018, Majtner et al. [48] created an ensemble of GoogleNet
and Vgg-16 models. The authors performed the data augmentation and normalized its color
to build the ensemble approaches they offered. The accuracy of the suggested method was
80.1%. Alquran et al. [33] introduced an image-processing-based approach for detecting,
extracting, and classifying tumors from dermoscopy pictures, which aided in the diagnosis
of benign and melanoma skin cancer significantly. The SVM classifier’s results showed an
accuracy of 92.1%. Lopez et al. [49] described a deep-learning-based strategy to handle
the problem of identifying a dermoscopic image that included a skin tumor as malignant
and benign, with a focus on the difficulty of skin cancer classification, especially initial
melanoma detection. The proposed solution employed the transfer learning approach that
was based on the VGGNet CNN architecture. The proposed method obtained an accuracy
level of 81.3% in the ISIC dataset, according to encouraging testing results. A linear classifier
was built by Kawahara et al. [50] using a dataset of 1300 pictures and features collected
by CNN to detect skin cancer. The method does not need skin lesion segmentation or
preprocessing. They conducted classifications of five and ten classes, and their respective
accuracy rates were 85.8% and 81.9%. Codella et al. [51] employed sparse coding, SVM, and deep learning to obtain an accuracy of 93.1% when evaluating recorded photos from the ISIC; these images represented the BKL, MEL, and NV classes. Krishnaraj et al. [52] designed
machine learning [53–56] classifiers that identified binary classes of cervical cancer, such
as adenosquamous carcinoma and SCC. They collected the dataset at the University of
California, Irvine (UCI) repository, and the Borderline-SMOTE approach was employed
to balance the unbalanced data. They obtained 98% accuracy through this dataset. Imran
et al. [57] proposed a model that was based on deep CNN by using different layers and filter
sizes. They used three different publicly available datasets: ISIC-2017, ISIC-2018, and ISIC-
2019. In the ISIC-2017 dataset, they employed 2750 images that consisted of three labels:
MEL, BKL, and NV. The ISIC-2018 dataset contains seven labels, in which 10,015 images
were used, whereas the ISIC-2019 dataset implemented eight labels that contain a total
number of 25,331 images. The accuracy rate of the ISIC-2017 dataset was 93.47%, while
88.75% and 89.58% accuracies were achieved by ISIC-2018 and ISIC-2019, respectively.
According to the above literature, it is extremely clear that a need still exists for a model with the ability to detect the four different types of skin cancer with greater accuracy
than current modalities. Although [29–31,39,47,49] performed a binary class classification
of skin cancer, many other researchers were not able to handle multiclass classification
with more successful outcomes. For multiclass skin cancer detection, the previous methods
proposed in [40–48] were also unsuccessful at attaining a greater accuracy. Automated skin
cancer classification in dermoscopic images is a challenging task due to high intraclass
variance and interclass visual similarity. Furthermore, the presence of external and inherent
artifacts and contrast between the affected and normal skin make it extremely difficult
for the multiclassification of skin cancers. The proposed method overcomes the existing
challenges, and effectively classifies the lesion into the four primary classes of skin cancer,
MEL, SCC, BCC, and MN, with high efficiency.
Figure 2. Original image samples of skin cancer extracted from three datasets.

Table 2. Image samples of skin cancer are distributed before up-sampling. (Columns: No. of Classes, Class Name, No. of Images.)

3.4. Proposed Model
This section contains a complete description of the proposed DSCC_Net model.

Table 3. Image samples of the Skin Cancer dataset are distributed after up-sampling.

invariance, a CNN can identify the same feature in multiple images regardless of where it occurs in the images [71–73]. In this study, we developed a robust DSCC_Net based on the CNN model to accurately classify skin cancer diseases. The DSCC_Net model consists of 5 convolutional blocks, and also includes a Rectified Linear Unit (ReLU) activation function, 1 dropout layer, 2 dense layers, and a softmax classification layer, as illustrated in Figure 4. Table 3 provides an overview of the dataset after the up-sampling technique, while a detailed explanation of the suggested DSCC_Net model for the categorization of skin cancer with the succeeding layers is presented in Table 4.
Filter Size (FS) = fw × fh (1)

where fw denotes the width of the filter and fh denotes the height of the filter. In our study, we set the size of the filter to 3, so Equation (1) becomes FS = 3 × 3. Feature identifiers are another name for these filters, and enable us to understand low-level visual aspects, such as edges and curves [74].
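The filter-size relation in Equation (1) also fixes how many trainable parameters each convolutional layer contributes: one fw × fh kernel per (input channel, output channel) pair, plus one bias per output channel. A small sketch of that arithmetic (the channel counts below are illustrative assumptions, not the paper's actual configuration):

```python
def conv2d_params(f_w, f_h, c_in, c_out, bias=True):
    """Trainable parameters of a 2D convolutional layer: one f_w x f_h
    kernel per (input channel, output channel) pair, plus one bias per
    output channel."""
    filter_size = f_w * f_h          # Equation (1): FS = f_w x f_h
    return filter_size * c_in * c_out + (c_out if bias else 0)

# Illustrative (hypothetical) channel counts for a chain of 3x3 conv layers
print(conv2d_params(3, 3, 3, 32))    # first conv on an RGB input: 896
print(conv2d_params(3, 3, 32, 64))   # second conv: 18496
```

Summing such per-layer counts over all convolutional and dense layers is how a total such as the 1,149,524 trainable parameters reported for DSCC_Net is obtained.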
3.4.3. Flattened Layer
This layer is located between the convolution and dense layers. Tensor data types are used as inputs for the convolution layers, whereas dense layers demand a one-dimensional layout. So, the flattened layer was applied to translate the two-dimensional image representation into a one-dimensional input, which is presented in Figure 5.
Figure 5. The fundamental structure of the flattened layer.
3.4.4. Dropout Layer
Our model utilized this layer with a dropout value of 0.2. This value was implemented in order to prevent the overfitting of our proposed DSCC_Net model [74]. The purpose of this layer was to switch units on and off to decrease the model’s training time and the complexity of the model. Consequently, the model learns the relevant features.
3.4.5. Dense Block of Proposed DSCC_Net
In this research, we apply 2 dense blocks that consist of an activation function, which is explained in the following sections.

ReLU Function
Activation functions, which are mathematical processes, determine whether or not neural output should be passed on to the next layer. In general, they enable and disable the network nodes. Many activation functions are used in DL classifiers, but we applied ReLU due to its uncomplicated and time-saving computation. The activation of ReLU works by replacing all negative outcomes with zero. This activation function was used on the outputs of the convolutional layer.

Dense Layer
The dense layer accepts a single matrix as input and generates output according to its characteristics. In these layers, images are identified and given a class label. A dense layer with 4 neurons and a SoftMax activation function is responsible for generating the model’s final output, which classifies the image into one of the four skin cancer disease classes: MEL, BCC, SCC, and MN. SoftMax, a probability-based activation function in which the number of neurons equals the total number of classes, is applied after the final layers [69]. The model has 1,149,524 parameters in total, all of which are trainable (zero non-trainable parameters).
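Softmax probabilities like those produced by this final dense layer are trained against the categorical cross-entropy loss that this study reports in its results: the negative log-probability the model assigns to the true class, averaged over samples. A small numpy sketch with illustrative probabilities (not the paper's actual outputs):

```python
import numpy as np

def categorical_cross_entropy(y_true, y_pred, eps=1e-12):
    """Mean categorical cross-entropy: -sum(y_true * log(y_pred)) per
    sample, averaged over samples. y_true is one-hot; y_pred holds class
    probabilities (e.g., softmax outputs)."""
    y_pred = np.clip(y_pred, eps, 1.0)     # avoid log(0)
    return float(-np.sum(y_true * np.log(y_pred), axis=1).mean())

# One sample per class (MEL, BCC, SCC, MN), illustrative predictions
y_true = np.eye(4)
y_pred = np.array([[0.7, 0.1, 0.1, 0.1],
                   [0.1, 0.7, 0.1, 0.1],
                   [0.1, 0.1, 0.7, 0.1],
                   [0.1, 0.1, 0.1, 0.7]])
loss = categorical_cross_entropy(y_true, y_pred)
print(round(loss, 4))  # -ln(0.7) ≈ 0.3567
```

Only the probability assigned to the true class contributes, so more confident correct predictions drive the loss toward zero.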
Accuracy = (TP + TN) / (TP + TN + FP + FN) (2)

Precision = TP / (TP + FP) (3)

Recall = TP / (TP + FN) (4)

F1-score = 2 × (Precision × Recall) / (Precision + Recall) (5)
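Equations (2)–(5) can be computed directly from the confusion-matrix counts; a minimal sketch, using made-up counts for illustration (not the paper's experimental results):

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, precision, recall, and F1-score from confusion-matrix
    counts, following Equations (2)-(5)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Illustrative counts (hypothetical, not from this study)
acc, prec, rec, f1 = classification_metrics(tp=90, tn=85, fp=10, fn=15)
print(acc, prec, rec, f1)
```

For multi-class evaluation such as the four-class task here, these quantities are typically computed per class (one-vs-rest) and then averaged.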
Figure 6. Remarkable accuracy improvement with or without SMOTE Tomek in the proposed model compared to other baseline deep networks; (a) Vgg-16, (b) Vgg-19, (c) EfficientNet-B0, (d) ResNet-152, (e) Inception-V3, (f) MobileNet, (g) Proposed Model with SMOTE Tomek, and (h) Proposed Model without SMOTE Tomek.
Figure 7. Results of the proposed DSCC_Net model with and without up-sampling; (a) Vgg-16, (b) Vgg-19, (c) EfficientNet-B0, (d) ResNet-152, (e) Inception-V3, (f) MobileNet, (g) Proposed Model with SMOTE Tomek, and (h) Proposed Model without SMOTE Tomek.
Figure 8. Precision results of the proposed model, DSCC_Net, and other baseline models; (a) Vgg-16, (b) Vgg-19, (c) EfficientNet-B0, (d) ResNet-152, (e) Inception-V3, (f) MobileNet, (g) Proposed Model with SMOTE Tomek, and (h) Proposed Model without SMOTE Tomek.
4.6. F1-Score Comparison with Recent Deep Models
The proposed DSCC_Net model with SMOTE Tomek and DSCC_Net without SMOTE Tomek achieved F1-score values of 93.93% and 58.09%, respectively. Additionally, the six baseline models, ResNet-152, EfficientNet-B0, Vgg-19, Inception-V3, MobileNet, and Vgg-16, attained F1-score values of 89.27%, 89.31%, 91.71%, 91.76%, 92.17%, and 91.13%, respectively, as illustrated in Figure 10. The suggested DSCC_Net model attained its highest F1-score value with SMOTE Tomek, as shown in Figure 10.
4.7. Comparison of Proposed Model with Other Models Using Loss
Loss functions are responsible for calculating the numerical difference between the predicted and actual values. In this study, a categorical cross-entropy method was utilized to calculate the loss. When the model was trained using up-sampled photos, however, the results were more remarkable. The proposed DSCC_Net model with and without SMOTE Tomek attained loss values of 0.1677 and 0.4332, whereas ResNet-152, EfficientNet-B0, Vgg-19, MobileNet, Vgg-16, and Inception-V3 achieved loss values of 0.2613, 0.2896, 0.2353, 0.2525, 0.2279, and 0.2189, respectively. Figure 11 shows the major enhancement in the loss value of the suggested DSCC_Net model with SMOTE Tomek.

Figure 11. Loss value of the proposed DSCC_Net model and other baseline models; (a) Vgg-16, (b) Vgg-19, (c) EfficientNet-B0, (d) ResNet-152, (e) Inception-V3, (f) MobileNet, (g) Proposed Model with SMOTE Tomek, and (h) Proposed Model without SMOTE Tomek.
4.8.
4.8. ROC
ROC Compared
Compared with
with Recent
Recent Model
Model
ROC analysis was performed to evaluate the effectiveness of the diagnostic tests and, most specifically, the reliability of the binary or multi-classifier. A receiver operating characteristic (ROC) curve's AUC is used to evaluate the effectiveness of a classifier; a higher AUC indicates that the classifier is more effective. Using the dataset, we evaluated the reliability of our proposed DSCC_Net model in terms of the ROC curve, both with and without SMOTE Tomek. This curve was used to compare the proposed DSCC_Net model, with and without SMOTE Tomek, to six baseline models on the same dataset. The suggested DSCC_Net with and without SMOTE Tomek, Vgg-19, Inception-V3, MobileNet, ResNet-152, Vgg-16, and EfficientNet-B0 attained ROC values of 0.9861, 0.9145, 0.9711, 0.9742, 0.9818, 0.9778, 0.9759, and 0.9572, respectively, as shown in Figure 12. In the ROC curve, a significant enhancement of the suggested DSCC_Net model's performance with SMOTE Tomek can be seen in Figure 12.
Figure 13. AU(ROC) curve evaluation with extension for the proposed model and other models; (a) Vgg-16, (b) Vgg-19, (c) EfficientNet-B0, (d) ResNet-152, (e) Inception-V3, (f) MobileNet, (g) Proposed Model with SMOTE Tomek, and (h) Proposed Model without SMOTE Tomek.
4.10. Comparison of DSCC_Net with Six Models Using a Confusion Matrix
To validate our suggested DSCC_Net model with a confusion matrix, we compared it with six models. The use of SMOTE Tomek results in significant improvements for the DSCC_Net model, as presented in Figure 14.
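A multi-class confusion matrix of the kind shown in Figure 14 can be tallied with a few lines of code (a hedged sketch: the class indices and predictions below are hypothetical, not the reported results):

```python
import numpy as np

CLASSES = ["BCC", "MEL", "SCC", "MN"]

def confusion_matrix(y_true, y_pred, n_classes=4):
    """Rows are actual classes, columns are predicted classes."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Hypothetical label indices: 0=BCC, 1=MEL, 2=SCC, 3=MN.
y_true = [0, 0, 1, 1, 2, 3, 3, 3]
y_pred = [0, 1, 1, 1, 2, 3, 3, 0]
cm = confusion_matrix(y_true, y_pred)

# Per-class recall = diagonal / row sums; overall accuracy = trace / total.
recall = cm.diagonal() / cm.sum(axis=1)
accuracy = cm.trace() / cm.sum()
```

Off-diagonal entries pinpoint which skin cancer classes are confused with which, which is the diagnostic value a confusion matrix adds over a single accuracy number.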
Figure 15. Grad-CAM evaluation of the proposed DSCC_Net model for skin cancer diseases.
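Grad-CAM produces heat maps like those in Figure 15 by weighting each feature map of the last convolutional layer with its spatially averaged gradient and keeping only the positive contributions. The combination step can be sketched framework-free (the activation and gradient arrays below are random stand-ins for what a real CNN backward pass would supply):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM combination step.

    activations: feature maps of the last conv layer, shape (K, H, W)
    gradients:   d(class score)/d(activations), same shape
    Returns an (H, W) map normalized to [0, 1].
    """
    # Weight of each of the K channels = global average of its gradient.
    weights = gradients.mean(axis=(1, 2))             # shape (K,)
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum -> (H, W)
    cam = np.maximum(cam, 0.0)                        # ReLU: keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalize for display
    return cam

rng = np.random.default_rng(0)
acts = rng.random((8, 7, 7))     # stand-in feature maps
grads = rng.random((8, 7, 7))    # stand-in gradients
heatmap = grad_cam(acts, grads)  # upsample to the image size before overlaying
```

In a real pipeline the coarse map is bilinearly upsampled to the dermoscopic image resolution and overlaid, highlighting the lesion regions that drove the class prediction.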
Table 6. Comparison of the DSCC_Net model with recent state-of-the-art studies.

Ref    Year   Model                        Datasets                         Accuracy   Recall   Precision   F1-Score
[70]   2023   CNN                          ISIC-2017                        92.00%     91.90%   91.65%      91.99%
[71]   2023   Vgg-13                       ISIC-2019, Derm-IS               89.57%     90.70%   89.66%      89.65%
[72]   2023   Deep Belief Network          HAM-10000                        93.00%     92.91%   92.45%      92.65%
[73]   2021   ConvNet                      ISIC-2018, Derm-IS               86.90%     86.14%   87.47%      -
[74]   2022   2D superpixels + RCNN        HAM-10000                        85.50%     83.40%   84.50%      85.30%
[75]   2021   ResNeXt101                   ISIC-2019                        88.50%     87.40%   88.10%      88.30%
[76]   2022   SCDNet                       ISIC-2019                        92.91%     92.18%   92.19%      92.18%
Ours   -      DSCC_Net with SMOTE Tomek    ISIC-2020, Derm-IS, HAM-10000    94.17%     93.76%   94.28%      93.93%
4.12. Discussions

The identification and categorization of a wide range of skin cancers may be accomplished with the use of dermoscopy photographs [32–35]. Our method offers a full view of a particular site, which enables us to identify the disease, as well as interior areas that have been infected with it. Dermoscopy is the most reliable [41] and time-effective [52–59] approach for determining if a lesion is a BCC, MEL, SCC, or MN. A computerized diagnostic approach is required to identify BCC, MEL, SCC, and MN, since the number of confirmed cases of deadly skin cancer is continually growing [62]. Dermoscopy images might be able to automatically differentiate between those who have MEL and those who have other types of skin cancer, by using methods from the field of DL [64–72]. As a direct result of this, we developed a DSCC_Net model that is based on DL and is capable of accurately
diagnosing a wide variety of skin diseases. These diseases include BCC, MEL, SCC, and
MN, and the model enables dermatologists to begin treatment for their patients at an
earlier stage. The three publicly available benchmark datasets (i.e., ISIC 2020, HAM10000,
and DermIS) were used to evaluate the performance of the proposed DSCC_Net model.
The results of the proposed model were compared with six baseline models: ResNet-152,
Vgg-16, Vgg-19, Inception-V3, EfficientNet-B0, and MobileNet. The images obtained from the datasets are imbalanced, as discussed in Table 2. Imbalanced classes affect the performance of a model during training [77–82]. To overcome this issue, we used the SMOTE Tomek technique to increase the number of images in the minority class
of the datasets [49]. According to Figure 6, our proposed DSCC_Net model has received
sufficient training on the four subtypes of skin cancer (BCC, MEL, SCC, and MN), and it can
correctly identify occurrences of infection with these subtypes. Compared to the other six
baseline skin cancer classifiers, our DSCC_Net model performs much better in classifying
skin cancers, as discussed in Table 5. The DSCC_Net model using the SMOTE Tomek tech-
nique obtained an accuracy of 94.17%, regarding the categorization of dermoscopy pictures
of BCC, MEL, SCC, and MN. Additionally, the DSCC_Net model used without SMOTE
Tomek technique achieved an accuracy of 83.20%. On the other hand, the Vgg-16 model
attained an accuracy of 89.12%. Similarly, the Vgg-19 and MobileNet models achieved accuracies of 92.51% and 91.46%, respectively. The ResNet-152 model's performance was
poor in skin cancer classification as compared to all baseline models. Furthermore, we
also provide the GRAD-CAM evaluation of the proposed DSCC_Net model for skin cancer
disease classification as shown in Figure 15.
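The SMOTE half of SMOTE Tomek oversamples a minority class by interpolating between a minority sample and one of its minority-class nearest neighbours; Tomek links are then removed to clean the class boundary. A minimal sketch of the interpolation step, using toy feature vectors rather than dermoscopic images:

```python
import numpy as np

def smote_samples(X_min, n_new, k=3, seed=0):
    """Generate n_new synthetic samples from minority-class points X_min.

    Each synthetic point lies on the segment between a random minority
    sample and one of its k nearest minority-class neighbours.
    """
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        # Distances from X_min[i] to all minority points; index 0 of the
        # sort is the point itself, so it is excluded below.
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]
        j = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        out.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(out)

# Toy minority class with 4 feature vectors; request 6 synthetic samples.
X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
X_syn = smote_samples(X_min, n_new=6)
```

In practice, a maintained implementation such as imbalanced-learn's SMOTETomek, which performs both the oversampling and the Tomek-link cleaning, is preferable to a hand-rolled version.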
Table 6 presents the classification performance of the proposed DSCC_Net model with
SOTA classifiers. Zhou et al. [70] proposed a DL model that achieved a classification accu-
racy of 0.92. Qasim et al. [71] designed a novel model, Vgg-13, for skin cancer identification.
They achieved a skin cancer detection accuracy of 89.57%. A ConvNet model that
focuses on the binary categorization of skin diseases was provided by Mijwil et al. [73].
This model was based on Inception-V3. By using this model, benign and malignant forms
of skin cancer are distinguished. The multiclassification of skin lesions was performed by
Afza et al. [74], by using 2D superpixels with ResNet-50, and they reached an accuracy of
85.50%. In addition, Khan et al. [75] attained a precision of 88.50% when performing the
multiclassification of skin cancer. When compared to other approaches that are considered
to be SOTA, the DSCC_Net model obtained an impressive accuracy of 94.17%.
5. Conclusions
In this study, the proposed DSCC_Net model, used for identifying the four forms
of skin cancer (BCC, MEL, SCC, and MN), was developed and evaluated. Today, these
skin cancer diseases are rapidly spreading and affect communities globally. Many deaths
have occurred because of improper and slow testing procedures, limited facilities, and the
lack of diagnosis of skin cancer at an early stage. Due to a large number of cases, a rapid
and effective testing procedure is necessary. We proposed a DSCC_Net model to identify
the four types of skin cancer diseases. Each convolutional block of the modified structure
was generated using multiple layers and was applied in order to classify early-stage skin
cancers. The SMOTE Tomek algorithm was used to generate samples that were used to
solve dataset imbalance problems and to maintain a balance in the number of samples for
each class. Grad-CAM displays a heat map of class activation to illustrate the operation of
the CNN layer. Our proposed DSCC_Net model achieved 94.17% accuracy, 93.76% recall, 93.93% F1-score, 94.28% precision, and 99.43% AUC. It is therefore concluded that the DSCC_Net model can play a significant supporting role for medical professionals. A limitation of the study is that our proposed DSCC_Net model is suitable only for fair-skinned individuals. Individuals with dark skin were not considered in this study. The
reason is that the publicly available datasets used in this work contain skin cancer images
of fair-toned skin. In the future, we will combine blockchain and federated learning with a
deep attention module to obtain more favorable results in classifying skin cancer, as well as
skin infections.
Author Contributions: Conceptualization, M.T., A.N. and H.M.; methodology, M.T., A.N. and H.M;
validation, R.A.N., J.T., S.-W.L. and H.M.; formal analysis, A.N. and S.-W.L.; investigation, A.N. and
R.A.N.; resources, A.N., J.T. and H.M; data curation, R.A.N.; writing—original draft preparation,
A.N., H.M; writing—review and editing, A.N., H.M. and R.A.N.; visualization, J.T., H.M; supervision,
H.M., R.A.N. and S.-W.L.; funding acquisition, S.-W.L. All authors have read and agreed to the
published version of the manuscript.
Funding: This work was supported by a national research foundation (NRF) grant funded by the
Ministry of Science and ICT (MSIT), South Korea through the Development Research Program
(NRF2021R1I1A2059735 and NRF2022R1G1A1010226).
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. World Health Organization Radiation: Ultraviolet (UV) Radiation and Skin Cancer|How Common Is Skin Cancer. Avail-
able online: https://www.who.int/news-room/q-a-detail/radiation-ultraviolet-(uv)-radiation-and-skin-cancer# (accessed on
2 March 2023).
2. Piccialli, F.; Di Somma, V.; Giampaolo, F.; Cuomo, S.; Fortino, G. A survey on deep learning in medicine: Why, how and when?
Inf. Fusion 2021, 66, 111–137. [CrossRef]
3. Navid, R.; Ashourian, M.; Karimifard, M.; Estrela, V.V.; Loschi, H.J.; Nascimento, D.D.; França, R.P.; Vishnevski, M. Computer-
aided diagnosis of skin cancer: A review. Curr. Med. Imaging 2020, 16, 781–793.
4. Ahmad, N.; Farooq, M.S.; Khelifi, A.; Abid, A. Malignant melanoma classification using deep learning: Datasets, performance
measurements, challenges and opportunities. IEEE Access 2020, 8, 110575–110597.
5. O’Sullivan, D.E.; Brenner, D.R.; Demers, P.A.; Villeneuve, P.J.; Friedenreich, C.M.; King, W.D. Indoor tanning and skin cancer in
Canada: A meta-analysis and attributable burden estimation. Cancer Epidemiol. 2019, 59, 1–7. [CrossRef] [PubMed]
6. Walters-Davies, R. Skin cancer: Types, diagnosis and prevention. Evaluation 2020, 14, 34.
7. Hodis, E. The Somatic Genetics of Human Melanoma. Ph.D. Thesis, Harvard University, Cambridge, MA, USA, 2018.
8. Nathan, N.; Hubbard, M.; Nordmann, T.; Sperduto, P.W.; Clark, H.B.; Hunt, M.A. Effect of gamma knife radiosurgery and
programmed cell death 1 receptor antagonists on metastatic melanoma. Cureus 2017, 9, e1943.
9. Ahmad, N.; Anees, T.; Naqvi, R.A.; Loh, W.-K. A comprehensive analysis of recent deep and federated-learning-based method-
ologies for brain tumor diagnosis. J. Pers. Med. 2022, 12, 275.
10. Rogers, H.W.; Weinstock, M.A.S.; Feldman, R.; Coldiron, B.M. Incidence estimate of non-melanoma skin cancer (keratinocyte
carcinomas) in the US population 2012. JAMA Dermatol. 2015, 151, 1081–1086. [CrossRef] [PubMed]
11. Bomm, L.; Benez, M.D.V.; Maceira, J.M.P.; Succi, I.C.B.; Scotelaro, M.D.F.G. Biopsy guided by dermoscopy in cutaneous pigmented
lesion-case report. An. Bras. Dermatol. 2013, 88, 125–127. [CrossRef] [PubMed]
12. Kato, J.; Horimoto, K.; Sato, S.; Minowa, T.; Uhara, H. Dermoscopy of melanoma and non-melanoma skin cancers. Front. Med.
2019, 6, 180. [CrossRef]
13. Haenssle, H.A.; Fink, C.; Schneiderbauer, R.; Toberer, F.; Buhl, T.; Blum, A.; Kalloo, A.; Hadj, H.A.B.; Thomas, L.; Enk, A.; et al.
Reader study level-I and level-II Groups, Man against machine: Diagnostic performance of a deep learning convolutional neural
network for dermoscopic melanoma recognition in comparison to 58 dermatologists. Ann. Oncol. 2018, 29, 1836–1842. [CrossRef]
14. Ibrahim, H.; El-Taieb, M.; Ahmed, A.; Hamada, R.; Nada, E. Dermoscopy versus skin biopsy in diagnosis of suspicious skin
lesions. Al-Azhar Assiut Med. J. 2017, 15, 203. [CrossRef]
15. Duggani, K.; Nath, M.K. A technical review report on deep learning approach for skin cancer detection and segmentation. Data
Anal. Manag. Proc. ICDAM 2021, 54, 87–99.
16. Carli, P.; Quercioli, E.; Sestini, S.; Stante, M.; Ricci, L.; Brunasso, G.; DeGiorgi, V. Pattern analysis, not simplified algorithms, is the most reliable method for teaching dermoscopy for melanoma diagnosis to residents in dermatology. Br. J. Dermatol. 2003, 148,
981–984. [CrossRef] [PubMed]
17. Carrera, C.; Marchetti, M.A.; Dusza, S.W.; Argenziano, G.; Braun, R.P.; Halpern, A.C.; Jaimes, N.; Kittler, H.J.; Malvehy, J.; Menzies,
S.W.; et al. Validity and reliability of dermoscopic criteria used to differentiate nevi from melanoma: A web-based international
dermoscopy society study. JAMA Dermatol. 2016, 152, 798–806. [CrossRef] [PubMed]
18. Celebi, M.E.; Kingravi, H.A.; Uddin, B.; Iyatomi, H.; Aslandogan, Y.A.; Stoecker, W.V.; Moss, R.H. A methodological approach to
the classification of dermoscopy images. Comput. Med. Imaging Graph. 2007, 31, 362–373. [CrossRef]
19. Maglogiannis, I.; Doukas, C.N. Overview of advanced computer vision systems for skin lesions characterization. IEEE Trans. Inf.
Technol. Biomed. 2009, 13, 721–733. [CrossRef]
20. Celebi, M.E.; Iyatomi, H.; Stoecker, W.V.; Moss, R.H.; Rabinovitz, H.S.; Argenziano, G.; Soyer, H.P. Automatic detection of
blue-white veil and related structures in dermoscopy images. Comput. Med. Imaging Graph. 2008, 32, 670–677. [CrossRef]
[PubMed]
21. Hassaan, M.; Anees, T.; Din, M.; Ahmad, N. CDC_Net: Multi-classification convolutional neural network model for detection
of COVID-19, pneumothorax, pneumonia, lung Cancer, and tuberculosis using chest X-rays. Multimed. Tools Appl. 2022, 82,
13855–13880.
22. Lu, S.; Lu, Z.; Zhang, Y.D. Pathological brain detection based on AlexNet and transfer learning. J. Comput. Sci. 2019, 30, 41–47.
[CrossRef]
23. Ahmad, N.; Anees, T.; Ahmed, K.T.; Naqvi, R.A.; Ahmad, S.; Whangbo, T. Deep learned vectors’ formation using auto-correlation,
scaling, and derivations with CNN for complex and huge image retrieval. Complex Intell. Syst. 2022, 4, 1–23.
24. Sajjad, M.; Khan, S.; Muhammad, K.; Wu, W.; Ullah, A.; Baik, S.W. Multi-grade brain tumor classification using deep CNN with
extensive data augmentation. J. Comput. Sci. 2019, 30, 174–182. [CrossRef]
25. Alom, M.Z.; Aspiras, T.; Taha, T.M.; Asari, V.K. Skin cancer segmentation and classification with improved deep convolutional
neural network. In Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications; International Society for
Optics and Photonics: Bellingham, WA, USA, 2020; Volume 11318, p. 1131814.
26. Esteva, A.; Kuprel, B.; Novoa, R.A.; Ko, J.; Swetter, S.M.; Blau, H.M.; Thrun, S. Dermatologist-level classification of skin cancer
with deep neural networks. Nature 2017, 542, 115–118. [CrossRef]
27. Polat, K.; Koc, K.O. Detection of skin diseases from dermoscopy image using the combination of convolutional neural network
and one-versus-all. J. Artif. Intell. Syst. 2020, 2, 80–97. [CrossRef]
28. Ratul, M.A.R.; Mozaffari, M.H.; Lee, W.S.; Parimbelli, E. Skin lesions classification using deep learning based on dilated
convolution. BioRxiv 2020, 860700. [CrossRef]
29. Ranpreet, K.; GholamHosseini, H.; Sinha, R.; Lindén, M. Melanoma classification using a novel deep convolutional neural
network with dermoscopic images. Sensors 2022, 22, 1134.
30. Javed, R.; Ishfaq, M.; Ali, G.; Saeed, M.R.; Hussain, M.; Alkhalifah, T.; Alturise, F.; Samand, N. Skin cancer disease detection using
transfer learning technique. Appl. Sci. 2022, 12, 5714.
31. Shahin, A.; Miah, S.; Haque, J.; Rahman, M.; Islam, K. An enhanced technique of skin cancer classification using deep convolutional
neural network with transfer learning models. Mach. Learn. Appl. 2021, 5, 100036.
32. Tanzila, S.; Khan, M.A.; Rehman, A.; Marie-Sainte, S.L. Region extraction and classification of skin cancer: A heterogeneous
framework of deep CNN features fusion and reduction. J. Med. Syst. 2019, 43, 289.
33. Hiam, A.; Qasmieh, I.A.; Alqudah, A.M.; Alhammouri, S.; Alawneh, E.; Abughazaleh, A.; Hasayen, F. The melanoma skin
cancer detection and classification using support vector machine. In Proceedings of the 2017 IEEE Jordan Conference on Applied
Electrical Engineering and Computing Technologies (AEECT), Amman, Jordania, 11–13 October 2017; pp. 1–5.
34. Hardik, N.; Singh, S.P. Deep learning solutions for skin cancer detection and diagnosis. Mach. Learn. Health Care Perspect. Mach.
Learn. Healthc. 2020, 13, 159–182.
35. Duggani, K.; Venugopal, V.; Nath, M.K.; Mishra, M. Hybrid convolutional neural networks with SVM classifier for classification
of skin cancer. Biomed. Eng. Adv. 2023, 5, 100069.
36. Gilani, Q.; Syed, S.T.; Umair, M.; Marques, O. Skin Cancer Classification Using Deep Spiking Neural Network. J. Digit. Imaging
2023, 1–11.
37. Ioannis, K.; Perikos, I.; Hatzilygeroudis, I.; Virvou, M. Deep learning methods for accurate skin cancer recognition and mobile
application. Electronics 2022, 11, 1294.
38. Ghadah, A.; Gouda, W.; Humayun, M.; Sama, N.U. Melanoma Detection Using Deep Learning-Based Classifications. Healthcare
2022, 10, 2481.
39. Khalil, A.; Turki, T. Automatic Classification of Melanoma Skin Cancer with Deep Convolutional Neural Networks. AI 2022, 3,
512–525.
40. Karar, A.; Shaikh, Z.A.; Khan, A.A.; Laghari, A.A. Multiclass skin cancer classification using EfficientNets—A first step towards
preventing skin cancer. Neurosci. Inform. 2022, 2, 100034. [CrossRef]
41. Naseer, B.M.; Muta, K.; Malik, M.I.; Siddiqui, S.A.; Braun, S.A.; Homey, B.; Dengel, A.; Ahmed, S. Computer-aided diagnosis of
skin diseases using deep neural networks. Appl. Sci. 2020, 10, 2488.
42. Adi, N.A.; Slamet, I.S. Skins cancer identification system of HAMl0000 skin cancer dataset using convolutional neural network.
AIP Conf. Proc. 2019, 2202, 020039.
43. Moldovan, D. Transfer learning based method for two-step skin cancer images classification. In Proceedings of the 2019 E-Health
and Bioengineering Conference (EHB), Iasi, Romania, 21–23 November 2019; pp. 1–4.
44. Le Duyen, N.T.; Hieu, X.L.; Lua, T.N.; Hoan, T.N. Transfer learning with class-weighted and focal loss function for automatic skin
cancer classification. arXiv 2020, arXiv:2009.05977.
45. Saksham, B.; Gomekar, A. Deep learning diagnosis of pigmented skin lesions. In Proceedings of the 2019 10th International
Conference on Computing, Communication and Networking Technologies (ICCCNT), Kanpur, India, 6–8 July 2019; pp. 1–6.
46. Emrah, Ç.; Zengin, K. Classification of skin lesions in dermatoscopic images with deep convolution network. Avrupa Bilim Ve
Teknol. Derg. 2019, 6, 309–318.
47. Hasan, M.; Barman, S.D.; Islam, S.; Reza, A.W. Skin cancer detection using convolutional neural network. In Proceedings of the
2019 5th International Conference on Computing and Artificial Intelligence, Bali, Indonesia, 19–22 April 2019; pp. 254–258.
48. Tomáš, M.; Bajić, B.; Yildirim, S.; Hardeberg, J.Y.; Lindblad, J.; Sladoje, N. Ensemble of convolutional neural networks for
dermoscopic images classification. arXiv 2018, arXiv:1808.05071.
49. Lopez; Romero, A.; Giro-i-Nieto, X.; Burdick, J.; Marques, O. Skin lesion classification from dermoscopic images using deep
learning techniques. In Proceedings of the 2017 13th IASTED International Conference on Biomedical Engineering (BioMed),
Innsbruck, Austria, 20–21 February 2017; pp. 49–54.
50. Jeremy, K.; BenTaieb, A.; Hamarneh, G. Deep features to classify skin lesions. In Proceedings of the 2016 IEEE 13th International
Symposium on Biomedical Imaging (ISBI), Prague, Czech Republic, 13–16 April 2016; pp. 1397–1400.
51. Noel, C.; Cai, J.; Abedini, M.; Garnavi, R.; Halpern, A.; Smith, J.R. Deep learning, sparse coding, and SVM for melanoma
recognition in dermoscopy images. In Proceedings of the Machine Learning in Medical Imaging: 6th International Workshop,
MLMI 2015, Held in Conjunction with MICCAI 2015, Munich, Germany, 5 October 2015; pp. 118–126.
52. Chadaga, K.; Prabhu, S.; Sampathila, N.; Chadaga, R.; Sengupta, S. Predicting cervical cancer biopsy results using demographic
and epidemiological parameters: A custom stacked ensemble machine learning approach. Cogent Eng. 2022, 9, 2143040. [CrossRef]
53. Sampathila, N.; Chadaga, K.; Goswami, N.; Chadaga, R.P.; Pandya, M.; Prabhu, S.; Bairy, M.G.; Katta, S.S.; Bhat, D.; Upadya, S.P.
Customized Deep Learning Classifier for Detection of Acute Lymphoblastic Leukemia Using Blood Smear Images. Healthcare
2022, 10, 1812. [CrossRef] [PubMed]
54. Krishnadas, P.; Chadaga, K.; Sampathila, N.; Rao, S.; Prabhu, S. Classification of Malaria Using Object Detection Models.
Informatics 2022, 9, 76. [CrossRef]
55. Acharya, V.; Dhiman, G.; Prakasha, K.; Bahadur, P.; Choraria, A.; Prabhu, S.; Chadaga, K.; Viriyasitavat, W.; Kautish, S. AI-assisted
tuberculosis detection and classification from chest X-rays using a deep learning normalization-free network model. Comput.
Intell. Neurosci. 2022, 2022, 2399428. [CrossRef]
56. Khanna, V.V.; Chadaga, K.; Sampathila, N.; Prabhu, S.; Bhandage, V.; Hegde, G.K. A Distinctive Explainable Machine Learning
Framework for Detection of Polycystic Ovary Syndrome. Appl. Syst. Innov. 2023, 6, 32. [CrossRef]
57. Imran, I.; Younus, M.; Walayat, K.; Kakar, M.U.; Ma, J. Automated multi-class classification of skin lesions through deep
convolutional neural network with dermoscopic images. Comput. Med. Imaging Graph. 2021, 88, 101843.
58. WHO. Gastrointestinal Cancer. 2020. Available online: https://www.who.int/news-room/fact-sheets/detail/cancer (accessed
on 2 March 2023).
59. Yogapriya, J.; Venkatesan Chandran, M.G.; Sumithra, P.; Anitha, P.; Jenopaul, C.; Dhas, S.G. Gastrointestinal tract disease
classification from wireless endoscopy images using pretrained deep learning model. Comput. Math. Methods Med. 2021, 2021,
5940433. [CrossRef] [PubMed]
60. Laith, A.; Fadhel, M.A.; Al-Shamma, O.; Zhang, J.; Santamaría, J.; Duan, Y.; Oleiwi, S.R. Towards a better understanding of
transfer learning for medical imaging: A case study. Appl. Sci. 2020, 10, 4523.
61. Yixuan, Y.; Li, B.; Meng, M.Q.-H. Bleeding frame and region detection in the wireless capsule endoscopy video. IEEE J. Biomed.
Health Inform. 2015, 20, 624–630.
62. Naveen, S.; Zverev, V.I.; Keller, H.; Pane, S.; Egolf, P.W.; Nelson, B.J.; Tishin, A.M. Magnetically guided capsule endoscopy. Med.
Phys. 2017, 44, e91–e111.
63. Benjamin, J.S.; Ferdinand, J.R.; Clatworthy, M.R. Using single-cell technologies to map the human immune system—Implications
for nephrology. Nat. Rev. Nephrol. 2020, 16, 112–128.
64. Hui, H.; Wang, W.-Y.; Mao, B.-H. Borderline-SMOTE: A new over-sampling method in imbalanced data sets learning. In
Proceedings of the Advances in Intelligent Computing: International Conference on Intelligent Computing, ICIC 2005, Hefei,
China, 23–26 August 2005; pp. 878–887.
65. Vasileios, C.; Tsiligiri, A.; Hadjileontiadis, L.J.; Liatsos, C.N.; Mavrogiannis, C.C.; Sergiadis, G.D. Ulcer detection in wireless
capsule endoscopy images using bidimensional nonlinear analysis. In Proceedings of the XII Mediterranean Conference on
Medical and Biological Engineering and Computing 2010, Chalkidiki, Greece, 27–30 May 2010; pp. 236–239.
66. Ayidzoe, A.; Mighty; Yu, Y.; Mensah, P.K.; Cai, J.; Adu, K.; Tang, Y. Gabor capsule network with preprocessing blocks for the
recognition of complex images. Mach. Vis. Appl. 2021, 32, 91. [CrossRef]
67. Mohapatra, S.; Nayak, J.; Mishra, M.; Pati, G.K.; Naik, B.; Swarnkar, T. Wavelet transform and deep convolutional neural
network-based smart healthcare system for gastrointestinal disease detection. Interdiscip. Sci. Comput. Life Sci. 2021, 13, 212–228.
[CrossRef] [PubMed]
68. The ISIC 2020 Challenge Dataset. Available online: https://challenge2020.isic-archive.com/ (accessed on 2 March 2023).
69. Philipp, T.; Rosendahl, C.; Kittler, H. The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common
pigmented skin lesions. Sci. Data 2018, 5, 180161.
70. Dermtology Information System. Available online: http://www.dermis.net (accessed on 2 March 2023).
71. Sen, W.; Xing, Y.; Zhang, L.; Gao, H.; Zhang, H. Deep convolutional neural network for ulcer recognition in wireless capsule
endoscopy: Experimental feasibility and optimization. Comput. Math. Methods Med. 2019, 2019, 7546215.
72. Nature. Olympus. The Endocapsule 10 System. Olympus Homepage. 2021. Available online: https://www.olympus-europa.
com/medical/en/Products--and--Solutions/Products/Product/ENDOCAPSULE-10-System.html (accessed on 2 March 2023).
73. Fushuan, W.; David, A.K. A genetic algorithm based method for bidding strategy coordination in energy and spinning reserve
markets. Artif. Intell. Eng. 2001, 15, 71–79.
74. Hassaan, M.; Farooq, M.S.; Khelifi, A.; Abid, A.; Qureshi, J.N.; Hussain, M. A comparison of transfer learning performance versus
health experts in disease diagnosis from medical imaging. IEEE Access 2020, 8, 139367–139386.
75. Ling, W.; Wang, X.; Fu, J.; Zhen, L. A Novel Probability Binary Particle Swarm Optimization Algorithm and its Application.
J. Softw. 2008, 3, 28–35.
76. Yufei, Z.; Koyuncu, C.; Lu, C.; Grobholz, R.; Katz, I.; Madabhushi, A.; Janowczyk, A. Multi-site cross-organ calibrated deep
learning (MuSClD): Automated diagnosis of non-melanoma skin cancer. Med. Image Anal. 2023, 84, 102702.
77. Alam, T.M.; Shaukat, K.; Khan, W.A.; Hameed, I.A.; Almuqren, L.A.; Raza, M.A.; Aslam, M.; Luo, S. An Efficient Deep
Learning-Based Skin Cancer Classifier for an Imbalanced Dataset. Diagnostics 2022, 12, 2115. [CrossRef] [PubMed]
78. Manash, E.B.K.; Suhasini, A.; Satyala, N. Intelligent skin cancer diagnosis using adaptive k-means segmentation and deep
learning models. Concurr. Comput. Pract. Exp. 2023, 35, e7546.
79. Mijwil, M.M. Skin cancer disease images classification using deep learning solutions. Multimed. Tools Appl. 2021, 80, 26255–26271.
[CrossRef]
80. Farhat, A.; Sharif, M.; Mittal, M.; Khan, M.A.; Hemanth, D.J. A hierarchical three-step superpixels and deep learning framework
for skin lesion classification. Methods 2022, 202, 88–102.
81. Khan, A.M.; Akram, T.; Zhang, Y.-D.; Sharif, M. Attributes based skin lesion detection and recognition: A mask RCNN and
transfer learning-based deep learning framework. Pattern Recognit. Lett. 2021, 143, 58–66. [CrossRef]
82. Naeem, A.; Anees, T.; Fiza, M.; Naqvi, R.A.; Lee, S.-W. SCDNet: A Deep Learning-Based Framework for the Multiclassification of
Skin Cancer Using Dermoscopy Images. Sensors 2022, 22, 5652. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.