ABSTRACT Lung cancer is the most common cause of cancer deaths around the globe. Early detection is crucial for successful treatment and for increasing patient survival rates. Artificial intelligence techniques can play a significant role in the initial diagnosis of lung cancer, and various machine learning and deep learning methods are used to detect it. This research work aims to develop automated methods that accurately identify and classify lung cancer in CT scans using computational intelligence techniques. The process typically involves lobe segmentation, candidate nodule extraction, and classification of nodules as either cancer or non-cancer. The proposed lung cancer classification uses a modified U-Net-based lobe segmentation and nodule detection model consisting of three phases. The first phase segments the lobe from a CT slice and its predicted mask using a modified U-Net architecture, and the second phase extracts candidate nodules using the predicted mask and label, again employing a modified U-Net architecture. Finally, the third phase applies a modified AlexNet architecture with a support vector machine to classify candidate nodules as cancer or non-cancer. The experimental outcomes of the proposed methodology for lobe segmentation, candidate nodule extraction, and classification of lung cancer have shown promising results on the publicly available LUNA16 dataset. The modified AlexNet-SVM classification model achieves 97.98% accuracy, 98.84% sensitivity, 97.47% specificity, 97.53% precision, and a 97.70% F1-score for the classification of lung cancer.
INDEX TERMS AlexNet, nodule extraction, lung cancer, segmentation, support vector machine, U-Net.
[25], the dual-path lung nodule segmentation model based on boundary enhancement and hybrid transformer (DPBET) [26], DAS-Net [27], Lung_PAYNet [28], and LungNet-SVM [29] to improve the segmentation task in medical images. The mentioned networks apply the benchmark U-Net architecture and obtain different levels of accuracy, but there is still a need to enhance the accuracy of the segmentation process.
Image segmentation divides an image into different objects and boundaries. Medical image segmentation plays a decisive role in the diagnosis of several diseases through deep learning methods, and automated segmentation methods based on CT and MRI are increasingly in demand [30]. Deep learning networks mostly use encoder-decoder architectures and deep generative models for medical image segmentation. The U-Net-based model crops feature maps from the encoding component and copies them to the decoding component for segmentation map generation [31]. Pulmonary cancer nodules are detected by various researchers using different segmentation methods. Deep learning-based CAD solutions can decrease the burden on medical experts in detecting various diseases, particularly through lobe segmentation, nodule detection, and classification of lung cancer nodules. This research presents an automatic deep learning-based model that segments, detects, and classifies lung nodules, increases the accuracy rate, and reduces false positives while detecting lung nodules. Eventually, lung cancer detection at an initial stage will reduce the mortality rate.

II. LITERATURE REVIEW
Pulmonary nodule detection is a crucial task, and early detection of pulmonary cancer is needed to reduce the mortality rate and enable appropriate treatment. Various computational techniques are used to detect lung cancer, and several research methods have been reported in the literature. Therefore, we have analyzed the techniques below, including segmentation, classification, and detection of pulmonary cancerous nodules.
In the field of medical imaging, deep convolutional neural networks (DCNN) have made remarkable achievements. Long et al. [32] presented an end-to-end network based on a fully convolutional network, which is more accurate for image segmentation. Ronneberger et al. [33] introduced the U-Net architecture, consisting of an encoder-decoder with skip connections used to retain important information from feature maps of different sizes, and attained remarkable results in medical image segmentation tasks. Singadkar et al. [34] used a deep deconvolutional residual network (DDRN) on 2D CT lung images for automatic lung nodule segmentation; this model was trained end-to-end with fully captured resolution features. Fu et al. [35] presented a multi-task learning model consisting of a convolutional neural network (CNN) to segment 2D CT images. Their model used an arbitrary depth technique on entire nodule volumes, and a slice attention module was applied to drop irrelevant slices. Moreover, attribute and cross-attribute modules represented meaningful relationships between attributes. Bruntha et al. [28] suggested an inverted residual block used by the encoder and decoder to segment lung nodules. In their proposed Lung_PAYNet architecture, they applied a pyramid attention network to acquire dense features from the encoder and decoder. According to Liang et al. [36], segmentation uncertainty at the pulmonary nodule boundary is considered a challenge, and to overcome this challenge the authors presented an Uncertainty Analysis Based Attention UNet (UAA-UNet) model. The proposed network deals with uncertainty in edge regions and contains two stages: in the first stage, initial segmentation maps of pulmonary nodules are found, and uncertainty regions are focused on in the second stage. The UAA-UNet model achieved a sensitivity of 85.11% and a Dice of 86.89% for nodule segmentation. Wang et al. [37] designed a selective kernel V-Net architecture for the extraction of multi-scale feature information and improved lung nodule segmentation performance with a Dice of 0.796, Jaccard of 0.665, and sensitivity of 0.789.
He et al. [38] presented an ISHAP (Improved SHapley Additive exPlanations)-based model to classify lung nodules. Medical prior knowledge was used to extract semantic and radiomics features. The ISHAP explanation and a recursive feature elimination algorithm were applied to guide important features and classifiers with their parameters. The ISHAP-based model was then utilized to classify pulmonary nodules into cancer and non-cancer on the LIDC dataset and obtained 0.873, 0.885, and 0.862 in accuracy, specificity, and sensitivity, respectively. Huidrom et al. [39] focused on a neuro-evolutional approach containing a feed-forward neural network to detect pulmonary nodules. This technique worked with particle swarm optimization and the cuckoo search algorithm and yielded 95.5% accuracy and 95.8% sensitivity. Similarly, another study [40] detected lung cancer based on a CNN and generative adversarial networks (GANs).
Li et al. [41] used handcrafted features followed by a convolutional neural network. Nageswaran et al. [42] presented a lung cancer classification technique using various machine learning (ML) methods such as artificial neural network (ANN), K-nearest neighbors (KNN), and random forest. Lung nodule classification was performed by Zhao et al. [43] using an attentive module that captures spatial and global information; furthermore, multilevel contextual information encoded by the adaptive conv-kernels method improved nodule classification accuracy. Bhaskar et al. [44] introduced an effective method using multi-scale Laplacian of Gaussian filters and a DCNN to detect pulmonary nodules and achieved 71.2% recall and 93.2% accuracy.
Han et al. [45] detected and classified lung nodules by applying a 3D ResNet algorithm and a fully connected neural network to reduce the medical experts' workload on the LUNA16 dataset. Similarly, Bruntha et al. [46] used ResNet50 for deep feature extraction and a handcrafted histogram of oriented gradients (HOG) for handcrafted features. A support vector machine (SVM) was utilized to classify non-cancer and cancer nodules in this hybridized model on the LIDC dataset. Al-Shabi et al. [47] introduced a model for lung nodule classification, namely the Progressive Growing Channel Attentive Non-Local (ProCAN) network, which reached an accuracy of 95.28%. Huang et al. [48] introduced an effective model based on a deep feature optimization framework (DFOF) for lung cancer classification. The model yielded 92.13% accuracy, 87.16% recall, and 94.16% precision.
Mahmood et al. [49] introduced an automatic CAD system based on the AlexNet architecture to classify lung nodules. The proposed AlexNet architecture was tuned with several layers and hyperparameters to achieve superior performance. The model achieved 98.9% specificity and 98.7% accuracy on the pulmonary cancer screening trial. Another research work, by Dodia et al. [50], presented an elagha initialization-based fuzzy c-means clustering (EFCM) and an SVM for segmentation and detection of nodules, respectively. Lyu et al. [51] suggested a model consisting of a multi-level cross-residual network (ML-xResNet) to classify pulmonary nodules and obtained 92.19% accuracy. The foremost limitations of prior studies are explained in Table I.
TABLE I
LIMITATIONS OF PREVIOUS WORK

Publications | Year | Dataset | Methods | Accuracy (%) | Limitation
Halder et al. [15] | 2022 | LIDC-IDRI | 2-Pathway Morphology-based Convolutional Neural Network (2PMorphCNN) | 96.10 | Lack of transparency
Fu et al. [35] | 2022 | LIDC-IDRI | Convolutional neural network (CNN)-based MTL model | 94.7 | Need to improve accuracy
Huidrom et al. [39] | 2022 | LUNA16 | Neuro-evolutional approach | 95.5 | Handcrafted features
Suresh et al. [40] | 2020 | LIDC-IDRI | CNN and generative adversarial networks (GANs) | 93.9 | Need to improve accuracy
Li et al. [41] | 2019 | LIDC-IDRI | Handcrafted features and CNN-based algorithm | 93.07 | Handcrafted features
Bhaskar et al. [44] | 2022 | LUNA16 | Multi-scale Laplacian of Gaussian filters and deep CNN | 93.2 | Need to improve accuracy
Han et al. [45] | 2022 | LUNA16 | 3D ResNet | 91.1 | Need to improve accuracy
Bruntha et al. [46] | 2022 | LIDC-IDRI | Hybridized feature extraction approach | 97.53 | Handcrafted features
Al-Shabi et al. [47] | 2022 | LIDC-IDRI | Progressive Growing Channel Attentive Non-Local (ProCAN) network | 94.11 | Need to improve accuracy
Huang et al. [48] | 2022 | LIDC-IDRI | Deep feature optimization framework (DFOF) | 92.13 | Need to improve accuracy
Dodia et al. [50] | 2022 | LUNA16 | Elagha initialization-based fuzzy c-means clustering (EFCM) | 94.87 | Need to improve accuracy; limited generalizability to other datasets
Lyu et al. [51] | 2020 | LIDC-IDRI | Multi-level cross-residual convolutional neural network (ML-xResNet) | 92.19 | Need to improve accuracy
Table I shows some limitations of the previous studies, including hand-crafted features [39], [41], [46], the need to improve accuracy [35], [40], [44], [45], [47], [48], [51], and lack of transparency [15].

The core contributions of this research work are as follows:
▪ The main objective of the present research is to provide a lung cancer classification method using modified U-Net-based lobe segmentation and nodule detection.
▪ To enhance the efficiency of the segmentation model, we have employed a modified U-Net architecture for lobe segmentation and ensured that lobe-segmentation model training, validation, and testing are carried out efficiently.
▪ The recommended candidate nodule extraction model uses a modified U-Net architecture for the detection of nodules; it provides better results and is investigated using various statistical performance indicators.
▪ Finally, a lung cancer classification model using AlexNet and SVM is proposed, which classifies lung nodules into cancer and non-cancer. The suggested model achieves better results for accurate and effective lung cancer classification and treatment.
The rest of the paper is organized as follows: Section 2 includes the literature review, Section 3 illustrates the proposed methodology, and the results and discussion are presented in Section 4. Section 5 concludes the paper, and Section 6 describes limitations and future work.
III. PROPOSED METHODOLOGY
The model comprises three phases: lobe segmentation, candidate nodule extraction, and lung cancer classification. In the lobe segmentation phase, a modified U-Net architecture is applied to segment the input CT scans, and lobes are derived as output. The next candidate nodule extraction phase uses the predicted lobes as input, and a modified U-Net-based model is applied for the extraction of the candidate nodules. Finally, a modified AlexNet-SVM-based model is applied to patches of candidate nodules in the third phase and classifies candidate nodules as non-cancer or cancer.

3.1. Lobe Segmentation Phase

Lobe segmentation is the first phase of lung cancer classification using the modified U-Net-based lobe segmentation and nodule detection model, as shown in Figure 2. In this phase, a modified U-Net architecture is applied to segment the lobe from CT scan images. The segmentation phase consists of two steps: the seg-lobe training and validation step and the lobe segmentation step. In the seg-lobe step, a modified U-Net-based model was trained and validated on the LUNA16 CT scans dataset. In the second step, the modified U-Net-based model for lobe segmentation predicts the mask from the test CT scans, and by using the predicted mask the lobe is extracted. In this phase, the LUNA16 dataset, which consists of CT scans with labels, is utilized as input to the suggested segmentation method. The lung cancer dataset consists of 888 CT scans, divided into 589 cancer and 299 non-cancer scans. In this research, a total of 30 cancer CT scans are set aside for testing the proposed method. The remaining 858 cancer and non-cancer CT scans are used for training and validation, further divided into 80% (686 CT scans) for training and 20% (172 CT scans) for validation of the lobe segmentation model. The lobe segmentation model is trained on the 686 CT scans of the training dataset. After training, the seg-lobe model is validated on 172 CT scans. The 30 testing CT scans are then provided to the seg-lobe model for segmentation; the seg-lobe model predicts the masks from these 30 CT scans. Finally, the lobe is segmented from the slices of the 30 CT scans using the predicted masks of the slices. The U-Net architecture was designed by Ronneberger et al. [33] for medical image segmentation in 2015. The U-Net architecture consists of three main blocks, encoder, decoder, and skip connection, as illustrated in Figure 3.
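To make the data handling concrete, the following is a minimal sketch of how the reported split (888 scans, 30 held out for testing, and an 80/20 train/validation division of the remaining 858) could be reproduced. The ID-listing, random shuffling, and seed are assumptions for illustration and not details given in the paper; in particular, the paper holds out 30 cancer scans specifically, whereas this sketch selects the test scans at random.

```python
# Sketch of the reported LUNA16 split: 30 scans held out for testing,
# the remaining 858 divided roughly 80/20 into training and validation.
import random

def split_luna16(scan_ids, n_test=30, val_fraction=0.2, seed=42):
    ids = list(scan_ids)
    random.Random(seed).shuffle(ids)           # seed is an arbitrary choice
    test_ids = ids[:n_test]                    # 30 CT scans for testing
    remaining = ids[n_test:]                   # 858 CT scans
    n_val = round(len(remaining) * val_fraction)  # ~20% -> 172 scans
    val_ids = remaining[:n_val]
    train_ids = remaining[n_val:]              # ~80% -> 686 scans
    return train_ids, val_ids, test_ids

# Example with placeholder IDs standing in for the 888 LUNA16 scans.
train, val, test = split_luna16([f"scan_{i:03d}" for i in range(888)])
print(len(train), len(val), len(test))  # 686 172 30
```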
Figure 2. Proposed Modified U-Net Architecture for Lobe Segmentation
The encoder block receives the image as input and then extracts useful features from the image using multiple convolutional layers. The decoder block of the U-Net architecture is a combination of several convolutional layers and transposed convolutional layers. The convolutional layer is represented in Eq. (1) and the transposed convolutional layer in Eq. (2).

φ = f(ω ∗ α + b)   (1)

where α denotes the input, ω is the layer's weight, b represents the bias parameter, and f expresses the activation function.

φ = f(ω′ ∗ α + b)   (2)

where α denotes the input, ω′ is the layer's transposed weight matrix, b represents the bias parameter, and f expresses the activation function.

The U-Net architecture also comprises concatenation operations, where feature maps from the contracting path are combined with feature maps from the expanding path. The mathematical representation of the concatenation operation is shown in Eq. (3).

φ = concatenate(α1, α2)   (3)

where α1 and α2 represent the feature maps.

In the first phase of the lung cancer classification using the modified U-Net-based lobe segmentation and nodule detection method, the image input dimension is 512 × 512 × 1, followed by two convolutional layers. Convolutional operations are performed on these two layers with 8 filters, a 3 × 3 kernel size, the ReLU activation function, and same padding; the 512 × 512 × 8 output is denoted by C1.

The convolutional layer is the primary component of a CNN architecture, where important features are extracted from the input data. For this, convolutional operations, denoted by ∗, are performed; the output of the convolutional operations is called the feature map. These operations are represented in Eq. (4).

(m ∗ n)(p) = ∫ m(u) n(p − u) du   (4)

For the max pooling operation, y represents the input feature map, z denotes the output feature map, k is the pool size, and g, h are the indexes of the output feature map. The max operation is performed in a j × j window of the input feature map, and the maximum value is assigned to the corresponding location in the output feature map. Next, the 512 × 512 × 8 output is forwarded to the sigmoid activation function represented in Eq. (7).

sigmoid(C7) = 1 / (1 + e^(−C7))   (7)

Δj_t = −h_t × g_t / √(m_t + ε)   (10)
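As an illustration of Eqs. (1)-(3) and the layer settings stated above (a 512 × 512 × 1 input, two 3 × 3 convolutions with 8 filters, ReLU, same padding, max pooling, a transposed convolution, skip-connection concatenation, and a sigmoid output), the following is a minimal Keras-style sketch of a single encoder-decoder level. It is a simplified illustration under these assumptions, not the authors' exact modified U-Net.

```python
# Minimal single-level U-Net-style sketch (not the paper's exact modified U-Net):
# the Conv2D layers implement Eq. (1), Conv2DTranspose Eq. (2),
# and the skip connection the concatenation of Eq. (3).
import tensorflow as tf
from tensorflow.keras import layers, Model

def tiny_unet(input_shape=(512, 512, 1)):
    inputs = layers.Input(shape=input_shape)

    # Encoder: two 3x3 convolutions, 8 filters, ReLU, same padding -> C1 (512x512x8)
    c1 = layers.Conv2D(8, 3, padding="same", activation="relu")(inputs)
    c1 = layers.Conv2D(8, 3, padding="same", activation="relu")(c1)
    p1 = layers.MaxPooling2D(pool_size=2)(c1)            # 256x256x8

    # Bottleneck convolution (filter count assumed)
    b = layers.Conv2D(16, 3, padding="same", activation="relu")(p1)

    # Decoder: transposed convolution, then skip-connection concatenation
    u1 = layers.Conv2DTranspose(8, 2, strides=2, padding="same")(b)  # back to 512x512x8
    u1 = layers.Concatenate()([u1, c1])
    d1 = layers.Conv2D(8, 3, padding="same", activation="relu")(u1)

    # 1x1 convolution with sigmoid (Eq. 7) produces the predicted lobe mask
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(d1)
    return Model(inputs, outputs)

model = tiny_unet()
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```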
30 CT scans are used for testing the candidate nodule extraction model. The modified U-Net architecture for the candidate nodule extraction model predicts the candidate nodule mask from the lobes of the slices of the 30 testing cancer CT scans. Finally, the model predicts the candidate nodule by using the predicted mask and label.
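A hedged sketch of how a predicted binary mask might be applied to a lobe slice to isolate the candidate-nodule region is shown below; thresholding at 0.5 and element-wise multiplication are standard choices assumed here, not steps quoted from the paper.

```python
# Sketch: apply a predicted candidate-nodule mask to a segmented lobe slice.
# The 0.5 threshold and element-wise multiplication are illustrative assumptions.
import numpy as np

def extract_candidate_region(lobe_slice: np.ndarray, predicted_mask: np.ndarray,
                             threshold: float = 0.5) -> np.ndarray:
    """Zero out everything outside the predicted nodule mask."""
    binary_mask = (predicted_mask >= threshold).astype(lobe_slice.dtype)
    return lobe_slice * binary_mask

# Example with random data standing in for a 512x512 CT slice and its predicted mask.
slice_512 = np.random.rand(512, 512).astype(np.float32)
mask_512 = np.random.rand(512, 512).astype(np.float32)
candidate = extract_candidate_region(slice_512, mask_512)
print(candidate.shape)  # (512, 512)
```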
3.3. Lung Cancer Classification Phase

Finally, the last phase of the lung cancer classification using the modified U-Net-based lobe segmentation and nodule detection model classifies cancer or non-cancer using patches from the candidate nodules. In this research, a patch size of 48 × 48 is used to train, validate, and test the modified AlexNet-SVM architecture for lung cancer classification. A total of 17006 patches are obtained from the slices of the 858 cancer and non-cancer CT scans. They are further divided into 80% (13605) patches for training and 20% (3401) patches for validation of the model. Patches obtained from the slices of the 30 testing CT scans are used to test the model, which predicts lung cancer as non-cancer or cancer. The modified AlexNet-SVM architecture for lung cancer classification consists of a lung cancer classification training and validation phase and a lung cancer classification phase, as shown in Figure 5. In the training and validation phase, the modified AlexNet-SVM model is trained and validated on 48 × 48 patches. The modified AlexNet architecture extracts features from the input patches to obtain important information. A stochastic gradient descent (SGD) optimizer is used with hyperparameters such as 200 epochs, a batch size of 50, and a learning rate of 0.0001. The modified AlexNet architecture comprises eight convolutional and three max pooling layers.
The convolutional layer is responsible for extracting valuable features from patches, and the pooling layer is used to reduce the size of the feature map while keeping the important information. In this research, the max pooling layer is applied to the feature map. After the max pooling layer, the feature map matrix is transformed into a single long vector, which is called flattening. The modified AlexNet architecture takes a 48 × 48 × 1 grayscale patch as input, as demonstrated in Figure 6. The first three convolutional layers use 32 filters with a 3 × 3 filter size, same padding, and the ReLU activation function to remove non-linearity from the feature map. Next, a max pooling layer with a 2 × 2 filter size and stride 2 is used; the resulting patch size is reduced, and the dimension of the patches becomes 24 × 24 × 32. The sigmoid activation function produces a class score from the output of a fully connected layer. Finally, an SVM is utilized to classify lung cancer into cancer and non-cancer.
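The following is a minimal sketch of the described pipeline: a small CNN feature extractor over 48 × 48 × 1 patches (only the first block, with 32 filters of size 3 × 3, same padding, ReLU, and 2 × 2 max pooling with stride 2, is taken from the text), trained with SGD at a learning rate of 0.0001 and batch size 50, whose flattened features are then fed to an SVM. The remaining layer stack, the dense-layer size, and the feature hand-off to the SVM are assumptions for illustration, not the authors' full modified AlexNet.

```python
# Sketch: CNN feature extractor + SVM classifier for 48x48x1 nodule patches.
# Layers beyond the first block and the SVM hand-off are illustrative assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model
from sklearn.svm import SVC

def build_models():
    inputs = layers.Input(shape=(48, 48, 1))
    x = inputs
    for _ in range(3):  # first three conv layers: 32 filters, 3x3, same padding, ReLU
        x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(pool_size=2, strides=2)(x)        # 24x24x32
    x = layers.Flatten()(x)                                    # flattening step
    features = layers.Dense(128, activation="relu")(x)         # fully connected layer (size assumed)
    outputs = layers.Dense(1, activation="sigmoid")(features)  # class score for CNN pre-training
    return Model(inputs, outputs), Model(inputs, features)

cnn, extractor = build_models()
cnn.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-4),
            loss="binary_crossentropy", metrics=["accuracy"])

# Placeholder patches/labels; in practice these come from the candidate-nodule patches.
x_train = np.random.rand(100, 48, 48, 1).astype("float32")
y_train = np.random.randint(0, 2, size=100)
cnn.fit(x_train, y_train, epochs=1, batch_size=50, verbose=0)  # the paper reports 200 epochs

# SVM on the CNN features (cancer vs. non-cancer).
svm = SVC(kernel="rbf")
svm.fit(extractor.predict(x_train, verbose=0), y_train)
```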
Figure 5. Proposed Modified AlexNet Architecture for Classification of Lung Cancer Patches
After training and validation of the modified AlexNet-SVM model, the patches from the 30 testing CT scans are forwarded to the model to evaluate the performance of the modified AlexNet-SVM architecture for lung cancer classification and to classify the patches into cancer and non-cancer.
Figure 6. Proposed Modified AlexNet Architecture for Classification of Lung Cancer Patches
three skip connections. The seg-lobe model is trained on 80% of the dataset and, after the training step, is validated on 20% of the CT scan dataset. In the next step, the CT scans test dataset is provided to the trained seg-lobe model to predict the masks of the CT scan images. Then the label of the test dataset is provided to segment the lobe, and the lobe is extracted by using the predicted mask.
The outcomes of the performance indicators, including Dice, IoU, sensitivity, and precision, are 90.32%, 82.35%, 87.5%, and 93.33%, respectively, obtained by the modified U-Net architecture for lobe prediction, while the Vanilla U-Net achieves 83.40% Dice, 72.35% IoU, 82.55% sensitivity, and 85.42% precision.
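The Dice and IoU values reported above can be computed from a predicted binary mask and its ground-truth mask as in the sketch below; this is the standard formulation of the two overlap measures, not code from the paper.

```python
# Generic Dice and IoU computation for binary segmentation masks.
import numpy as np

def dice_and_iou(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7):
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dice = (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)
    iou = (intersection + eps) / (union + eps)
    return dice, iou

# Toy example with two overlapping square masks.
a = np.zeros((512, 512), dtype=np.uint8); a[100:200, 100:200] = 1
b = np.zeros((512, 512), dtype=np.uint8); b[120:220, 120:220] = 1
print(dice_and_iou(a, b))
```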
The candidate nodule extraction model is trained on 80% of the dataset and, after the training step, is validated on 20% of the CT scan images. The candidate nodule extraction model using the modified U-Net architecture is trained and validated on the training and validation datasets. In the next step, the segmented lobe test dataset is provided to the trained candidate nodule extraction model to predict the mask, and then the label of the test dataset is provided to extract the nodule; the nodule is extracted by using the predicted mask.
(Figure: segmented lobe mask; predicted candidate nodule mask using the modified U-Net; predicted candidate nodule using the Vanilla U-Net.)
A comparison analysis for candidate nodule extraction using the modified U-Net architecture, the Vanilla U-Net, and existing state-of-the-art approaches is illustrated in Table II.
TABLE II
COMPARISON ANALYSIS OF CANDIDATE NODULE EXTRACTION USING THE MODIFIED U-NET ARCHITECTURE AND THE VANILLA U-NET MODEL WITH EXISTING STATE-OF-THE-ART METHODS

Model | Dataset | Dice | Sensitivity | Precision
4.3. Results of Lung Cancer Classification Phase

The last phase of lung cancer classification using the modified U-Net-based lobe segmentation and nodule detection method consists of lung nodule classification. The lung nodule classification phase comprises two steps. The first step is the training and validation of the lung cancer classification model, and the second step is the lung cancer classification itself. In the first step, the patches from the 858 CT scans are used and divided into 80% for training and 20% for validation. The modified AlexNet architecture consists of eight convolutional layers and three max-pooling layers followed by two fully connected layers, and the SVM classifier is applied for the classification of lung cancer. The classification model is trained on 80% of the patches and validated on 20% of the patches. The trained model is then tested on patches obtained from the 30 testing CT scans. A confusion matrix has been employed to
measure the performance of the modified AlexNet-SVM model in classifying the lung nodules. A total of 13604 patches from the 858 cancer and non-cancer CT scans are obtained to train the modified AlexNet-SVM model, as shown in Table III.

TABLE III
CONFUSION MATRIX OF THE MODIFIED ALEXNET-SVM CLASSIFICATION MODEL (TRAINING)

In the non-cancer group, the model correctly predicts 579 sample patches as non-cancer and wrongly predicts 15 sample patches. In the cancer group, a total of 594 sample patches are used for the prediction of cancer; the modified AlexNet-SVM model wrongly predicts 9 sample patches as non-cancer and correctly predicts 585 sample patches as cancer.
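The classification metrics quoted in this paper (accuracy, sensitivity, specificity, precision, and F1) follow directly from such confusion-matrix counts. The sketch below shows the standard formulas, using the test counts mentioned above as an illustrative input; small differences from the reported percentages are possible due to rounding or the exact counts used.

```python
# Standard metrics from a binary confusion matrix (cancer = positive class).
def classification_metrics(tp, fn, tn, fp):
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)          # recall
    specificity = tn / (tn + fp)
    precision   = tp / (tp + fp)
    f1          = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, precision, f1

# Counts described in the text for the test patches:
# 585 cancer correctly predicted, 9 cancer missed, 579 non-cancer correct, 15 false alarms.
acc, sen, spe, pre, f1 = classification_metrics(tp=585, fn=9, tn=579, fp=15)
print(f"accuracy={acc:.4f} sensitivity={sen:.4f} specificity={spe:.4f} "
      f"precision={pre:.4f} F1={f1:.4f}")
```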
The modified AlexNet-SVM classification model achieves 97.98% accuracy, 98.84% sensitivity, 97.47% specificity, 97.53% precision, and a 97.70% F1-score. The experimental results of lung cancer classification using the modified U-Net-based lobe segmentation and nodule detection model have shown outstanding performance.

VI. LIMITATIONS AND FUTURE WORK
Lung cancer classification using a modified U-Net-based lobe segmentation and nodule detection model segments candidate nodules and classifies lung cancer into non-cancer and cancer. The model is based on a modified U-Net architecture to segment the lobe and the candidate nodule, and a modified AlexNet architecture with an SVM is applied to classify the lung nodules. The current research has some limitations; for example, it used only the LUNA16 dataset for training, validation, and testing purposes. Other publicly available datasets could be used to test the performance of the lung cancer classification using the modified U-Net-based lobe segmentation and nodule detection model.

REFERENCES
[1] R. L. Siegel, K. D. Miller, N. S. Wagle, and A. Jemal, "Cancer statistics, 2023," CA Cancer J. Clin., vol. 73, no. 1, pp. 17–48, 2023.
[2] G. Zhang, Z. Yang, L. Gong, S. Jiang, and L. Wang, "Classification of benign and malignant lung nodules from CT images based on hybrid features," Phys. Med. Biol., vol. 64, no. 12, 2019, doi: 10.1088/1361-6560/ab2544.
[3] M. Mubashar, H. Ali, C. Grönlund, and S. Azmat, "R2U++: a multiscale recurrent residual U-Net with dense skip connections for medical image segmentation," Neural Comput. Appl., vol. 34, no. 20, pp. 17723–17739, 2022, doi: 10.1007/s00521-022-07419-7.
[4] J. Soltani-Nabipour, A. Khorshidi, and B. Noorian, "Lung tumor segmentation using improved region growing algorithm," Nucl. Eng. Technol., vol. 52, no. 10, pp. 2313–2319, 2020, doi: 10.1016/j.net.2020.03.011.
[5] M. Wang and D. Li, "An automatic segmentation method for lung tumor based on improved region growing algorithm," Diagnostics, vol. 12, no. 12, p. 2971, 2022.
[6] R. Bellotti et al., "A CAD system for nodule detection in low-dose lung CTs based on region growing and a new active contour model," Med. Phys., vol. 34, no. 12, pp. 4901–4910, 2007, doi: 10.1118/1.2804720.
[7] J. Zhao, M. Dang, Z. Chen, and L. Wan, "DSU-Net: Distraction-Sensitive U-Net for 3D lung tumor segmentation," Eng. Appl. Artif. Intell., vol. 109, p. 104649, 2022, doi: 10.1016/j.engappai.2021.104649.
[8] T. Meraj et al., "Lung nodules detection using semantic segmentation and classification with optimal features," Neural Comput. Appl., vol. 33, no. 17, pp. 10737–10750, 2021, doi: 10.1007/s00521-020-04870-2.
[9] N. S. Rani, U. Karthik, and S. Ranjith, "Extraction of gliomas from 3D MRI images using convolution kernel processing and adaptive thresholding," Procedia Comput. Sci., vol. 167, pp. 273–284, 2020, doi: 10.1016/j.procs.2020.03.221.
[10] Y. R. Baby and V. K. Ramayyan Sumathy, "Kernel-based Bayesian clustering of computed tomography images for lung nodule segmentation," IET Image Process., vol. 14, no. 5, pp. 890–900, 2020, doi: 10.1049/iet-ipr.2018.5748.
[11] F. Calabrese et al., "Morphologic-molecular transformation of oncogene addicted non-small cell lung cancer," Int. J. Mol. Sci., vol. 23, no. 8, 2022, doi: 10.3390/ijms23084164.
[12] Z. Ali, A. Irtaza, and M. Maqsood, "An efficient U-Net framework for lung nodule detection using densely connected dilated convolutions," J. Supercomput., vol. 78, no. 2, pp. 1602–1623, 2022, doi: 10.1007/s11227-021-03845-x.
[13] Y. Lan, N. Xu, X. Ma, and X. Jia, "Segmentation of pulmonary nodules in lung CT images based on active contour model," in Proc. 14th Int. Conf. Intell. Human-Machine Syst. Cybern. (IHMSC), 2022, pp. 132–135, doi: 10.1109/IHMSC55436.2022.00039.
[14] S. Dlamini, Y. H. Chen, and C. F. Jeffrey Kuo, "Complete fully automatic detection, segmentation and 3D reconstruction of tumor volume for non-small cell lung cancer using YOLOv4 and region-based active contour model," Expert Syst. Appl., vol. 212, pp. 1–9, 2023, doi: 10.1016/j.eswa.2022.118661.
[15] A. Halder, S. Chatterjee, and D. Dey, "Adaptive morphology aided 2-pathway convolutional neural network for lung nodule classification," Biomed. Signal Process. Control, vol. 72, p. 103347, 2022, doi: 10.1016/j.bspc.2021.103347.
[16] J. Jiang et al., "Lung cancer shapes commensal bacteria via exosome-like nanoparticles," Nano Today, vol. 44, p. 101451, 2022, doi: 10.1016/j.nantod.2022.101451.
[17] L. Rinaldi et al., "HeLLePhant: A phantom mimicking non-small cell lung cancer for texture analysis in CT images," Phys. Medica, vol. 97, pp. 13–24, 2022, doi: 10.1016/j.ejmp.2022.03.010.
[18] G. Zhang, Z. Yang, and S. Jiang, "Automatic lung tumor segmentation from CT images using improved 3D densely connected UNet," Med. Biol. Eng. Comput., vol. 60, no. 11, pp. 3311–3323, 2022, doi: 10.1007/s11517-022-02667-0.
[19] M. Kanipriya, C. Hemalatha, N. Sridevi, S. R. SriVidhya, and S. L. Jany Shabu, "An improved capuchin search algorithm optimized hybrid CNN-LSTM architecture for malignant lung nodule detection," Biomed. Signal Process. Control, vol. 78, pp. 1–39, 2022, doi: 10.1016/j.bspc.2022.103973.
[20] S. Tyagi and S. N. Talbar, "CSE-GAN: A 3D conditional generative adversarial network with concurrent squeeze-and-excitation blocks for lung nodule segmentation," Comput. Biol. Med., vol. 147, p. 105781, 2022, doi: 10.1016/j.compbiomed.2022.105781.
[21] Y. Ni, Z. Xie, D. Zheng, Y. Yang, and W. Wang, "Two-stage multitask U-Net construction for pulmonary nodule segmentation and malignancy risk prediction," Quant. Imaging Med. Surg., vol. 12, no. 1, pp. 292–309, 2022, doi: 10.21037/qims-21-19.
[22] H. Cao et al., "Dual-branch residual network for lung nodule segmentation," Appl. Soft Comput., vol. 86, pp. 1–8, 2020, doi: 10.1016/j.asoc.2019.105934.
[23] J. Sun, W. Chen, L. Zhang, and X. Yan, "Research on lung tumor cell segmentation method based on improved UNet algorithm," Sci. Program., vol. 2022, 2022, doi: 10.1155/2022/6357123.
[24] J. Yang, B. Wu, L. Li, P. Cao, and O. Zaiane, "MSDS-UNet: A multi-scale deeply supervised 3D U-Net for automatic segmentation of lung tumor in CT," Comput. Med. Imaging Graph., vol. 92, pp. 1–10, 2021, doi: 10.1016/j.compmedimag.2021.101957.
[25] Z. Zhou, F. Gou, Y. Tan, and J. Wu, "A cascaded multi-stage framework for automatic detection and segmentation of pulmonary nodules in developing countries," IEEE J. Biomed. Health Inform., vol. 26, no. 11, pp. 5619–5630, 2022, doi: 10.1109/JBHI.2022.3198509.
[26] S. Wang, A. Jiang, X. Li, Y. Qiu, M. Li, and F. Li, "DPBET: A dual-path lung nodules segmentation model based on boundary enhancement and hybrid transformer," Comput. Biol. Med., vol. 151, pt. B, p. 106330, 2022, doi: 10.1016/j.compbiomed.2022.106330.
[27] S. Luo et al., "DAS-Net: A lung nodule segmentation method based on adaptive dual-branch attention and shadow mapping," Appl. Intell., vol. 52, no. 13, pp. 15617–15631, 2022, doi: 10.1007/s10489-021-03038-2.
[28] P. M. Bruntha, S. I. A. Pandian, K. M. Sagayam, S. Bandopadhyay, M. Pomplun, and H. Dang, "Lung_PAYNet: a pyramidal attention-based deep learning network for lung nodule segmentation," Sci. Rep., vol. 12, no. 1, pp. 1–11, 2022, doi: 10.1038/s41598-022-24900-4.
[29] I. Naseer, T. Masood, S. Akram, A. Jaffar, M. Rashid, and M. A. Iqbal, "Lung cancer detection using modified AlexNet architecture and support vector machine," Comput. Mater. Contin., 2023, doi: 10.32604/cmc.2023.032927.
[30] Y. Cao et al., "Segmentation of lung cancer-caused metastatic lesions in bone scan images using the self-defined model with deep supervision," Biomed. Signal Process. Control, vol. 79, p. 104068, 2023, doi: 10.1016/j.bspc.2022.104068.
[31] J. Park et al., "Automatic lung cancer segmentation in [18F]FDG PET/CT using a two-stage deep learning approach," Nucl. Med. Mol. Imaging, 2022, doi: 10.1007/s13139-022-00745-7.
[32] E. Shelhamer, J. Long, and T. Darrell, "Fully convolutional networks for semantic segmentation," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 3431–3440, 2016.
[33] O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional networks for biomedical image segmentation," in Proc. Int. Conf. Med. Image Comput. Comput.-Assist. Intervent. (MICCAI), 2015, pp. 234–241.
[34] G. Singadkar, A. Mahajan, M. Thakur, and S. Talbar, "Deep deconvolutional residual network based automatic lung nodule segmentation," J. Digit. Imaging, vol. 33, no. 2, pp. 678–684, 2020.
[35] X. Fu, L. Bi, A. Kumar, M. Fulham, and J. Kim, "An attention-enhanced cross-task network to analyze lung nodule attributes in CT images," Pattern Recognit., vol. 126, pp. 1–32, 2022.
[36] G. Liang, Z. Diao, and H. Jiang, "Uncertainty analysis based attention network for lung nodule segmentation from CT images," ACM Int. Conf. Proceeding Ser., pp. 50–55, 2022, doi: 10.1145/3546607.3546615.
[37] Z. Wang, J. Men, and F. Zhang, "Improved V-Net lung nodule segmentation method based on the selective kernel," Signal Image Video Process., 2022, doi: 10.1007/s11760-022-02387-w.
[38] W. He and B. Li, "An ISHAP-based interpretation-model-guided classification method for malignant pulmonary nodule," Knowl.-Based Syst., vol. 237, pp. 1–44, 2022.
[39] R. Huidrom, Y. J. Chanu, and K. M. Singh, "Neuro-evolutional based computer-aided detection system on computed tomography for the early detection of lung cancer," Multimed. Tools Appl., vol. 81, no. 22, pp. 32661–32673, 2022, doi: 10.1007/s11042-022-12722-5.
[40] S. Suresh and S. Mohan, "ROI-based feature learning for efficient true positive prediction using convolutional neural network for lung cancer diagnosis," Neural Comput. Appl., vol. 32, no. 20, pp. 15989–16009, 2020, doi: 10.1007/s00521-020-04787-w.
[41] S. Li et al., "Predicting lung nodule malignancies by combining deep convolutional neural network and handcrafted features," Phys. Med. Biol., vol. 64, no. 17, 2019, doi: 10.1088/1361-6560/ab326a.
[42] S. Nageswaran et al., "Lung cancer classification and prediction using machine learning and image processing," Biomed Res. Int., vol. 2022, 2022, doi: 10.1155/2022/1755460.
[43] D. Zhao, Y. Liu, H. Yin, and Z. Wang, "An attentive and adaptive 3D CNN for automatic pulmonary nodule detection in CT image," Expert Syst. Appl., vol. 211, pp. 1–24, 2023, doi: 10.1016/j.eswa.2022.118672.
[44] N. Bhaskar and T. S. Ganashree, "Pulmonary nodule detection using Laplacian of Gaussian and deep convolutional neural network," Smart Innov. Syst. Technol., vol. 282, pp. 633–648, 2022, doi: 10.1007/978-981-16-9669-5_58.
[45] Y. Han et al., "Pulmonary nodules detection assistant platform: An effective computer-aided system for early pulmonary nodules detection in physical examination," Comput. Methods Programs Biomed., vol. 217, pp. 1–35, 2022.
[46] P. M. Bruntha, S. I. A. Pandian, J. Anitha, S. S. Abraham, and S. N. Kumar, "A novel hybridized feature extraction approach for lung nodule classification based on transfer learning technique," J. Med. Phys., vol. 47, no. 1, pp. 1–9, 2022, doi: 10.4103/jmp.jmp_61_21.
[47] M. Al-Shabi, K. Shak, and M. Tan, "ProCAN: Progressive growing channel attentive non-local network for lung nodule classification," Pattern Recognit., vol. 122, pp. 1–32, 2022, doi: 10.1016/j.patcog.2021.108309.
[48] H. Huang, Y. Li, R. Wu, Z. Li, and J. Zhang, "Benign-malignant classification of the pulmonary nodule with deep feature optimization framework," Biomed. Signal Process. Control, vol. 76, p. 103701, 2022, doi: 10.1016/j.bspc.2022.103701.
[49] S. A. Mahmood and H. A. Ahmed, "An improved CNN-based architecture for automatic lung nodule classification," Med. Biol. Eng. Comput., vol. 60, no. 7, pp. 1977–1986, 2022, doi: 10.1007/s11517-022-02578-0.
[50] S. Dodia, B. Annappa, and M. A. Padukudru, "A novel artificial intelligence-based lung nodule segmentation and classification system on CT scans," Commun. Comput. Inf. Sci., vol. 1568, pp. 552–564, 2022, doi: 10.1007/978-3-031-11349-9_48.
[51] J. Lyu, X. Bi, and S. H. Ling, "Multi-level cross residual network for lung nodule classification," Sensors, vol. 20, no. 10, pp. 1–14, 2020, doi: 10.3390/s20102837.