Deep learning in mammography images segmentation and classification

W.M. Salama a, M.H. Aly b

a Department of Basic Science, Faculty of Engineering, Pharos University, Alexandria, Egypt
b Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, Egypt
KEYWORDS: Mammography; Breast cancer; Segmentation; Deep learning; U-Net; Transfer learning

Abstract: In this work, a new framework for breast cancer image segmentation and classification is proposed. Different models, including InceptionV3, DenseNet121, ResNet50, VGG16 and MobileNetV2, are applied to classify the Mammographic Image Analysis Society (MIAS), Digital Database for Screening Mammography (DDSM) and Curated Breast Imaging Subset of DDSM (CBIS-DDSM) datasets into benign and malignant. Moreover, the trained modified U-Net model is utilized to segment the breast area from the mammogram images. This method will act as a radiologist's assistant in early detection and improve the efficiency of our system. The Cranio Caudal (CC) view and Mediolateral Oblique (MLO) view are widely used for the identification and diagnosis of breast cancer, and the accuracy of breast cancer diagnosis improves as the number of views is increased. Our proposed framework is therefore based on both the MLO view and the CC view to enhance the system performance. In addition, the lack of tagged data is a big challenge, so transfer learning and data augmentation are applied to overcome this problem. Three mammographic datasets, MIAS, DDSM and CBIS-DDSM, are utilized in our evaluation. End-to-end fully convolutional neural networks (CNNs) are introduced in this paper. The proposed technique of applying data augmentation with the modified U-Net model and InceptionV3 achieves the best result, specifically with the DDSM dataset: 98.87% accuracy, 98.88% area under the curve (AUC), 98.98% sensitivity, 98.79% precision, 97.99% F1 score, and a computational time of 1.2134 s.

© 2021 THE AUTHORS. Published by Elsevier B.V. on behalf of the Faculty of Engineering, Alexandria University. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
technique [4] plays an important role in breast cancer diagnosis. Machine learning is outperforming the conventional hand-crafted technique, as it helps in selecting the most important features. Deep learning [5] plays an important role in enhancing the results in the field of biomedical engineering, in particular the Deep Convolutional Neural Networks (CNNs), which can be easily applied with state-of-the-art efficiency. A. Shrestha et al. reviewed deep learning algorithms and architectures [6]. The models considered for the classification process include InceptionV3, DenseNet121, ResNet50, VGG16 and MobileNetV2 [7–9].

Scanning mammograms of 28, 27, 342 women were studied retrospectively to assess the effect of optical density and the number of views on cancer detection [10]. 76% of the predicted invasive cancers were observed with an MLO view and an optical density of less than 1.4; this increased to 95% when the optical density was greater than 1.4 with MLO and CC views. A retrospective analysis of 83 histologically proven breast cancers using a paired t-test for cancer diagnosis was performed on the three mammographic views, CC, MLO and Medio Lateral (ML), and on combinations of views. It was found that the depiction of masses was substantially improved when a double-view method was used.

The CNN is a deep architecture used in image processing which comprises two main layers: the convolutional layer and the pooling layer. The convolutional layer computes the output of neurons that are connected to local regions of the input by sharing weights and biases. The pooling layer subsamples the output of the convolutional layer and reduces the data size. Learning the millions of parameters of a deep CNN requires a vast number of training images as well as the availability of their ground truth, which actually prohibits many superior deep CNNs from being applied to medical applications.

Different CNNs for the mass detection task are explained by J. Arevalo et al. [11]. Their experimentation was performed on the Breast Cancer Digital Repository Film Mammography (BCDR-FM) dataset. D. Abdelhafiz et al. [12] introduced a framework for a pre-trained CNN on the DDSM database. A breast cancer classification algorithm trained from scratch was implemented by L. Tsochatzidis et al. [13], which improves the ability to segment normal and irregular breast tissue based on the application of deep-learning medical imaging technology.

The U-Net model is the main model in image segmentation. O. Ronneberger et al. suggested a U-Net model for biomedical image segmentation [14]. N. Alam et al. [15] developed an automated segmentation approach for biomedical images. In their technique, the region of interest (ROI) was manually extracted, and a wavelet-based process was performed to enhance the spatial-frequency content of the image. The contour of the calcified area was extracted based on the segmentation technique explained in the work of S. Duraisamy et al. [16]. Finally, the modified segmentation of the U-Net model was applied in several breast cancer classification papers to extract the ROI (breast region) and remove the unwanted regions [17–21].

The data augmentation technique creates new samples of the training dataset by applying random transformations to the available data [22]. This has several effects, including speeding up the process of convergence and avoiding over-fitting. The easiest approach for small datasets is to perform simple transformations (translation, zooming, flipping, mirroring, rotation, etc.).

Utilizing a pre-trained model instead of building a model from scratch is called transfer learning. Training a neural network from scratch needs substantial data and computational power [23].

The main contribution of this paper is divided into two phases: a classification phase based on different deep models, and a segmentation phase before classification. First, different models, including InceptionV3, DenseNet121, ResNet50, VGG16 and MobileNetV2, are employed to classify our MIAS, DDSM and CBIS-DDSM images into benign or malignant. Second, the segmentation phase, based on applying the modified segmentation of the U-Net model, is utilized to extract the ROI (breast region) and remove unwanted regions. This step plays a vital role in enhancing the images to be suitable as input to the classification phase and improving the system performance. After the segmentation phase, our different deep learning models are applied to the segmented images to classify them into benign or malignant. The data augmentation technique is employed on the DDSM and MIAS datasets to resolve the scarcity of datasets. Also, transfer learning is used to minimize training time and computing resources.

The novelty in our work is based on exploring the most powerful models that are the most successful end-to-end deep models in computer vision, including InceptionV3, DenseNet121, ResNet50, VGG16 and MobileNetV2. These models are explored on three different mammography datasets which cover distinct cases of breast thickness, breast size and patient age, taken from the MIAS, DDSM and CBIS-DDSM databases.

The rest of this paper is organized as follows. Details of the methodology used in this paper for the segmentation and classification of mammograms are explained in Section 2. Section 3 introduces and discusses the obtained results. Section 4 presents the main conclusions.

2. Methodology

This paper introduces novel strategies to segment and classify mammography images. A pre-trained modified U-Net model and different deep learning models are utilized, including InceptionV3, DenseNet121, ResNet50, VGG16 and MobileNetV2, since training from scratch leads to over-fitting, is time consuming and requires high computing power. Data augmentation and fine-tuning are often used to resolve the scarcity of mammography images. Figs. 1 and 2 illustrate our proposed framework (data augmentation + modified U-Net model + classifier networks).

2.1. Transfer learning

Transfer learning is the golden key for using small datasets, e.g. medical images, which cannot be collected in the vast quantities of most datasets. A great deal of data, power and time is required to train deep learning models from scratch. Therefore, pre-trained models with fine-tuning only are used to solve these problems. For transfer learning, we apply the pre-trained InceptionV3, DenseNet121, ResNet50, VGG16 and MobileNetV2 models [7–10].
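To make this transfer-learning setup concrete, the following minimal Keras sketch loads one of the pre-trained backbones (InceptionV3 here) with ImageNet weights, freezes it, and attaches a new two-class head for the benign/malignant decision. The input size, dropout rate, optimizer and learning rate are illustrative assumptions, not values reported in this paper.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_transfer_model(num_classes: int = 2, input_shape=(299, 299, 3)):
    """Pre-trained InceptionV3 backbone with a new benign/malignant head."""
    # Load the backbone with ImageNet weights and drop its 1000-class top layer.
    base = tf.keras.applications.InceptionV3(
        weights="imagenet", include_top=False, input_shape=input_shape)
    base.trainable = False  # freeze the backbone; only the new head is trained first

    inputs = tf.keras.Input(shape=input_shape)
    x = layers.Rescaling(scale=1.0 / 127.5, offset=-1.0)(inputs)  # scale pixels to [-1, 1]
    x = base(x, training=False)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dropout(0.5)(x)                                    # illustrative regularization
    outputs = layers.Dense(num_classes, activation="softmax")(x)  # benign vs. malignant

    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),       # assumed learning rate
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
    return model

# The same pattern applies to DenseNet121, ResNet50, VGG16 and MobileNetV2
# by swapping the tf.keras.applications constructor (and its preprocessing).
model = build_transfer_model()
```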
2.2. Data augmentation effect

To achieve the best performance, we need a vast number of training samples. Therefore, the data augmentation technique is performed to increase the number of original input data by creating new samples of the training data. In this paper, rotation is applied, where each original image is rotated by 0, 90, 180 and 270 degrees. Therefore, each image is augmented into four images (a minimal sketch of this scheme is shown after Section 2.4).

2.3. Segmentation network

The segmentation of the ROI is a crucial step in the automated analysis of mammography images for early detection of breast cancer. The segmentation could be considered as a classification task that classifies each pixel in the dataset images as either ROI (breast region) or background (BG) [24]. The input of this phase is the mammography images and the output is the ROI (breast region) images. Then, the ROI images are masked with the original mammography images to prepare them as an input to the classifier phase. Our mammography images are segmented based on the modified U-Net model [25]. The modified segmentation of the U-Net is composed of an encoder and a decoder network (a structural sketch is also shown after Section 2.4). The traditional CNN part, which contains semantic information and less spatial information, is called the encoder. However, spatial information is also important for the segmentation of semantic information. The particular information from the decoder part is fed into the U-Net, where semantic information is extracted from the lowermost layer of the U-Net network. The decoder part contains the high-resolution features; these features are extracted from the encoder portion through the skip links and provide the fine segment structures. The leaky ReLU and instance normalization are utilized instead of the rectified linear unit (ReLU) and batch normalization, respectively. A patch size of 32 × 32 pixels and 42 feature maps are used for the highest layer.

2.4. Deep learning models

InceptionV3, DenseNet121, ResNet50, VGG16 and MobileNetV2 are considered among the most successful deep CNNs for classification in the computer vision field and are used here to classify mammography images. In order to begin the fine-tuning process on the mammography dataset, certain parameters must be changed; for example, the 1000-class output layer is replaced with a layer for the two classification classes, benign and malignant.
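As a concrete illustration of the four-angle rotation scheme of Section 2.2, the short NumPy sketch below expands every training image into four rotated copies at 0, 90, 180 and 270 degrees. The placeholder image size and array handling are assumptions for illustration; only the rotation logic follows the text.

```python
import numpy as np

def rotate_four_ways(image):
    """Return the image rotated by 0, 90, 180 and 270 degrees."""
    return [np.rot90(image, k) for k in range(4)]

def augment_dataset(images, labels):
    """Quadruple the training set by four-angle rotation (labels are repeated)."""
    aug_images, aug_labels = [], []
    for img, lab in zip(images, labels):
        for rot in rotate_four_ways(img):
            aug_images.append(rot)
            aug_labels.append(lab)
    return np.stack(aug_images), np.asarray(aug_labels)

# Example: the 451 DDSM training images become 4 * 451 = 1804, matching Table 1.
images = np.zeros((451, 256, 256), dtype=np.float32)  # placeholder array (assumed size)
labels = np.zeros(451, dtype=np.int64)                # placeholder benign/malignant labels
aug_x, aug_y = augment_dataset(images, labels)
print(aug_x.shape)  # (1804, 256, 256)
```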
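The following structural sketch approximates the kind of encoder/decoder segmentation network described in Section 2.3: convolutions followed by instance-style normalization and leaky ReLU, max pooling in the encoder, transposed-convolution up-sampling with skip connections in the decoder, and a sigmoid output for the ROI-versus-background mask. The depth, input size and exact layer arrangement are simplifying assumptions, not the paper's exact modified U-Net.

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    """Two 3x3 convolutions, each followed by instance-style normalization and leaky ReLU."""
    for _ in range(2):
        x = layers.Conv2D(filters, 3, padding="same")(x)
        # GroupNormalization with groups equal to the channel count behaves as
        # instance normalization (requires a recent Keras/TensorFlow release).
        x = layers.GroupNormalization(groups=filters)(x)
        x = layers.LeakyReLU(0.01)(x)
    return x

def build_unet(input_shape=(256, 256, 1), base_filters=42):
    """Small U-Net-style segmentation network (assumed shapes and depth)."""
    inputs = tf.keras.Input(shape=input_shape)

    # Encoder: semantic information, progressively less spatial detail.
    e1 = conv_block(inputs, base_filters)
    e2 = conv_block(layers.MaxPooling2D(2)(e1), base_filters * 2)
    bottleneck = conv_block(layers.MaxPooling2D(2)(e2), base_filters * 4)

    # Decoder: upsample and concatenate encoder features via skip connections.
    d2 = layers.Conv2DTranspose(base_filters * 2, 2, strides=2, padding="same")(bottleneck)
    d2 = conv_block(layers.Concatenate()([d2, e2]), base_filters * 2)
    d1 = layers.Conv2DTranspose(base_filters, 2, strides=2, padding="same")(d2)
    d1 = conv_block(layers.Concatenate()([d1, e1]), base_filters)

    # One output channel: probability of ROI (breast region) vs. background.
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(d1)
    return tf.keras.Model(inputs, outputs)

model = build_unet()
```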
Table 1 Number of samples of training and testing for all datasets used, techniques for data augmentation and number of images
produced.
Databases Samples of training Samples of testing Total samples Number of images after the process of augmentation
DDSM 451 113 564 1804
MIAS 257 65 322 1028
CBIS-DDSM NA 330 330 NA
Table 3 Classification results with and without data augmentation for different models for the DDSM, MIAS and CBIS-DDSM
based on the MLO view.
Classification results with and without data augmentation
Model Accuracy % AUC % Sensitivity % Precision % F1-score %
(for each metric, the first value is without augmentation and the second is with augmentation)
DDSM Database
InceptionV3 88.87 93.85 87.99 92.99 88.74 93.54 88.59 93.19 87.99 93.23
DenseNet121 85.97 91.57 84.99 90.89 85.89 91.49 85.84 91.52 85.85 91.35
ResNet50 84.54 89.58 83.99 88.99 83.94 89.84 84.12 90.23 84.21 90.32
VGG16 82.81 85.98 81.99 84.88 82.74 85.94 82.43 84.99 82.34 85.32
MobileNetV2 80.97 82.77 80.89 82.65 81.74 82.98 80.88 82.76 80.99 82.65
MIAS Database
InceptionV3 86.77 91.65 85.89 90.99 86.84 91.34 85.48 91.19 85.88 90.89
DenseNet121 83.87 89.99 82.88 90.32 83.77 89.59 83.74 90.22 83.75 89.88
ResNet50 82.65 88.58 81.79 87.99 81.96 88.54 82.46 88.43 82.65 87.89
VGG16 80.98 86.78 80.87 86.98 80.84 86.94 80.83 86.78 80.84 86.79
MobileNetV2 79.97 84.77 79.54 84.85 79.84 84.98 80.21 84.86 80.11 84.95
CBIS-DDSM Database
InceptionV3 84.21 90.55 84.87 90.49 84.87 90.23 83.99 90.59 83.96 90.69
DenseNet121 82.47 87.69 82.65 87.35 82.57 87.69 81.99 87.42 82.34 87.58
ResNet50 81.65 86.78 80.89 86.89 80.99 86.94 81.32 86.53 81.22 86.23
VGG16 80.98 84.68 80.77 84.68 81.89 84.94 81.99 84.88 81.84 84.89
MobileNetV2 79.82 83.47 78.99 83.95 79.87 83.99 79.99 83.98 79.89 83.99
Table 4 Segmentation with classification results with and without data augmentation for different models based on the MLO view.
Segmentation with classification results with and without data augmentation
Model Accuracy % AUC % Sensitivity % Precision % F1-score %
(for each metric, the first value is without augmentation and the second is with augmentation)
DDSM Database
InceptionV3 + Modified segmentation of the U-Net 95.78 98.87 95.69 98.88 95.88 98.98 95.54 98.79 94.78 97.99
DenseNet121 + Modified segmentation of the U-Net 93.99 96.99 94.54 97.14 94.77 97.47 94.89 97.49 94.34 97.24
ResNet50 + Modified segmentation of the U-Net 93.57 96.87 93.97 96.47 94.21 97.01 94.61 97.11 93.89 96.99
VGG16 + Modified segmentation of the U-Net 92.67 95.97 92.34 96.11 92.81 95.88 92.81 96.21 92.69 95.99
MobileNetV2 + Modified segmentation of the U-Net 90.87 93.88 91.47 93.97 91.21 93.78 90.97 93.87 90.99 92.99
MIAS Database
InceptionV3 + Modified segmentation of the U-Net 93.88 96.87 93.79 97.21 93.98 96.88 93.97 96.99 93.79 97.12
DenseNet121 + Modified segmentation of the U-Net 91.89 95.78 91.94 94.99 90.99 95.77 91.59 95.69 90.88 95.84
ResNet50 + Modified segmentation of the U-Net 90.67 94.27 90.91 94.87 90.98 94.78 90.81 94.61 89.89 94.59
VGG16 + Modified segmentation of the U-Net 89.47 93.87 89.54 93.61 89.78 93.79 89.81 93.78 89.49 93.69
MobileNetV2 + Modified segmentation of the U-Net 88.45 91.42 87.99 91.27 88.81 90.78 88.87 91.43 88.99 90.99
CBIS-DDSM Database
InceptionV3 + Modified segmentation of the U-Net 91.78 94.18 91.29 94.65 91.65 94.76 91.97 94.65 91.89 94.12
DenseNet121 + Modified segmentation of the U-Net 90.32 92.34 90.44 92.56 89.99 92.77 90.59 92.69 89.78 92.84
ResNet50 + Modified segmentation of the U-Net 89.47 91.88 89.78 91.87 88.58 91.78 89.81 91.61 88.89 91.59
VGG16 + Modified segmentation of the U-Net 87.32 90.79 87.21 91.01 87.78 90.65 87.32 90.98 87.45 90.69
MobileNetV2 + Modified segmentation of the U-Net 86.45 89.78 85.99 90.27 85.89 89.98 86.87 90.23 85.99 89.99
Table 5 Segmentation with classification results with and without data augmentation for different models for the DDSM, MIAS and CBIS-DDSM based on MLO and CC views.
Segmentation with classification results with and without data augmentation
Model Accuracy % AUC % Sensitivity % Precision % F1-score %
(for each metric, the first value is without augmentation and the second is with augmentation)
DDSM Database
Inception V3 + Modified segmentation of the U-Net 96.45 99.43 96.87 99.22 96.96 99.12 96.86 98.99 95.88 98.98
DenseNet121 + Modified segmentation of the U-Net 95.89 98.89 95.44 98.21 95.67 98.98 95.36 98.69 95.63 98.33
ResNet50 + Modified segmentation of the U-Net 94.87 97.97 94.24 97.87 95.98 98.11 95.69 97.98 94.49 97.79
VGG16 + Modified segmentation of the U-Net 93.87 96.21 93.65 96.89 93.88 96.78 93.88 96.89 93.79 96.87
MobileNetV2 + Modified segmentation of the U-Net 92.52 94.98 92.87 94.87 92.98 94.98 91.99 94.87 92.87 94.54
MIAS Database
Inception V3 + Modified segmentation of the U-Net 94.32 97.87 94.59 98.01 94.88 97.78 94.32 97.32 94.52 97.99
DenseNet121 + Modified segmentation of the U-Net 92.89 96.48 92.88 95.99 92.99 96.57 92.98 96.59 92.88 96.21
ResNet50 + Modified segmentation of the U-Net 91.57 95.63 91.89 95.01 91.99 95.48 91.51 95.22 91.08 95.36
VGG16 + Modified segmentation of the U-Net 90.54 94.36 90.95 94.77 90.88 94.89 90.88 94.35 90.99 94.32
MobileNetV2 + Modified segmentation of the U-Net 89.99 93.99 88.89 93.98 89.98 93.28 89.87 93.85 89.87 92.99
CBIS-DDSM Database
Inception V3 + Modified segmentation of the U-Net 93.21 96.01 93.59 96.11 93.32 96.12 93.63 96.32 93.29 96.12
DenseNet121 + Modified segmentation of the U-Net 92.96 94.96 92.64 94.36 91.89 94.37 92.21 94.89 91.88 94.21
ResNet50 + Modified segmentation of the U-Net 91.27 93.58 91.81 93.81 91.22 93.96 91.32 93.32 90.99 93.29
VGG16 + Modified segmentation of the U-Net 89.96 92.89 89.89 92.98 89.98 92.85 89.98 92.38 89.89 92.23
MobileNetV2 + Modified segmentation of the U-Net 88.95 90.32 88.59 91.99 87.79 91.98 88.98 92.01 87.99 92.21
Table 6 Computational time of the segmentation with classification and data augmentation system.
Model Time, s
Inception V3 + Modified segmentation of the U-Net 1.2134
DenseNet121 + Modified segmentation of the U-Net 2.2365
ResNet50 + Modified segmentation of the U-Net 2.3254
VGG16 + Modified segmentation of the U-Net 1.9897
MobileNetV2 + Modified segmentation of the U-Net 1.6587
Table 7 Quantitative comparison between our model and the state of the art for the classification task on the DDSM database.
Reference Number of mammograms Name of database Accuracy % AUC % Sensitivity % Precision % F1-score % DC % IoU %
Our proposed work 1804 DDSM 98.87 98.88 98.98 98.79 97.99 91.89 92.99
[30] NA DDSM NA NA NA NA NA 88 NA
[31] 200 MIAS and DDSM 97.73 NA 92.50 NA NA NA NA
[32] 402 MIAS NA NA NA NA NA NA NA
[33] 300 DDSM 98 NA 97.40 NA NA NA NA
[34] 251 MIAS 96 NA 83 NA NA NA NA
[35] 728 INbreast 95.64 NA 97.14 NA NA NA NA
[36] 1804 DDSM 97.98 98.46 97.63 96.51 95.97 NA NA
Table 8 Performance comparison of current CAD systems using combined MLO and CC view mammographic features.
Reference Accuracy % AUC % Sensitivity % Precision % F1-score % DC % IoU %
Our proposed work 99.43 99.22 99.12 98.99 98.98 94.79 94.89
[37] NA NA 85 NA NA NA NA
[38] NA NA 91 NA NA NA NA
[39] NA NA 93 NA NA NA NA
[40] NA NA 80 NA NA NA NA
[41] 83.13 NA 77.08 NA NA NA NA
[42] 93.98 NA 97.37 NA NA NA NA
[43] 96.6 NA 95 NA NA NA NA
DC = \frac{2\,|y \cap y'|}{|y| + |y'|} \qquad (6)

where FP is the false positive, a non-lesion pixel segmented as a lesion pixel, meaning the sample is labelled malignant and the diagnosis is wrong; FN is the false negative, a lesion pixel segmented as a non-lesion pixel, meaning the sample is labelled benign and the diagnosis is wrong. TP is the true positive, meaning that the database sample is malignant and the diagnosis is correct, while TN is the true negative, meaning that the database sample is benign and the diagnosis is correct. The IoU is utilized to quantify the percentage overlap between the target mask and our predicted malignant or benign diagnosis. When the IoU increases, the system performance is enhanced. Here, y represents the ground truth mask and y' represents the probability map generated by the neural network. Moreover, DC is used as the loss function.

In our work, the mammogram datasets are processed in Python to test the proposed method. The number of analysis and research samples for all datasets is illustrated in Table 1. Data augmentation is conducted on the samples, which are rotated at four angles of 0, 90, 180 and 270 degrees to increase accuracy, as shown in Table 1.

Table 2 explains the segmentation results of IoU and DC for our databases based on the modified U-Net model. The obtained classification results with and without data augmentation are represented in Table 3 for the MIAS, DDSM and CBIS-DDSM databases utilizing the MLO view.

Moreover, Table 4 explains the results of classification with and without data augmentation after utilizing the modified U-Net model to segment the databases based on the MLO view. Table 5 introduces our proposed CAD system based on the combination of MLO and CC views.

Table 6 reports the computational time of the whole system. The proposed segmentation and classification system is also compared with other recent CAD systems, illustrated in Refs. [30–37]; the comparison, shown in Tables 7 and 8, declares the superiority of our proposed system in accuracy, AUC, precision and F1 score.
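A small worked example may make the two overlap metrics concrete. The sketch below evaluates Eq. (6) and the standard intersection-over-union on binary masks; binarizing the predicted probability map at a 0.5 threshold is an assumption for illustration.

```python
import numpy as np

def dice_coefficient(y_true, y_prob, thresh=0.5):
    """DC = 2|y ∩ y'| / (|y| + |y'|), Eq. (6), on binarized masks."""
    y = y_true.astype(bool)
    y_pred = (y_prob >= thresh)          # binarize the network's probability map
    intersection = np.logical_and(y, y_pred).sum()
    return 2.0 * intersection / (y.sum() + y_pred.sum())

def iou(y_true, y_prob, thresh=0.5):
    """IoU = |y ∩ y'| / |y ∪ y'| between the target mask and the prediction."""
    y = y_true.astype(bool)
    y_pred = (y_prob >= thresh)
    intersection = np.logical_and(y, y_pred).sum()
    union = np.logical_or(y, y_pred).sum()
    return intersection / union

# Toy 4x4 masks: the ground truth has 4 ROI pixels, the prediction overlaps on 3 of them.
y_true = np.array([[1, 1, 0, 0],
                   [1, 1, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
y_prob = np.array([[0.9, 0.8, 0.1, 0.0],
                   [0.7, 0.2, 0.0, 0.0],
                   [0.6, 0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0, 0.0]])
print(dice_coefficient(y_true, y_prob))  # 2*3 / (4 + 4) = 0.75
print(iou(y_true, y_prob))               # 3 / 5 = 0.6
```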
4. Conclusion

In this paper, a new framework is proposed based on different deep learning models, including InceptionV3, DenseNet121, ResNet50, VGG16 and MobileNetV2, for breast cancer diagnosis using digitized mammograms with high accuracy and low computational time. The proposed framework utilizes the modified segmentation of the U-Net model for the segmentation process. The diagnosis performance is evaluated in terms of the IoU, DC, accuracy, sensitivity, precision, AUC, F1-score and computational time, where the training is initialized by the weights of a network that has already been trained on another dataset. The data augmentation technique is used to resolve data shortcomings and introduce variety to the dataset, which enhances the generalization capability of the pre-trained network and thus alleviates over-fitting, as a large amount of training data is needed. This study is very beneficial and shows that there is no need for a human interface with pre- or post-processing or hand-crafted features. The data augmentation with the modified segmentation of the U-Net model and the InceptionV3 model achieves the best performance: 98.87% accuracy, 98.88% area under the curve (AUC), 98.98% sensitivity, 98.79% precision, 97.99% F1 score and a computational time of 1.2134 s on the DDSM dataset. The proposed framework, when utilizing the combination of MLO and CC views, achieves better performance than utilizing the MLO view only, where the metrics are enhanced to 99.43% accuracy, 99.22% AUC, 99.12% sensitivity, 98.99% precision and 98.98% F1 score. The obtained results reveal that our proposed models achieve better performance than those in the literature by more than 10%.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

[1] R.L. Siegel, K.D. Miller, A. Jemal, Cancer statistics, CA Cancer J. Clin. 65 (1) (2015) 5–29.
[2] D. Saslow, C. Boetes, W. Burke, S. Harms, M.O. Leach, C.D. Lehman, E. Morris, E. Pisano, M. Schnall, S. Sener, R.A. Smith, American cancer society guidelines for breast screening with MRI as an adjunct to mammography, CA Cancer J. Clin. 57 (2) (2007) 75–89.
[3] I. Maniecka-Bryła, M. Bryła, P. Bryła, M. Pikala, The burden of premature mortality in Poland analysed with the use of standard expected years of life lost, BMC Public Health 15 (1) (2015) 101.
[4] R.A. Hubbard, K. Kerlikowske, C.I. Flowers, B.C. Yankaskas, W. Zhu, D.L. Miglioretti, Cumulative probability of false-positive recall or biopsy recommendation after 10 years of screening mammography: A cohort study, Ann. Intern. Med. 155 (8) (2011) 481–492.
[5] L. Tsochatzidis, L. Costaridou, I. Pratikakis, Deep learning for breast cancer diagnosis from mammograms - a comparative study, J. Imaging 5 (37) (2019) 1–11.
[6] A. Shrestha, A. Mahmood, Review of deep learning algorithms and architectures, IEEE Access 7 (2019) 53040–53065.
[7] S. Targ, D. Almeida, K. Lyman, Resnet in Resnet: Generalizing residual architectures, in: International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, 2–4 May 2016, pp. 1–4.
[8] X. Zhang, J. Zou, K. He, J. Sun, Accelerating very deep convolutional networks for classification and detection, IEEE Trans. Pattern Anal. Mach. Intell. 38 (10) (2015) 1943–1955.
[9] H. Shin, H.R. Chang, G. Roth, L. Lu, X. Mingchen, I. Ziyue, Y. Nogues, D. Jianhua, Mollura, R.M. Summers, Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning, IEEE Trans. Med. Imaging 35 (5) (2016) 1285–1298.
[10] S. Sasikala, M. Bharathi, M. Ezhilarasi, M. Ramasubba Reddy, S. Arunkumar, Fusion of MLO and CC view binary patterns to improve the performance of breast cancer diagnosis, Curr. Med. Imaging 14 (4) (2018) 651–658.
[11] J. Arevalo, F.A. González, R. Ramos-Pollán, J.L. Oliveira, M.A.G. Lopez, Representation learning for mammography mass lesion classification with convolutional neural networks, Comput. Methods Programs Biomed. 127 (15) (2016) 248–257.
[12] D. Abdelhafiz, C. Yang, R. Ammar, S. Nabavi, Deep convolutional neural networks for mammography: Advances, challenges and applications, BMC Bioinf. 20 (11) (2019) 1–20.
[13] L. Tsochatzidis, L. Costaridou, I. Pratikakis, Deep learning for breast cancer diagnosis from mammograms - a comparative study, J. Imaging 5 (3) (2019) 37.
[14] O. Ronneberger, P. Fischer, T. Brox, U-net: Convolutional networks for biomedical image segmentation, in: International Conference on Medical Image Computing and Computer-Assisted Intervention, 9351, Springer, Cham, 2015, pp. 234–241.
[15] N. Alam, A. Oliver, E.R. Denton, R. Zwiggelaar, Automatic segmentation of microcalcification clusters, in: Annual Conference on Medical Image Understanding and Analysis, 894, Springer, Cham, 2018, pp. 251–261.
[16] S. Duraisamy, S. Emperumal, Computer-aided mammogram diagnosis system using deep learning convolutional fully complex-valued relaxation neural network classifier, IET Comput. Vis. 11 (8) (2017) 656–662.
[17] E. Deniz, A. Şengür, Z. Kadiroğlu, Y. Guo, V. Bajaj, Ü. Budak, Transfer learning based histopathologic image classification for breast cancer detection, Health Inform. Sci. Syst. (1) (2018) 1–7.
[18] S. Kwok, Multiclass classification of breast cancer in whole-slide images, in: International Conference Image Analysis and Recognition, 10882, Springer, Cham, 2018, pp. 931–940.
[19] C. Li, D. Xue, Z. Hu, H. Chen, Y. Yao, Y. Zhang, M. Li, Q. Wang, N. Xu, A survey for breast histopathology image analysis using classical and deep neural networks, in: International Conference on Information Technologies in Biomedicine, vol. 1011, Springer, Cham, 2019, pp. 222–233.
[20] D.A. Ragab, M. Sharkas, S. Marshall, J. Ren, Breast cancer detection using deep convolutional neural networks and support vector machines, PeerJ 7 (4) (2019) e6201.
[21] C. Liang, M. Li, Z. Bian, W. Lv, D. Zeng, J. Ma, Establishment of a deep feature-based classification model for distinguishing benign and malignant breast tumors on full-field digital mammography, J. Southern Med. Univ. 39 (1) (2019) 88–92.
[22] S.C. Wong, A. Gatt, V. Stamatescu, M.D. McDonnell, Understanding data augmentation for classification: when to warp, in: 2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA), IEEE, Gold Coast, QLD, Australia, 2016, pp. 1–6.
[23] Y. Chen, T. Zheming, Z. Yang, S. Holly, L. Norford, Transfer learning with deep neural networks for model predictive control of HVAC and natural ventilation in smart buildings, J. Cleaner Prod. 254 (119866) (2020) 1–10.
[24] M.B. Tayel, A.M. Elbagoury, Breast infrared thermography segmentation based on adaptive tuning of a fully convolutional network, Curr. Med. Imaging 16 (5) (2020) 611–621.
[25] M.S. Hossain, Micro-calcification segmentation using modified U-net segmentation network from mammogram images, J. King Saud Univ. Comput. Inform. Sci., in press, online 4 Nov. 2019, http://doi.org/10.1016/j.jksuci.2019.10.014.
[26] .
[27] . Accessed 7 June 2019.
[28] https://wiki.cancerimagingarchive.net/display/Public/CBIS-DDSM. Accessed 1 June 2019.
[29] M. Long, Y. Cao, Z. Cao, J. Wang, H. Zhu, M.I. Jordan, Transferable representation learning with deep adaptation networks, IEEE Trans. Pattern Anal. Mach. Intell. 41 (12) (2018) 3071–3085.
[30] A. Amyar, R. Modzelewski, H. Li, S. Ruan, Multi-task deep learning based CT imaging analysis for COVID-19 pneumonia: Classification and segmentation, Comput. Biol. Med. 126 (2020) 104037.
[31] M. Dong, X. Lu, Y. Ma, Y. Guo, Y. Ma, K. Wang, An efficient approach for automated mass segmentation and classification in mammograms, J. Digit. Imaging 28 (2) (2015) 613–625.
[32] A.K. Mohanty, M. Senapati, S. Beberta, S.K. Lenka, Retraction Note to: Mass classification method in mammograms using correlated association rule mining, Neural Comput. Appl. 23 (2) (2013) 273–281.
[33] W. Xie, Y. Li, Y. Ma, Breast mass classification in digital mammography based on extreme learning machine, Neurocomputing 173 (3) (2016) 930–941.
[34] K.U. Sheba, G.S. Raj, An approach for automatic lesion detection in mammograms, Cogent Eng. 5 (1) (2018) Article 1444320, pp. 1–16, http://doi.org/10.1080/23311916.2018.1444320.
[35] M.A. Al-antari, M.A. Al-masni, M.T. Choi, S.M. Han, T. Seong Kim, A fully integrated computer-aided diagnosis system for digital X-ray mammograms via deep learning detection, segmentation, and classification, Int. J. Med. Inf. 117 (2018) 44–54.
[36] W.M. Salama, A.M. Elbagoury, M.H. Aly, Novel breast cancer classification framework based on deep learning, IET Image Proc. 14 (13) (2020) 3254–3259.
[37] R.G. Blanks, M.G. Wallis, R.M. Given-Wilson, Observer variability in cancer detection during routine repeat (incident) mammographic screening in a study of two versus one view mammography, J. Med. Screening 6 (3) (1999) 152–158.
[38] S. Paquerault, N. Petrick, H.P. Chan, B. Sahiner, M.A. Helvie, Improvement of computerized mass detection on mammograms: Fusion of two-view information, Med. Phys. 29 (2) (2002) 238–247.
[39] S.J. Kim, W.K. Moon, N. Cho, J.H. Cha, S.M. Kim, J.G. Im, Computer-aided detection in digital mammography: Comparison of craniocaudal, mediolateral oblique, and mediolateral views, Radiology 241 (3) (2006) 695–701.
[40] B. Sahiner, H.P. Chan, L.M. Hadjiiski, M.A. Helvie, C. Paramagul, J. Ge, Joint two-view information for computerized detection of microcalcifications on mammograms, Med. Phys. 33 (1) (2006) 2574–2585.
[41] R.D. Dantas, M.Z. do Nascimento, R. de Souza Jacomini, D.C. Pereira, R.P. Ramos, Fusion of two-view information: SVD based modeling for computerized classification of breast lesions on mammograms, in: Mammography - Recent Advances, InTech, 2012, pp. 261–278.
[42] L. Sun, L. Li, W. Xu, W. Liu, J. Zhang, G. Shao, A novel classification scheme for breast masses based on multi-view information fusion, in: Proceedings of 4th IEEE International Conference on Bioinformatics and Biomedical Engineering (iCBBE), 2010, pp. 1–4.
[43] S. Sasikala, M. Ezhilarasi, S. Arun Kumar, Detection of breast cancer using fusion of MLO and CC view features through a hybrid technique based on binary firefly algorithm and optimum-path forest classifier, in: Applied Nature-Inspired Computing: Algorithms and Case Studies, Springer, Singapore, 2020, pp. 23–40.