
Textile Fabric Defect Detection and Classification:

An Ensemble Deep Learning Approach

Abstract—Fabric defects significantly impact the textile industry. Maintaining high fabric quality is crucial, but defects can occur during various stages of production. Automating defect detection reduces reliance on manual inspection and leads to faster throughput and increased productivity. Existing fabric defect detection methods often rely on manual visual inspection, which is time-consuming, subjective, and prone to human error, while complex algorithms may not be suitable for real-time deployment in resource-constrained environments. This research addresses the limitations of existing methods by proposing a deep learning and ensemble-based approach for fabric defect detection and classification. A custom fabric defect dataset was constructed specifically for this research. Our proposed approach achieved promising results. YOLOv8m and YOLOv8n-obb variants were fine-tuned for defect detection, achieving a mAP@50 of 0.89. The ensemble approach based on weighted voting achieved an overall classification accuracy of 90% and a balanced F1-score across six defect classes: Broken-button (95%), Button-hike (93%), Color-defect (86%), Foreign-yarn (90%), Hole (84%), and Swing-error (88%). This demonstrates the effectiveness of lightweight object detection and an ensemble approach using five CNN models (VGG16, ResNet50, MobileNet, InceptionV3, Xception) for robust fabric defect detection and classification, highlighting its potential for automated quality control in the textile industry.

Index Terms—Fabric defect detection, Classification, Ensemble learning, Deep learning, YOLOv8, CNN, Object detection, Weighted averaging, Mean Average Precision (mAP), Textile industry

I. INTRODUCTION

The textile industry is one of the foundations of the global economy, contributing significantly to both employment and GDP. High-quality fabric is essential for the textile industry, but defects can arise during production, and one of the major challenges faced by this industry is the detection and classification of fabric defects. Fabric defect costs make up about 80% of the total costs in the garment industry [1]. A study by the Waste and Resources Action Programme found that up to 15% of knitted fabric goes to waste during production due to defects such as snags and needle lines [1]. The exact global financial loss in the textile and garment industry due to fabric defects is not readily available; however, a case study at Bahir Dar Textile Share Company found that the company lost approximately 560,799.6 birr per year due to various defects and problems [2]. Furthermore, defective products can damage a company's reputation, leading to a loss of customer trust and market share. Automated quality assurance of textile fabric materials is one of the most important and demanding computer vision tasks in textile smart manufacturing [3].

Traditional methods for fabric defect detection rely heavily on manual visual inspection. While this approach offers some level of control, it suffers from significant drawbacks: manual inspection is time-consuming, subjective, and prone to human error, and it becomes impractical for high-volume production lines. Recent advancements in deep learning offer a promising solution for automated fabric defect detection and classification. Deep learning models, particularly convolutional neural networks (CNNs), have demonstrated remarkable capabilities in image recognition and object detection tasks. By leveraging these capabilities, we can automate defect detection, freeing up human resources for other tasks and accelerating production. Existing automated fabric defect detection methods often rely on complex algorithms that may not be suitable for real-time deployment in resource-constrained environments. Additionally, some methods require vast amounts of labeled data for training, which can be time-consuming and resource-intensive to collect.

This research addresses these limitations by proposing a novel deep learning and ensemble-based approach for fabric defect detection and classification. We leverage the efficiency and accuracy of YOLOv8, a state-of-the-art object detection model, for defect localization; its lightweight design makes it suitable for real-time deployment in resource-constrained environments. Furthermore, we contribute a custom fabric defect dataset specifically designed for this research. This dataset encompasses a variety of defect types commonly encountered in the textile industry, facilitating model training and evaluation. For defect classification, we employ an ensemble learning approach utilizing five pre-trained CNNs (VGG16, ResNet50, MobileNet, InceptionV3, Xception) with custom top layers. This ensemble leverages the strengths of each individual model, combining their predictions through weighted voting to achieve more robust and accurate classification.

Our proposed approach achieved promising results. The fine-tuned YOLOv8 models for defect detection achieved a mean Average Precision at an IoU threshold of 0.5 (mAP@50) of 0.89, demonstrating good performance in defect localization. The ensemble learning approach based on weighted voting achieved an overall classification accuracy of 90% and a balanced F1-score exceeding 84% across the six defect classes. This level of accuracy signifies the effectiveness of the proposed methodology in identifying and classifying various fabric defects. This research demonstrates the potential of deep learning and ensemble-based methods for automated fabric defect detection and classification and offers a promising solution for automated quality control in the textile industry, where automating defect detection can lead to faster throughput, increased productivity, and ultimately a more consistent product with fewer defects.

The rest of the paper is organized as follows. Section II reviews related work in the field. Section III details the overall methodology of this research, including dataset construction, model selection, prediction, and evaluation metrics. Section IV presents the experimental results, demonstrating the effectiveness of our approach. Finally, Section V concludes the paper and discusses the limitations and future research scope.
II. LITERATURE REVIEW

Machine vision-based fabric defect detection methods have evolved over the years to extract defect-related characteristics from textile images [4] [5]. Earlier approaches utilized grayscale statistics and histogram-based fuzzy inference to identify defects, and were sensitive to illumination changes but robust to rotation and translation [6] [7]. However, these methods struggled with complex image textures and specific defect recognition.

In this literature review, we summarize and analyze recent advancements in automated fabric defect detection methods. Khan et al. [8] aimed to detect fabric defects rather than classify them, performing well on single-colored fabrics but potentially limited on textured or printed fabrics. Their method utilized filter-based edge detection and heuristics within MATLAB to predict defects. RGB images were preprocessed into grayscale, and thresholding techniques were employed to enhance contrast and facilitate edge detection. Canny edge detection identified defect regions, with subsequent noise removal for accuracy. The heuristic thresholding approach achieved a 98% detection rate, showcasing its effectiveness on single-colored fabrics.

Fabric defect detection methods can be categorized into non-motif-based and motif-based approaches [9]. Non-motif-based methods are conventional techniques not reliant on specific motif patterns, while motif-based approaches use motifs as fundamental units for detection, inspired by fabric texture symmetries. Seven distinct approaches exist within fabric defect detection: statistical, spectral, model-based, learning, structural, hybrid, and motif-based. Statistical methods like bi-level thresholding lack robustness for complex defects. Spectral methods, such as those based on the Wigner distribution, analyze the frequency content of fabric textures but need precise parameter tuning. Model-based methods, like Poisson's model, simulate defect patterns but struggle with real-world variability. Learning approaches, particularly with CNNs, automate feature learning from large datasets, enhancing defect identification. Hybrid methods combine statistical and machine learning techniques for robust detection, and motif-based methods exploit fabric texture symmetries for accurate defect analysis. These approaches cater to different textile industry applications based on their unique strengths and weaknesses. Each method reviewed in [9] undergoes a qualitative analysis evaluating detection success rates, sample quantities used for testing, and strengths and weaknesses, which provides valuable insights into their applicability and performance in real-world fabric defect detection scenarios.

Another study [10] focused on fabric defect detection using statistical, spectral, and model-based approaches. Techniques including histogram-based methods, Fourier transform-based approaches, Markov random fields, the FFT, the KL transform, Laws filters, the co-occurrence matrix, Sobel edge detection, fractal dimension, thresholding, regular tape, and Gabor filters were surveyed in this work. The Gabor transform was found to offer optimal defect detection in both the spatial and frequency domains, and Fourier transform-based methods were also found to be effective for fabric defect detection [10]. The authors categorized methods into seven classes and evaluated them based on criteria such as accuracy, cost, reliability, and suitability for different fabric types.

Fabric defect detection, plagued by complex textures and the limitations of traditional methods, received a boost from the CNN-SVM approach proposed in [11]. Inspired by the success of CNNs in defect detection, an improved AlexNet architecture extracts deep features from fabric images, and an SVM replaces the standard Softmax classifier to better separate defects from background textures. Tested on the TILDA dataset, the model achieves 99% accuracy, surpassing traditional methods by a significant margin. However, its current inability to estimate defect size hinders real-world application; incorporating this information would require additional processing or architectural modifications.

A further review [12] categorizes detection methods into traditional (statistical, structural, spectral, model-based) and learning-based (machine learning, deep learning) algorithms. Automated detection improves efficiency and reduces costs, which is vital for Industry 4.0 adaptation. The integration of artificial intelligence (AI) in textile manufacturing aligns with Industry 4.0 principles, emphasizing interdisciplinary advancements.

Qiang Liu et al. introduced an improved YOLOv4 model [13], enhancing defect detection with a SoftPool-based SPP structure that boosts accuracy without compromising speed. Additional post-SPP convolutional layers optimize feature maps for subsequent processing, ensuring compatibility with PANet. Preprocessing techniques like contrast-limited adaptive histogram equalization enhance image quality, increasing mean Average Precision (mAP) by 6% with a negligible 2% FPS decrease. This advancement underscores deep learning's practical impact on defect detection beyond textiles, with potential for broader industry adoption.
III. METHODOLOGY

A. Dataset

A custom dataset of fabric defects was constructed using images captured from garments with various imperfections. The dataset comprises 2468 high-quality images, and each image was manually inspected to ensure quality. The dataset contains images of six types of fabric defects: Button-hike (307 images), Broken-button (545 images), Hole (556 images), Color-defect (384 images), Foreign-yarn (334 images), and Swing-error (317 images). To address potential limitations arising from the finite dataset size, data augmentation techniques were employed; this augmentation process enhanced the model's ability to generalize to unseen variations in fabric defects. The images were manually annotated with defect type and bounding box information. Each annotation contains a class label and the bounding box center (x, y), height, and width.
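The paper does not specify the annotation file format. Purely to make the description above concrete, the sketch below assumes a YOLO-style text label, one line per defect, holding a class index followed by the normalized box center, width, and height; the class-to-index mapping is likewise an assumption.

```python
# Hypothetical YOLO-style label line: "<class> <x_center> <y_center> <width> <height>"
# (values normalized to [0, 1]). The format and class ordering are assumptions; the
# paper only states that each annotation holds a class label plus box center, height,
# and width.
CLASS_NAMES = ["Broken-button", "Button-hike", "Color-defect",
               "Foreign-yarn", "Hole", "Swing-error"]

def parse_label(line: str):
    cls, xc, yc, w, h = line.split()
    return CLASS_NAMES[int(cls)], (float(xc), float(yc), float(w), float(h))

print(parse_label("4 0.52 0.37 0.10 0.08"))  # ('Hole', (0.52, 0.37, 0.1, 0.08))
```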
B. Data Preprocessing

To prepare the fabric defect dataset for model training, a preprocessing pipeline was implemented with the aim of improving model generalizability. The first step focused on standardizing image format and size. As color is not expected to affect the defect type or its localization, all images were converted to grayscale for consistency. Additionally, images were resized to a uniform dimension of 640x640 pixels to ensure compatibility with the chosen models. Next, automatic image orientation was applied to correct any inconsistencies in image rotation, ensuring proper defect representation. To further increase model robustness to variations, data augmentation was employed: three augmented versions were generated for each training image. Augmentations such as flips (horizontal and vertical), 90° rotations (clockwise, counter-clockwise, upside down), rotation (between -15° and +15°), shear (±10° horizontal, ±10° vertical), exposure (between -10% and +10%), blur (up to 0.7 px), and noise (up to 0.1% of pixels) were applied. By incorporating these preprocessing steps, we aimed to create a more robust and diverse dataset, ultimately leading to improved model performance on unseen fabric defect images.
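The paper does not name the preprocessing or augmentation tooling. A minimal OpenCV sketch of the standardization step described above (grayscale conversion and 640x640 resizing) plus the two flip augmentations is shown below, assuming OpenCV and a placeholder file path.

```python
# Illustrative preprocessing sketch (OpenCV assumed; the paper does not name its tooling).
import cv2

def standardize(path: str):
    """Load an image, convert it to grayscale, and resize it to 640x640 as described."""
    img = cv2.imread(path)                        # BGR image from disk
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # color assumed irrelevant to defects
    return cv2.resize(gray, (640, 640))

def flip_augment(img):
    """Return the horizontal and vertical flips used as two of the augmented copies."""
    return [cv2.flip(img, 1), cv2.flip(img, 0)]   # 1 = horizontal flip, 0 = vertical flip

sample = standardize("fabric_sample.jpg")         # placeholder path
augmented = flip_augment(sample)
```

Note that geometric augmentations (flips, rotations, shear) must be applied to the bounding-box annotations as well as the pixels; augmentation tools that accept box coordinates handle this consistently.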
C. Detection Model

In this study, we explored the application of YOLOv8, a state-of-the-art object detection model, for fabric defect detection. We utilized two YOLOv8 variants: YOLOv8m and YOLOv8n-obb. Both models were pre-trained on a large dataset of various objects and then fine-tuned on our custom fabric defect dataset. Transfer learning was used to load pre-trained weights from the generic YOLOv8 model. This approach capitalizes on the model's existing feature extraction capabilities while adapting them to the specific fabric defect detection task. YOLOv8m is a relatively lightweight model designed for efficient inference, making it suitable for resource-constrained scenarios, while YOLOv8n-obb is specifically designed for detecting oriented bounding boxes, making it well suited to this task. Fig. 1 shows an example of a ground-truth bounding box alongside the corresponding box predicted by YOLOv8n-obb.

Fig. 1. Ground-truth bounding box (left) vs. predicted bounding box (right) by YOLOv8n-obb.

For hyperparameter tuning, a batch size of 32 images was chosen based on computational resource constraints and to balance the trade-off between training speed and convergence. The Adam optimizer was used with a learning rate of 0.001 and a momentum of 0.9, and the models were trained for 30 epochs. The dataset was split into training, validation, and testing sets using a 70:20:10 ratio. Metrics such as precision (P), recall (R), Intersection over Union (IoU) across different thresholds, mAP@50 (IoU threshold of 0.5), and mAP@50:95 (average precision across IoU thresholds from 0.5 to 0.95) were used to evaluate the performance of the detection model.
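A minimal sketch of how the fine-tuning described above could be run with the Ultralytics YOLO API is shown below. The training framework is assumed (the paper does not name it), the dataset YAML path is a placeholder, and only the hyperparameters reported above (batch size, Adam, learning rate, momentum, epochs, 640-pixel input) are set explicitly.

```python
# Fine-tuning sketch using the Ultralytics API (assumed tooling; paths are placeholders).
from ultralytics import YOLO

def finetune(weights: str, data_yaml: str = "fabric_defects.yaml"):
    model = YOLO(weights)          # load generic pre-trained weights
    model.train(
        data=data_yaml,            # custom dataset config: splits and the 6 defect classes
        epochs=30,                 # as reported in the paper
        batch=32,
        imgsz=640,
        optimizer="Adam",
        lr0=0.001,
        momentum=0.9,
    )
    return model.val()             # reports precision, recall, mAP50, and mAP50-95

if __name__ == "__main__":
    finetune("yolov8m.pt")         # axis-aligned boxes
    finetune("yolov8n-obb.pt")     # oriented boxes; expects rotated-box labels in the YAML
```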
1) Intersection over Union (IoU): This metric is used in object detection tasks to measure the overlap between a predicted bounding box and a ground-truth bounding box, as defined in Eq. 1:

    IoU = Intersection Area / Union Area    (1)

2) Mean Average Precision (mAP): This metric summarizes the overall detection performance of an object detection model across various IoU thresholds. It takes into account both precision (P) and recall (R) at the different IoU thresholds, as defined in Eq. 2:

    mAP = (Σ_i AP_i) / N    (2)

where:
• Σ_i AP_i is the summation of the Average Precision (AP) values across all considered IoU thresholds i;
• N is the total number of IoU thresholds used for evaluation.
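To make Eq. 1 concrete, the sketch below computes IoU for two axis-aligned boxes given in the center/width/height format used by the annotations. It is illustrative only; the reported detection metrics come from the detector's built-in evaluation, not from hand-rolled code.

```python
# Sketch of Eq. 1 for axis-aligned boxes given as (x_center, y_center, width, height).
def iou(box_a, box_b):
    def to_corners(b):
        cx, cy, w, h = b
        return cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2

    ax1, ay1, ax2, ay2 = to_corners(box_a)
    bx1, by1, bx2, by2 = to_corners(box_b)

    # Overlap rectangle (zero if the boxes do not intersect).
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h

    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

print(iou((0.5, 0.5, 0.4, 0.4), (0.6, 0.5, 0.4, 0.4)))  # 0.6 for these two boxes
```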
D. Classification Model

This research explores ensemble learning methods for fabric defect classification. We leveraged pre-trained convolutional neural network (CNN) architectures: VGG16, ResNet50, MobileNet, InceptionV3, and Xception. To improve model efficiency and adapt to our specific defect categories, we employed transfer learning by freezing the pre-trained layers and adding custom top layers on top of each base model. These custom top layers consist of several fully connected layers with activation functions, culminating in a final output layer with a number of neurons matching our six defect classes. The proposed models use the pre-trained architecture for feature extraction, followed by a sequence of densely connected layers for classification, as shown in Fig. 2. The initial Dense layer has 256 units with a ReLU activation for non-linearity. A Dropout layer with a rate of 0.5 is then used to prevent overfitting. The model subsequently uses two more Dense layers with ReLU activations, each followed by a Dropout layer with decreasing rates (0.2 and 0.1, respectively) for regularization. An L2 kernel regularizer (weight decay) with a coefficient of 0.001 was applied to the three Dense layers to further reduce overfitting. Finally, a Flatten layer reshapes the data into a one-dimensional vector before it is fed into a Dense output layer with 6 units and a softmax activation for multi-class classification.

Fig. 2. Classification Model Architecture.
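A sketch of this classification head on one frozen backbone (VGG16; the same head is attached to each of the five base models) is shown below, with Keras assumed as the framework. The widths of the two intermediate Dense layers (128 and 64) and the 224x224x3 input shape are assumptions, since the paper only specifies the first (256-unit) and output (6-unit) layers; the Flatten layer is placed just before the output layer, following the paper's description.

```python
# Sketch of the described classifier head on a frozen VGG16 backbone (assumed sizes noted).
import tensorflow as tf
from tensorflow.keras import layers, models, regularizers
from tensorflow.keras.applications import VGG16

NUM_CLASSES = 6
l2 = regularizers.l2(0.001)  # weight decay applied to the three hidden Dense layers

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False       # freeze the pre-trained feature extractor

model = models.Sequential([
    base,
    layers.Dense(256, activation="relu", kernel_regularizer=l2),
    layers.Dropout(0.5),
    layers.Dense(128, activation="relu", kernel_regularizer=l2),  # assumed width
    layers.Dropout(0.2),
    layers.Dense(64, activation="relu", kernel_regularizer=l2),   # assumed width
    layers.Dropout(0.1),
    layers.Flatten(),         # flatten just before the output, as described in the paper
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.summary()
```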
E. Prediction

This work aims to utilize the strengths of each model through an ensemble approach to obtain a final prediction. For each image in the dataset, predictions were generated by passing it through all five classification models (VGG16, ResNet50, MobileNet, InceptionV3, Xception) with their custom top layers. These individual predictions were then combined using the weighted voting ensemble method, as shown in Fig. 3. This strategy leverages the historical performance of each model by assigning weights based on its prediction accuracy: models with demonstrably higher accuracy receive higher weights, effectively increasing their influence on the final ensemble prediction.

Fig. 3. Ensemble Prediction Method for Classification.

Mathematically, let P_m(c_i | x_j) denote the probability of class c_i predicted for image x_j by model m, and let A_m denote the accuracy of model m. The weighted vote for each class c_i across all N models is then expressed as Eq. 3:

    Ensemble Vote(c_i) = Σ_m (A_m · P_m(c_i | x_j)) / Σ_m A_m    (3)
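The sketch below implements the accuracy-weighted voting rule of Eq. 3 for a single image. The probability and accuracy values are placeholders for illustration; in the proposed method each weight is the measured accuracy of the corresponding classifier.

```python
# Accuracy-weighted soft voting as in Eq. 3 (placeholder numbers for illustration).
import numpy as np

def weighted_vote(probs, accuracies):
    """probs: (n_models, n_classes) softmax outputs for one image.
    accuracies: (n_models,) per-model accuracy used as weights.
    Returns the (n_classes,) weighted ensemble score."""
    probs = np.asarray(probs)
    weights = np.asarray(accuracies)
    return weights @ probs / weights.sum()

# Example with 3 hypothetical models over the 6 defect classes.
probs = np.array([
    [0.70, 0.10, 0.05, 0.05, 0.05, 0.05],
    [0.20, 0.60, 0.05, 0.05, 0.05, 0.05],
    [0.65, 0.15, 0.05, 0.05, 0.05, 0.05],
])
acc = [0.86, 0.78, 0.80]
scores = weighted_vote(probs, acc)
print(scores, "-> predicted class:", int(np.argmax(scores)))
```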
F. Training and Evaluation

Model training and development were conducted on Google Colab. This platform provided access to a Tesla T4 GPU, enabling efficient training of our deep learning models. The dataset was split into training (70%), validation (15%), and testing (15%) sets. Each model was compiled using the Adam optimizer and the categorical cross-entropy loss function, with accuracy as the evaluation metric. The training process involved 30 epochs and a batch size of 32. The validation data (X_val, y_val) was used to monitor performance during training and prevent overfitting. To optimize the training process, we employed a ModelCheckpoint callback, which monitored validation accuracy and saved the model with the highest validation accuracy achieved during training.
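A sketch of this compile/fit setup is shown below, with Keras assumed (a ModelCheckpoint callback is used). `model` is the classifier sketched in Section III-D; the random arrays merely stand in for the real 70%/15% training and validation splits with one-hot labels, and the checkpoint path is a placeholder.

```python
# Compile/fit sketch with a ModelCheckpoint tracking validation accuracy (Keras assumed).
import numpy as np
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.utils import to_categorical

# Placeholder data standing in for the real training/validation splits.
X_train = np.random.rand(64, 224, 224, 3).astype("float32")
y_train = to_categorical(np.random.randint(0, 6, 64), num_classes=6)
X_val = np.random.rand(16, 224, 224, 3).astype("float32")
y_val = to_categorical(np.random.randint(0, 6, 16), num_classes=6)

model.compile(optimizer=Adam(learning_rate=0.001),
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Keep only the weights from the epoch with the best validation accuracy.
checkpoint = ModelCheckpoint("best_model.keras",      # placeholder path
                             monitor="val_accuracy",
                             save_best_only=True)

model.fit(X_train, y_train,
          validation_data=(X_val, y_val),
          epochs=30, batch_size=32,
          callbacks=[checkpoint])
```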
G. Evaluation

The trained models' performance was evaluated using precision, recall, F1-score, and support for each of the six defect classes. The overall accuracy was evaluated together with macro and weighted averages, providing additional insight into model performance across all classes.

1) Macro Average: This metric is the unweighted mean of the individual class scores. It treats all classes equally, regardless of the number of images per class. Mathematically, the macro average for precision (P) is calculated as Eq. 4:

    Macro Average (P) = (P_Class1 + P_Class2 + ... + P_ClassN) / N    (4)

where N is the total number of classes. The same formula applies to the macro-averaged recall and F1-score.

2) Weighted Average: This metric takes class imbalance into account by considering the number of images (support) in each class, so classes with more images contribute more weight to the overall average. The weighted average precision is calculated as Eq. 5:

    Weighted Average (P) = (S_1 × P_1 + S_2 × P_2 + ... + S_N × P_N) / Total Support    (5)

where Total Support is the sum of images across all classes, 1, 2, ..., N are the class indices, P denotes precision, and S denotes support. Similar calculations are used for the weighted-average recall and F1-score.

Analyzing both macro and weighted averages provides insight into the model's performance across all defect categories, considering both balanced and imbalanced class distributions.
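The macro and weighted averages of Eqs. 4-5 can be obtained directly with scikit-learn, as sketched below; the library choice is an assumption (the paper does not name its metrics tooling) and the labels are placeholders for the six defect classes.

```python
# Macro vs. weighted averaging of per-class metrics (scikit-learn assumed; toy labels).
from sklearn.metrics import classification_report, precision_recall_fscore_support

y_true = [0, 1, 2, 3, 4, 5, 0, 1, 2, 3]
y_pred = [0, 1, 2, 3, 4, 5, 0, 2, 2, 3]

# Per-class precision/recall/F1 plus the macro and weighted averages in one report.
print(classification_report(y_true, y_pred, digits=2))

macro = precision_recall_fscore_support(y_true, y_pred, average="macro")
weighted = precision_recall_fscore_support(y_true, y_pred, average="weighted")
print("macro (P, R, F1):", macro[:3])
print("weighted (P, R, F1):", weighted[:3])
```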

IV. RESULT AND DISCUSSION

The evaluation of defect detection compared the performance of the two YOLOv8 variants, YOLOv8n-obb and YOLOv8m, on the custom dataset. Both models achieved good overall mean Average Precision at an IoU threshold of 0.5 (mAP@50), with YOLOv8n-obb reaching a slightly higher value (0.889) than YOLOv8m (0.817), as shown in Table I.

TABLE I
MODEL COMPARATIVE PERFORMANCE, mAP AT THRESHOLD 0.5

Class           YOLOv8n-obb   YOLOv8m
All             0.89          0.81
Broken button   0.96          0.98
Button hike     0.97          0.99
Color           0.87          0.82
Hole            0.87          0.69
Foreign yarn    0.85          0.79
Swing error     0.84          0.62

However, a closer look reveals trade-offs between overall and class-specific performance. YOLOv8n-obb excelled in detecting common defects like Broken-button, Button-hike, and Color-defect (mAP@50 of 0.87 or higher), while YOLOv8m achieved similar performance for the first two but struggled with the remaining classes. Both models performed comparatively worse on Swing-error and Foreign-yarn detection (mAP@50 of 0.85 or lower), as shown in Table II. Overall, YOLOv8n-obb performs well on the detection task; a per-class comparison of the two models is plotted in Fig. 4, and the normalized confusion matrix for YOLOv8n-obb is shown in Fig. 5. These results suggest that complex and less distinct defect types such as Swing-error and Foreign-yarn require more training data. The choice between the two models therefore depends on application priorities: YOLOv8n-obb is preferable for tasks requiring balanced performance across all defect classes, whereas YOLOv8m might offer a slight speed advantage during deployment.

TABLE II
YOLOV8N-OBB OBJECT DETECTION PERFORMANCE

Class           Precision (P)   Recall (R)   mAP50   mAP50-95
All             0.898           0.848        0.889   0.626
Broken button   0.960           1.000        0.983   0.850
Button hike     0.976           0.960        0.991   0.779
Color defect    0.874           0.826        0.893   0.602
Foreign yarn    0.877           0.736        0.827   0.488
Hole            0.857           0.875        0.883   0.577
Swing error     0.842           0.692        0.760   0.461

Fig. 4. Comparison of mAP at threshold 0.5 for the YOLOv8m and YOLOv8n-obb models on the different classes.

Fig. 5. Normalized confusion matrix for the YOLOv8n-obb detection model.

The ensemble method for classification achieved significant improvements over the individual models, with an overall accuracy of 90% (class-wise results in Table V) and higher macro-averaged precision, recall, and F1-score. Compared to the highest-scoring individual model (ResNet50, at a 0.87 F1-score), the ensemble method reached an F1-score of 0.90, representing a 3.4% increase, as shown in Table III. Additionally, weighted averaging of the individual model predictions further boosted the ensemble's performance, leading to a weighted F1-score of 0.90, as shown in Table IV; the weighted-average metrics of the individual CNNs are compared in Fig. 6. Analyzing class-wise performance revealed strong results for most defect categories, with F1-scores of at least 0.84 across all six classes, the lowest being the "Hole" class (0.84). This suggests the ensemble effectively identifies various clothing defects. However, further investigation is needed to improve performance on the "Hole" class, potentially by incorporating additional training data or exploring alternative ensemble learning strategies.

Fig. 6. Comparison of weighted-average precision, recall, and F1-score for the different CNN models.
TABLE III
COMPARISON OF PERFORMANCE METRICS, MACRO AVERAGE (PRECISION, RECALL, AND F1-SCORE)

Model         Macro Precision   Macro Recall   Macro F1-Score
VGG16         0.80              0.80           0.78
ResNet50      0.85              0.87           0.86
MobileNet     0.76              0.78           0.76
InceptionV3   0.56              0.55           0.54
Xception      0.64              0.64           0.64
Ensemble      0.89              0.90           0.90

TABLE IV
COMPARISON OF PERFORMANCE METRICS, WEIGHTED AVERAGE (PRECISION, RECALL, AND F1-SCORE)

Model         W Precision   W Recall   W F1-Score
VGG16         0.82          0.80       0.80
ResNet50      0.87          0.86       0.86
MobileNet     0.79          0.77       0.77
InceptionV3   0.56          0.56       0.55
Xception      0.66          0.65       0.65
Ensemble      0.91          0.90       0.90

TABLE V
CLASS-WISE PERFORMANCE METRICS FOR THE WEIGHTED VOTING ENSEMBLE METHOD

Class           Precision   Recall   F1-Score
Broken-button   0.99        0.93     0.95
Button-hike     0.92        0.95     0.93
Color-defect    0.78        0.96     0.86
Foreign-yarn    0.85        0.96     0.90
Hole            0.90        0.79     0.84
Swing-error     0.93        0.84     0.88

Fig. 7. Confusion matrix of predictions using the weighted voting ensemble method.

This experiment demonstrates the effectiveness of ensemble learning in fabric defect classification. By combining the strengths of diverse architectures and leveraging weighted voting based on individual model performance, we achieved a more accurate and potentially more generalizable ensemble model.
V. CONCLUSION AND FUTURE WORK

In conclusion, this research investigated the application of pre-trained models and ensemble methods for fabric defect detection and classification. The paper employed pre-trained CNN architectures fine-tuned on a custom dataset with custom top layers and explored weighted voting for ensemble prediction. While the approach showed promise, the primary limitation of this study was the dataset size (around 2500 images). Despite data augmentation, a larger and more diverse dataset encompassing a wider range of fabric types and defect variations could potentially improve model generalizability and overall performance.

For future work, we aim to explore other object detection models and alternative ensemble methods, such as stacking or boosting, to further enhance classification performance. Additionally, investigating the impact of data augmentation and preprocessing techniques on individual model accuracy and overall ensemble performance could be a valuable area for further exploration. By addressing these limitations and pursuing these future directions, this lightweight detection and ensemble-based classification approach has the potential to become a robust and generalizable solution for automated fabric defect classification in the textile industry.

REFERENCES

[1] Knitting Industry. Goodbye to fabric defects with AI fabric inspection from Pailung, Year.
[2] Iris Publishers. Causes of woven fabric defects and cost analysis, 2021.
[3] Hindawi. A real-time fault diagnosis system for digital circuits using machine learning and deep learning techniques, 2021.
[4] Aqsa Rasheed, Bushra Zafar, Amina Rasheed, Nouman Ali, Muhammad Sajid, Saadat Hanif Dar, Usman Habib, Tehmina Shehryar, and Muhammad Tariq Mahmood. Fabric defect detection using computer vision techniques: a comprehensive review. Mathematical Problems in Engineering, 2020:1–24, 2020.
[5] Afshan Latif, Aqsa Rasheed, Umer Sajid, Jameel Ahmed, Nouman Ali, Naeem Iqbal Ratyal, Bushra Zafar, Saadat Hanif Dar, Muhammad Sajid, Tehmina Khalil, et al. Content-based image retrieval and feature extraction: a comprehensive review. Mathematical Problems in Engineering, 2019, 2019.
[6] Thierry Thomas and Michel Cattoen. Automatic inspection of simply patterned material in the textile industry. In Machine Vision Applications in Industrial Inspection II, volume 2183, pages 2–12. SPIE, 1994.
[7] Yuan Ye. Fabric defect detection using fuzzy inductive reasoning based on image histogram statistic variables. In 2009 Sixth International Conference on Fuzzy Systems and Knowledge Discovery, volume 6, pages 191–194. IEEE, 2009.
[8] Md Rakibul Alam Khan and Halima Akhter. Fabric defect detection using image processing. Global Journal of Computer Science and Technology, 22(G1):1–7, 2022.
[9] Henry Y.T. Ngan, Grantham K.H. Pang, and Nelson H.C. Yung. Automated fabric defect detection—a review. Image and Vision Computing, 29(7):442–458, 2011.
[10] Kazım Hanbay, Muhammed Fatih Talu, and Ömer Faruk Özgüven. Fabric defect detection systems and methods—a systematic literature review. Optik, 127(24):11960–11973, 2016.
[11] Junhao Qiu, Yihua Hu, Jinrong Cui, Junjian Lian, Xin Liu, and Jun Ye. Textile defect classification based on convolutional neural network and SVM. AATCC Journal of Research, 8(1 suppl):75–81, 2021.
[12] Chao Li, Jun Li, Yafei Li, Lingmin He, Xiaokang Fu, and Jingjing Chen. Fabric defect detection in textile manufacturing: a survey of the state of the art. Security and Communication Networks, 2021:1–13, 2021.
[13] Qiang Liu, Chuan Wang, Yusheng Li, Mingwang Gao, and Jingao Li. A fabric defect detection method based on deep learning. IEEE Access, 10:4284–4296, 2022.
