Project Report
Developed by
Raushan Kumar (120ME0029)
Umesh Goud Bodiga (120ME0007)
Under the guidance of
Dr. J. Krishnaiah (Associate Professor)
November 2023
Certificate
We, Raushan Kumar (Roll No. 120ME0029) and Umesh Goud Bodiga (Roll No. 120ME0007), hereby declare that the material presented in the Project Report titled Development of Framework for the Fully Automated Intelligent Visual Inspection for Quality Control of Injection Vial represents original work carried out by us in the Department of Mechanical Engineering at the Indian Institute of Information Technology Design and Manufacturing, Kurnool during the years 2023–2024. With our signatures, we certify that:
• We have understood that any false claim will result in severe disciplinary action.
• We have understood that the work may be screened for any form of academic misconduct.
In my capacity as supervisor of the above-mentioned work, I certify that the work presented
in this Report is carried out under my supervision, and is worthy of consideration for the
requirements of B.Tech. Project work.
Abstract
This thesis introduces an innovative approach to quality control in pharmaceutical
manufacturing through the development and deployment of an Automated Intelligent
Visual Inspection Framework. Leveraging the YOLOv5 algorithm, the model achieves an
accuracy of approximately 85%.
The framework extends beyond mere defect recognition, aiming to revolutionize quality
control processes across diverse industries. By leveraging state-of-the-art computer vision
techniques and machine learning algorithms, the system detects defects, anomalies, or
deviations in products, components, or materials with unparalleled accuracy and efficiency.
Acknowledgements
First and foremost, I would like to express my deepest appreciation to Dr. J. Krishnaiah, my thesis advisor, for their guidance, expertise, and continuous encouragement. Their insights and mentorship have been instrumental in shaping the direction of this project.
I extend my thanks to the members of the Department of Mechanical Engineering for their support,
collaboration, and valuable feedback during the various stages of this research. The
collaborative environment provided an enriching experience that significantly contributed
to the project’s success.
Special thanks are due to Umesh Goud Bodiga, whose dedication and collaborative spirit
fostered a positive working atmosphere. The collective effort of the team played a crucial
role in overcoming challenges and achieving the project’s goals.
I would like to acknowledge the support received from [Company or Institution Name],
which provided access to resources, data, and facilities essential for the project’s
implementation and success.
This thesis stands as a testament to the collaborative spirit and collective effort of all those
mentioned above. Each contribution, no matter how small, has played a significant role
in bringing this project to fruition.
Thank you.
Contents
Certificate i
Abstract ii
Acknowledgements iii
Contents iv
List of Figures vi
Abbreviations viii
1 Introduction 1
1.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.2 Scope and Structure . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Practical Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2 Literature Review 7
2.1 Evolution of Quality Control . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.1.1 Previous research on automated visual inspection . . . . . . . . . . 7
2.1.2 Advances in AI, ML, and Computer Vision . . . . . . . . . . . . . . 8
2.2 Automation in Pharmaceutical Manufacturing . . . . . . . . . . . . . . . . . 8
2.2.1 Industry Trends . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.2.2 Previous Research on Automated Visual Inspection . . . . . . . . . 9
2.2.3 Challenges and Considerations . . . . . . . . . . . . . . . . . . . . . 9
2.2.4 Future Directions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
3 Theoretical Underpinnings 11
3.1 Theoretical Underpinnings . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3.1.1 Principles of Automated Visual Inspection . . . . . . . . . . . . . . . 11
3.1.2 Technologies Utilized . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
4 Methodology 14
4.1 Data Collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
4.1.1 Image Acquisition . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
4.1.2 Image Quantity and Quality . . . . . . . . . . . . . . . . . . . . . . 14
4.2 Data Preprocessing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
4.2.1 Image Resizing and Standardization . . . . . . . . . . . . . . . . . . 15
4.2.2 Data Annotation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
4.2.3 Data Augmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
4.3 Model Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
4.3.1 Backbone Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
4.3.2 Neck . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
4.3.3 Detection Head . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
5 Practical Implementation 19
5.1 User-Centric Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
5.1.1 Understanding User Needs . . . . . . . . . . . . . . . . . . . . . . . 19
5.1.2 Accessibility and Usability . . . . . . . . . . . . . . . . . . . . . . . 20
5.2 Automated Image Recognition and Analysis . . . . . . . . . . . . . . . . . . 20
6 Impact Assessment 22
6.1 Efficiency Measures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
6.1.1 Analysis of Response Times . . . . . . . . . . . . . . . . . . . . . . 23
6.1.2 Throughput Evaluation . . . . . . . . . . . . . . . . . . . . . . 23
6.2 Cost-effectiveness Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
7 Conclusion 25
7.1 Framework Development . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
7.1.1 Practical Implementation . . . . . . . . . . . . . . . . . . . . . . . . 26
7.2 Contributions to the Field . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
7.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
7.4 Flexsim Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
7.5 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
List of Figures
List of Tables
Abbreviations
ML Machine Learning
CV Computer Vision
F1 F1 Score
YOLOv5 You Only Look Once version 5
DL Deep Learning
ROI Region of Interest
GPU Graphics Processing Unit
API Application Programming Interface
CNN Convolutional Neural Network
DSP Digital Signal Processing
GUI Graphical User Interface
IoT Internet of Things
QC Quality Control
PID Proportional-Integral-Derivative
SOP Standard Operating Procedure
Chapter 1
Introduction
1.1 Background
The motivation behind this research stems from the critical role of quality control in
pharmaceuticals. Ensuring the safety, efficacy, and compliance of products is paramount.
The shortcomings of manual processes, coupled with the opportunities presented by
advancements in AI, ML, and Computer Vision, drive our quest to revolutionize visual
inspection in pharmaceutical manufacturing.
The framework is guided by the following core objectives:
• Seamless Precision: develop a framework that ensures flawless quality control through interconnected automation and accuracy.
• Real-time Validation: implement a system that transforms pharmaceutical quality control with swift, interconnected decisions at every step.
• Innovation in Defect Detection: revolutionize the detection of defects by intertwining innovation and manufacturing for meticulous identification.
The significance of this study lies in its potential to redefine the landscape of
pharmaceutical quality control. The proposed framework not only addresses existing
challenges but sets new standards for efficiency, cost-effectiveness, and operator
independence. The seamless integration of AI and machine learning promises to usher in
a new era of precision and excellence in pharmaceutical manufacturing.
This thesis is structured to delve into the framework development, its implementation,
results, and the broader implications for the field of pharmaceutical manufacturing. Each
section contributes to the narrative of innovation, from the theoretical underpinnings to
the practical outcomes.
1.1.1 Motivation
The motivation behind this research stems from the critical role of quality control in the
pharmaceutical industry. Ensuring the safety, efficacy, and compliance of pharmaceutical
products is not only a regulatory requirement but a fundamental aspect of building trust
with consumers and healthcare professionals. As the demand for pharmaceutical products
continues to rise, the shortcomings of traditional manual inspection processes become more
apparent.
Shortcomings of Manual Processes
Manual inspection, once the gold standard, faces challenges that hinder its effectiveness in modern pharmaceutical manufacturing. Human
challenges that hinder its effectiveness in modern pharmaceutical manufacturing. Human
subjectivity introduces the potential for errors, and the manual process struggles to
adapt to the dynamic and high-speed nature of contemporary production lines. These
challenges underscore the need for innovative and automated approaches to quality
control.
In the face of these challenges, the motivation for this research is clear: to revolutionize
the way pharmaceutical quality control is conducted. By leveraging advancements in
technology, particularly in the realms of Artificial Intelligence (AI), Machine Learning
(ML), and Computer Vision, this research aims to address the limitations of manual
inspection processes and propel pharmaceutical quality control into a new era of
precision, efficiency, and adaptability.
This motivation is not merely theoretical but responds to the practical demands of an
industry that must keep pace with advancements in science and technology. The proposed
framework seeks to not only rectify current challenges but also anticipate and meet the
evolving needs of pharmaceutical manufacturing in the years to come.
1.1 Design Principles
The design principles underpinning the framework are crucial to its
success. These encompass considerations such as user interface design, system scalability,
and adaptability to diverse pharmaceutical production environments.
1.2 Algorithmic Foundations
The core algorithms driving the automated visual inspection
play a pivotal role. This section delves into the selection and implementation of algorithms,
with a focus on optimizing accuracy, efficiency, and adaptability.
2.1 System Integration
The seamless integration of the framework into existing
pharmaceutical manufacturing processes is a critical aspect. This involves interfacing
with other production line components and ensuring minimal disruption to workflow.
3. Impact Assessment
An integral part of the thesis involves assessing the impact of
the automated visual inspection framework on quality control in pharmaceuticals. This
evaluation goes beyond theoretical efficacy to quantitative measures of efficiency, cost-
effectiveness, and overall system performance in a production environment.
3.1 Efficiency Measures
Efficiency is assessed in terms of the speed and accuracy with which
the framework conducts visual inspections. Metrics such as throughput and response times
provide insights into the system’s efficiency.
The user interface is designed with simplicity and clarity in mind. Operators can easily configure inspections, adjust parameters as needed, and interpret results without the need for extensive training.
1.3 Real-time Monitoring
Operators have access to real-time monitoring features, allowing
them to track the progress of inspections as they occur. Immediate feedback enables quick
decision-making and facilitates timely corrective actions.
2. Automated Image Recognition and Analysis
Minimizing the impact of human error in
quality control is a central focus of the practical implementation. The automated image
recognition and analysis components of the framework leverage advanced algorithms to
ensure consistent and reliable defect recognition.
2.1 Defect Recognition
The system employs state-of-the-art computer vision techniques
to accurately identify defects in injection vials. This includes the detection of anomalies
related to liquid levels, bottle orientation, and cap placement.
2.2 Consistency and Reliability
Automation brings a level of consistency and reliability to
the inspection process that surpasses manual methods. By reducing reliance on manual
expertise, the system enhances the overall quality control efficiency.
2.3 Immediate Analysis and Feedback
One of the key advantages is the ability to provide
immediate analysis and feedback on inspection results. This feature allows for swift
decision-making, enabling timely corrective actions to address any identified issues.
3.1 Traceability
The system maintains detailed records of inspection results, providing
traceability for each inspected injection vial. This traceability ensures accountability and
facilitates the identification of trends over time.
Chapter 2
Literature Review
Manual inspection methods, once the cornerstone of quality control, have played a vital role
in ensuring product integrity. Human inspectors meticulously examined pharmaceutical
products for defects, relying on visual acuity and experience. However, this approach has
inherent limitations, including subjectivity, fatigue, and scalability challenges.
Artificial Intelligence (AI)
Artificial Intelligence (AI) is a branch of computer science that
focuses on creating systems capable of performing tasks that typically require human
intelligence. In the context of quality control, AI systems can analyze data, learn from it,
and make informed decisions, offering a transformative approach to traditional processes.
Machine Learning (ML)
Machine Learning (ML), a subset of AI, empowers systems to learn
and improve from experience without being explicitly programmed. ML algorithms, when
applied to quality control, can enhance accuracy and efficiency by continuously refining
their models based on new data.
Computer Vision
Computer Vision is a field within AI that enables machines to interpret
and make decisions based on visual data. In quality control, computer vision systems can
analyze images or videos of pharmaceutical products, identifying defects and anomalies
with high precision.
Robotics and Process Automation
The integration of robotics and process automation has
become a hallmark of modern pharmaceutical manufacturing. Automated robotic systems
play a crucial role in tasks ranging from drug formulation and dispensing to packaging,
reducing human intervention and enhancing consistency.
Automated Visual Inspection Systems
Automated Visual Inspection (AVI) systems have
emerged as a focal point in pharmaceutical manufacturing. These systems leverage
advanced technologies such as computer vision and machine learning to meticulously
inspect products for defects, ensuring a level of precision unattainable through manual
inspection.
Benefits of Automation
Previous research indicates several key benefits associated with
the automation of visual inspection in pharmaceutical manufacturing. These include
heightened inspection speed, improved accuracy, and the ability to handle large volumes
of products without compromising quality.
Integration with Existing Processes
The seamless integration of automated systems into
existing manufacturing processes is a critical consideration. Compatibility, adaptability,
and minimal disruption to workflow are key factors for successful implementation.
Cost Implications
While automation promises enhanced efficiency, the initial investment
and maintenance costs associated with automated systems must be carefully evaluated.
Smart Manufacturing and Industry 4.0
The concept of Smart Manufacturing, aligned
with Industry 4.0 principles, envisions a connected, data-driven, and highly automated
manufacturing environment. Future research in automation is likely to explore the
integration of advanced technologies such as the Internet of Things (IoT) for real-time
monitoring and decision-making.
Chapter 3
Theoretical Underpinnings
3.2 Core Principles
3.2.1 Accuracy in Object Recognition
At the heart of AVI is the ability
to accurately recognize and localize objects of interest within images. Leveraging state-of-
the-art deep learning algorithms, particularly the YOLOv5 model, our framework excels
in precisely identifying injection vials and relevant attributes such as liquid levels, bottle
orientation, and cap placement.
3.3 Key Components
3.3.1 Image Annotation
The accuracy of AVI hinges on meticulously
annotated datasets. During the model training phase, images are annotated with metadata
detailing characteristics like liquid level, bottle orientation, and cap placement. This
annotation process is crucial for teaching the model to recognize specific attributes.
3.3.2 YOLOv5 Algorithm
The YOLOv5 algorithm serves as the cornerstone of our object
detection model. Its ability to detect objects in real-time, coupled with high precision
and recall rates, positions it as a robust solution for automated visual inspection in the
pharmaceutical manufacturing domain.
3.4 Integration of Machine Learning and Computer Vision
The synergy between machine
learning and computer vision is evident in the framework. The model’s ability to learn from
annotated datasets and make predictions based on real-time image inputs is a testament
to the harmonious integration of these technologies.
The development of the automated intelligent visual inspection framework involved the
utilization of cutting-edge software technologies to enhance efficiency and accuracy. Key
software components include:
4.1.1 TensorFlow
TensorFlow served as the foundational deep learning library for model
training. Its flexibility and scalability were instrumental in implementing the YOLOv5
algorithm, ensuring robust performance in object detection tasks.
4.1.2 OpenCV
OpenCV, an open-source computer vision library, played a pivotal role in
various computer vision tasks. From image preprocessing to real-time processing, OpenCV
contributed to the seamless integration of the visual inspection system.
4.1.3 Matplotlib and NumPy
Matplotlib and NumPy were employed for data
visualization and numerical operations, respectively. These libraries provided tools for
insightful analysis and representation of training and validation results.
4.1.4 Pandas
Pandas, a data manipulation library, facilitated the organization and
manipulation of datasets. Its capabilities were crucial in preparing annotated image
datasets for model training.
User Interface (UI): An intuitive interface designed for operators and quality control
personnel. This facilitates easy configuration of inspections, monitoring of results, and
reviewing inspection history.
Communication Infrastructure: Enabling real-time data transfer between the UI, image
recognition module, and the automated inspection framework. This ensures prompt
decision-making based on inspection results.
Chapter 4
Methodology
4.1 Data Collection
Data collection is crucial for machine learning and deep learning model development. It involves using a camera application to capture diverse images representing both defective and defect-free scenarios. This dataset supports building a robust defect detection model, ensuring accuracy and effectiveness.
4.1.1 Image Acquisition
Obtain images of injection bottles using various sources such as cameras placed in
manufacturing lines, specialized imaging devices, or manual capturing methods. Ensure
the images cover different angles, lighting conditions, and variations in bottle types,
defects, labels, and sizes.
4.2 Data Preprocessing
Data preprocessing is a critical step in preparing image data for deep learning tasks. It involves a series of steps to prepare and clean the images before feeding them into a deep learning model for training. Properly processed and standardized data ensures that the model can learn effectively, generalize well to new examples, and perform accurately during inference on unseen images. The key processes involved are detailed below.
4.2.1 Image Resizing and Standardization
Resize images to a uniform size to ensure consistency across the dataset. Most deep learning models expect inputs of the same dimensions, and resizing also helps reduce computational complexity. Commonly used image sizes for training include 224x224, 256x256, or 512x512 pixels, depending on the model architecture and dataset characteristics.
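As an illustrative sketch of this step (assuming OpenCV and NumPy, both of which are part of this project's toolchain; the 512x512 target size is one of the options listed above):

import cv2
import numpy as np

def preprocess_image(path, size=(512, 512)):
    # Load the image from disk (OpenCV returns BGR channel order)
    image = cv2.imread(path)
    # Resize to the uniform resolution expected by the model
    image = cv2.resize(image, size, interpolation=cv2.INTER_AREA)
    # Standardize pixel values from [0, 255] to [0, 1]
    return image.astype(np.float32) / 255.0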
4.2.2 Data Annotation
Data annotation adds metadata or labels to images, a step crucial for computer vision and object detection. In this process, characteristics such as liquid level, bottle orientation, and cap placement are labeled based on image configurations. Any metadata associated with the images, such as timestamps, camera settings, or annotations, is handled so that it aligns appropriately with the images for future reference or analysis.
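For context, YOLO-family models such as the one used here expect one plain-text label file per image, with one line per annotated object: a class index followed by the normalized bounding-box center and size. The file name and class names below are hypothetical examples, not the project's actual label map.

# vial_0001.txt — one line per annotated object:
# <class_id> <x_center> <y_center> <width> <height>, all normalized to [0, 1]
0 0.512 0.430 0.180 0.640   # hypothetical class 0: vial body
1 0.515 0.120 0.160 0.090   # hypothetical class 1: cap
2 0.512 0.550 0.170 0.300   # hypothetical class 2: liquid level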
4.2.3 Data Augmentation
The following transformations were applied to expand the training set (a code sketch follows this list):
Rotation: Rotating images by a certain degree (e.g., 90, 180 degrees) to simulate different
angles of viewing.
Flipping: Horizontally or vertically flipping images to create mirror versions, which helps
in scenarios where orientation doesn’t affect the interpretation of the image (e.g., objects
like cups, bottles).
Zooming: Enlarging or shrinking specific sections of an image. This helps the model to
focus on various scales of features.
Adding Noise: Introducing random noise, such as Gaussian noise, to simulate imperfections
or variations in real-world data.
Color Jitter: Slight changes in color, brightness, or contrast to simulate different lighting
conditions or color variations.
Shearing: Applying shearing transformations that slant or distort the image, useful for
handling perspective changes.
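A minimal sketch of several of these operations, assuming OpenCV and NumPy (both part of this project's toolchain); the parameter values are illustrative:

import cv2
import numpy as np

def augment(image):
    # Rotation: 90-degree rotation to simulate a different viewing angle
    rotated = cv2.rotate(image, cv2.ROTATE_90_CLOCKWISE)
    # Flipping: horizontal mirror (flipCode=1)
    flipped = cv2.flip(image, 1)
    # Adding noise: Gaussian noise to mimic sensor imperfections
    noise = np.random.normal(0, 10, image.shape).astype(np.float32)
    noisy = np.clip(image.astype(np.float32) + noise, 0, 255).astype(np.uint8)
    # Color jitter: scale contrast (alpha) and shift brightness (beta)
    jittered = cv2.convertScaleAbs(image, alpha=1.1, beta=15)
    return [rotated, flipped, noisy, jittered]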
4.3 Model Configuration
The model is configured with YOLOv5, a leading deep learning algorithm for object detection that delivers high accuracy and speed in real-time detection tasks. YOLO (You Only Look Once) is a family of object detection algorithms known for its speed and accuracy; YOLOv5, developed by Ultralytics and released in mid-2020, is an iteration and improvement upon previous versions such as YOLOv1, YOLOv2, YOLOv3, and YOLOv4.
4.3.2 Neck
YOLOv5 includes a neck module that combines features from different scales to improve the model's ability to detect objects of various sizes, helping it handle both small and large objects in an image. To this end, it incorporates a feature pyramid network that captures object information at multiple scales.
4.3.3 Detection Head
The detection head is responsible for generating bounding boxes, class predictions, and confidence scores. It utilizes anchor boxes to predict object bounding boxes at different scales and aspect ratios, and employs multi-scale predictions to handle objects of various sizes within an image. The head predicts multiple bounding boxes per grid cell, each associated with a specific class probability: for every box, YOLOv5 outputs the bounding-box coordinates (x, y, width, height), an objectness score (the confidence that the box contains an object), and per-class probabilities.
YOLOv5 is built using the PyTorch framework, which makes it convenient for researchers
and practitioners to work with and customize the architecture. The model is trained on
large datasets and has demonstrated competitive performance in terms of accuracy and
speed for real-time object detection tasks.
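For illustration, the publicly documented Ultralytics interface loads YOLOv5 through torch.hub; the image file and custom checkpoint path below are hypothetical:

import torch

# Load the pretrained small YOLOv5 variant from the Ultralytics repository
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')
# A project-specific checkpoint could be loaded instead, e.g.:
# model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt')  # hypothetical path

# Run inference; YOLOv5 accepts file paths, NumPy arrays, or PIL images
results = model('vial.jpg')  # hypothetical image file

# Each row: x_min, y_min, x_max, y_max, confidence, class index, class name
print(results.pandas().xyxy[0])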
Chapter 5
Practical Implementation
5.1 User-Centric Design
5.1.1 Understanding User Needs
Before delving into the design process, it was imperative to understand the needs,
challenges, and expectations of the end-users—operators and quality control personnel in
pharmaceutical manufacturing. Through stakeholder interviews, surveys, and on-site
observations, we gained valuable insights into the workflow and identified pain points in
the existing manual inspection processes.
These insights informed the development of an intuitive interface that allows users to seamlessly configure inspections, monitor results, and review inspection history.
5.1.2 Accessibility and Usability
Recognizing the diverse skill sets and backgrounds of potential users, accessibility and
usability were prioritized throughout the design process. The interface features clear
navigation, straightforward controls, and contextual help to ensure that users, regardless
of their technical expertise, can interact effortlessly with the system.
5.6 Iterative Design and User Feedback
The design process followed an iterative model,
incorporating user feedback at key stages. Prototypes were presented to a representative
group of users, and their insights were invaluable in refining the interface. This iterative
approach resulted in a design that aligns seamlessly with user expectations and operational
requirements.
5.2 Automated Image Recognition and Analysis
5.2.1 Introduction
The core of our framework lies in the implementation of automated
image recognition and analysis—a sophisticated process that leverages state-of-the-art
machine learning and computer vision techniques. This section delves into the
methodologies, algorithms, and technologies employed to achieve seamless and accurate
visual inspection of injection vials.
The framework employs the YOLOv5 algorithm, renowned for its efficiency in real-time object detection tasks, which makes it well-suited for the dynamic environment of pharmaceutical manufacturing.
5.2.3 Dataset Annotation and Preparation
The foundation of any machine learning model
is the quality of the dataset. In this project, meticulous annotation of images in the
dataset was carried out, precisely labeling objects of interest such as injection vials, liquid
levels, bottle orientations, and cap placements. The dataset, comprising diverse scenarios,
ensures the robustness and versatility of the trained model.
5.2.4 Training the Model
Using the annotated dataset, the YOLOv5 algorithm underwent a rigorous training process. The model was exposed to a plethora of images, learning to recognize and accurately localize objects of interest within them. The accuracy of the trained model was assessed using a metric that combines precision and recall (the F1 score), yielding an accuracy of approximately 85%.
5.2.5 Integration into the Production Line
The true test of the automated image
recognition model lies in its real-world application. In pharmaceutical manufacturing,
the trained model was seamlessly integrated into the production line. This involved
connecting the model to cameras strategically positioned along the line, allowing for the
automatic inspection and validation of injection vials.
Chapter 6
Impact Assessment
6.1 Efficiency Measures
Speed Assessment: The speed of the automated image recognition system is crucial for real-
time applications. We assess the speed of the system in processing images and providing
prompt results, ensuring it meets the demands of a dynamic pharmaceutical production
environment.
Accuracy Analysis: Accuracy is paramount in quality control. We delve into the accuracy
of the system by examining its ability to correctly identify and validate injection vials,
liquid levels, and other critical attributes. Precision, recall, and F1 score are key metrics
considered.
6.1.1 Analysis of Response Times
Response times play a crucial role in the practical implementation of the framework. This analysis focuses on the time it takes for the system to capture, process, and respond to images, providing insights into its real-time capabilities.
6.1.2 Throughput Evaluation
Throughput is a measure of the system's capacity to handle a high volume of items within
a given timeframe. We analyze the throughput of the framework in a production line
context, assessing its ability to maintain efficiency during continuous operation.
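One simple way to quantify both metrics is to time the model over a fixed batch of images. This sketch assumes a callable model and a list of preloaded images:

import time

def measure_performance(model, images):
    start = time.perf_counter()
    for image in images:
        model(image)  # run one inspection
    elapsed = time.perf_counter() - start
    latency_ms = 1000.0 * elapsed / len(images)  # average response time per vial
    throughput = len(images) / elapsed           # vials inspected per second
    return latency_ms, throughput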
6.2 Cost-effectiveness Analysis
6.2.1 Initial Implementation Costs and Maintenance Expenses
An integral aspect of any technological solution is its cost-effectiveness. This section breaks down the initial
technological solution is its cost-effectiveness. This section breaks down the initial
implementation costs, considering factors such as hardware, software, and training.
Additionally, ongoing maintenance expenses are scrutinized to provide a comprehensive
understanding of the economic feasibility of the framework.
6.2.2 Return on Investment Considerations
Beyond upfront costs, we delve into the return
on investment (ROI) considerations. Assessing the economic benefits against the costs
incurred, this analysis aims to provide stakeholders with a clear understanding of the
long-term financial impact of implementing the automated visual inspection framework.
Chapter 7
Conclusion
User-Centric Design
The emphasis on a user-friendly interface ensures that operators and
quality control personnel can easily configure inspections, monitor results in real-time,
and review inspection history. This user-centric approach enhances overall usability and
adoption within the production environment.
User-Centric Design in Action
Operators and quality control personnel have successfully
navigated the intuitive interface, configuring inspections, and leveraging real-time
monitoring features. The practical implementation affirms the effectiveness of the
user-centric design in enhancing overall workflow efficiency.
Documentation for Traceability
The documentation and reporting system, when put into
practice, has facilitated traceability and data-driven decision-making. Stakeholders have
utilized the documented results to make informed decisions, contributing to a culture of
continuous improvement.
Figure 7.2: Shape and edge live detection through webcam
The contributions of this research extend beyond the confines of a singular project,
resonating with broader implications for the field of pharmaceutical manufacturing and
quality control. The development and successful implementation of the automated
intelligent visual inspection framework mark a pivotal advancement, setting new
standards for efficiency, reliability, and adaptability in the industry. The user-centric
design, integrating advanced technologies such as AI, ML, and computer vision, not only
enhances the precision of defect detection but also transforms the traditional landscape of quality control.
Moreover, the model developed as part of this research represents a notable contribution
to the repertoire of automated visual inspection tools. Its architecture, designed with
careful consideration of industry-specific needs, showcases the potential of AI and ML in
revolutionizing defect detection processes. The integration of this model into the
pharmaceutical manufacturing production line signifies a paradigm shift towards
real-time, automated decision-making, thereby addressing challenges associated with
traditional manual inspection methods. This paradigm shift contributes to the ongoing
evolution of pharmaceutical manufacturing practices, aligning with the industry’s pursuit
of enhanced quality, safety, and compliance.
7.3 Results
While celebrating the current achievements, there are avenues for future exploration and
enhancement. Opportunities for refining algorithms, expanding the scope of defect
detection, and integrating emerging technologies should be considered for the continued
evolution of the framework.
In conclusion, the framework’s development and practical implementation have not only
met but exceeded expectations. The achievements underscore the transformative impact
of advanced technologies on quality control in pharmaceutical manufacturing, paving the
way for a more efficient, reliable, and future-ready industry.
Appendix A
Model Architecture Details
A.1 Overview
This section provides an in-depth exploration of the architecture of the
automated visual inspection model developed for pharmaceutical manufacturing quality
control. The model’s design encompasses advanced machine learning and computer vision
techniques to achieve robust defect detection capabilities.
A.2 Model Components
A.2.1 Input Layer
The model begins with an input layer that
processes high-resolution images of injection vials captured in real-time. This layer serves
as the foundation for subsequent feature extraction.
A.2.2 Feature Extraction Layers
A series of convolutional layers follow the input layer to
extract hierarchical features from the input images. These layers are crucial for learning
intricate patterns and representations that contribute to accurate defect identification.
A.2.3 Classification Layer
The extracted features are then fed into a classification layer
responsible for distinguishing between different classes, including defect categories and
acceptable states. The model is trained to assign appropriate labels based on the learned
features.
A.2.4 Output Layer
The final layer produces probability scores for each class, indicating
the likelihood of a given injection vial belonging to a specific category. The class with the
highest probability is considered the model’s prediction for the input.
A.4 Training Process
The model underwent training on a meticulously annotated dataset,
comprising images of injection vials with precise labels for defect types and acceptable
states. The training process involved minimizing a defined loss function to enhance the
model’s ability to generalize and make accurate predictions on unseen data.
A.5 Performance Metrics
During evaluation, the model's performance was assessed using
industry-standard metrics, including precision, recall, and accuracy. These metrics provide
quantitative insights into the model’s effectiveness in identifying defects and ensuring
reliable quality control.
This detailed exploration of the model’s architecture aims to provide transparency and
clarity regarding the underlying framework that powers the automated visual inspection
system.
Appendix B
Experimental Data and Results
B.1 Experimental Setup
B.1.1 Dataset
The experiments were conducted on a carefully
curated dataset comprising high-resolution images of injection vials. The dataset included
a diverse range of scenarios, encompassing different defect types and acceptable states.
B.1.2 Training Configuration
The model was trained using a machine learning pipeline
that involved preprocessing the images, configuring hyperparameters, and optimizing the
training process. Details on data augmentation, batch sizes, and training epochs are
provided in this section.
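The exact values are not reproduced in this copy; a configuration of the kind described might look as follows (all numbers hypothetical):

# Hypothetical training configuration, for illustration only
config = {
    "image_size": 512,       # input resolution after resizing
    "batch_size": 16,        # images per training step
    "epochs": 100,           # full passes over the dataset
    "learning_rate": 1e-3,   # initial optimizer step size
    "augmentation": ["rotation", "flip", "zoom", "noise", "color_jitter"],
}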
B.2 Performance Metrics
B.2.1 Precision, Recall, and Accuracy
The model's performance
was evaluated using precision, recall, and accuracy metrics. Precision represents the ratio
of correctly identified defects to the total predicted defects. Recall measures the ratio
of correctly identified defects to the total actual defects. Accuracy provides an overall
assessment of the model’s correct predictions.
B.2.2 F1 Score
The F1 score, a harmonic mean of precision and recall, provides a balanced
measure of the model’s effectiveness in defect detection.
B.3 Experimental Results
B.3.1 Model Validation
The model underwent rigorous validation
on a separate set of images not used during training. This section presents the validation
results, including confusion matrices and visual representations of the model’s predictions.
B.6 Additional Data
Supplementary data, charts, or graphs that provide further insights
into the experimental process and results are included in this section.
Appendix C
Snippets and Implementation Details
C.1.1 Libraries and Dependencies
The model implementation relied on several libraries
for deep learning, computer vision, and data manipulation. The primary dependencies
included TensorFlow for deep learning, OpenCV for image processing, NumPy for
numerical operations, and Matplotlib for visualization.
The code snippets below provide a concise representation of the model architecture. This
includes the definition of layers, activation functions, and the compilation of the model.
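The snippets themselves did not survive extraction; the following is a minimal Keras reconstruction of the layer structure described in Appendix A. Layer sizes, the input resolution, and the number of classes are assumptions, not the project's actual values.

import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 4  # hypothetical, e.g. acceptable, low fill, bad cap, misoriented

model = models.Sequential([
    # Input layer: vial images resized to 224x224 RGB
    layers.Input(shape=(224, 224, 3)),
    # Feature extraction layers: convolution and pooling blocks
    layers.Conv2D(32, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation='relu'),
    layers.MaxPooling2D(),
    # Classification layer: dense layer over the flattened features
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    # Output layer: one probability score per class
    layers.Dense(NUM_CLASSES, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])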
C.2.1 Image Loading and Augmentation
The following code snippets demonstrate how
images were loaded into the model and underwent augmentation for improved
generalization.
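As a stand-in for the lost snippets, Keras' ImageDataGenerator illustrates loading images from a directory with on-the-fly augmentation of the kind described in Section 4.2.3; the directory layout and parameter values are assumptions:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation settings mirror Section 4.2.3: rotation, flips, zoom, jitter, shear
datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=15,
    horizontal_flip=True,
    zoom_range=0.1,
    brightness_range=(0.8, 1.2),
    shear_range=5.0,
    validation_split=0.2,
)

train_gen = datagen.flow_from_directory(
    'dataset/',             # hypothetical directory with one subfolder per class
    target_size=(224, 224),
    batch_size=16,
    subset='training',
)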
Appendix C. Snippets and Implementation Details 37
C.3 Model Training
C.3.1 Training Loop
The code snippets below illustrate the training
loop, including the iteration through epochs and the evaluation of model performance.
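Continuing the sketches above (same hypothetical model and data generator; the epoch count is an assumption):

# Validation split drawn from the same hypothetical directory
val_gen = datagen.flow_from_directory(
    'dataset/', target_size=(224, 224), batch_size=16, subset='validation')

# Keras iterates through the epochs, batching and validating automatically
history = model.fit(train_gen, validation_data=val_gen, epochs=50)

# Evaluate model performance on the held-out split after training
loss, accuracy = model.evaluate(val_gen)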
C.4 Real-world Deployment
C.4.1 Integration with Production Line Cameras
The following
code outlines the integration of the trained model with cameras positioned along the
pharmaceutical manufacturing production line.
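The integration code is likewise missing from this copy; a sketch of the pattern described — polling a line camera, running the trained detector on each frame, and flagging assumed defect classes — could look like this (camera index, checkpoint path, and class names are all hypothetical):

import cv2
import torch

# Hypothetical project checkpoint; pretrained 'yolov5s' weights also work for a demo
model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt')
cap = cv2.VideoCapture(0)  # hypothetical camera index on the production line

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # YOLOv5 expects RGB input; OpenCV captures frames in BGR order
    results = model(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    detections = results.pandas().xyxy[0]
    # Flag frames containing any assumed defect class for operator review
    defects = detections[detections['name'].isin(['low_fill', 'bad_cap'])]
    if not defects.empty:
        print('Vial flagged for review:', defects['name'].tolist())

cap.release()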