
TRIPURA INSTITUTE OF

TECHNOLOGY

SIGN LANGUAGE
RECOGNITION USING
DEEP LEARNING
PROJECT MEMBERS
Nandan Das (2064040019)
Sonia Das (2064040008)
David Malsom (2064040015)
Sandip Chakraborty (27/CS/L/23/69)

Guided By: Prof. Jhunu Debbarma
Contents
• Introduction
• Problem Statement
• Objective
• Feasibility Study
• Methodology
• Tools and Dependencies
• Steps to Build SLR
Contents (contd.)
• Results
• Challenges
• Future Scope
• Conclusion
Introduction
• Sign language is a critical communication tool for the
deaf and hard-of-hearing community.
• The ability to recognize and interpret sign language
through technology can bridge communication gaps and
foster inclusivity.
• This presentation covers a project on sign language
recognition using deep learning, highlighting the
methodology, tools, results, challenges, and future
scope.
Problem Statement
• Traditional methods of learning and interpreting sign
language are often resource-intensive and time-
consuming.
• There is a need for an automated system that can
accurately recognize and interpret sign language in
real-time, facilitating easier communication and
learning.
Objective
• To build an automated system, using computer vision
and deep learning techniques, that robustly and
accurately recognizes and interprets sign language in
real time, facilitating easier communication and
learning for the deaf and hard-of-hearing community.
Feasibility Study
• Technical Feasibility
o Hardware: Availability of affordable high-resolution
cameras.
o Software: Robust deep learning frameworks (TensorFlow,
Keras).
o Data: Accessibility of sign language datasets for training.

• Economic Feasibility
o Cost-effective hardware and open-source software reduce
development costs.
o Potential for high return on investment by addressing a
significant communication barrier.
Feasibility Study
• Operational Feasibility
o User-friendly interface for both deaf individuals and those
learning sign language.
o Real-time performance is achievable with current technology.
Methodology
Methodology
• Data Collection
o Using computer vision to capture hand gestures.
o Collecting images of different signs under varied conditions.

• Preprocessing
o Normalizing and resizing images.
o Segmenting hand regions from the background.
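The normalizing and resizing steps can be sketched with plain NumPy (a minimal illustration; the helper names and the nearest-neighbour sampling are ours — the actual project would resize with OpenCV):

```python
import numpy as np

def resize_nearest(img, size):
    """Resize an H x W x C image to size x size using
    nearest-neighbour index sampling."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size   # source row for each output row
    cols = np.arange(size) * w // size   # source column for each output column
    return img[rows][:, cols]

def preprocess(img, size=300):
    """Resize to a fixed square and scale pixel values to [0, 1]."""
    return resize_nearest(img, size).astype(np.float32) / 255.0

# A fake 480x640 camera frame stands in for a real capture.
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
x = preprocess(frame)
print(x.shape)  # (300, 300, 3)
```

The 300-pixel target matches the image size used during data collection, so training and inference see identically shaped inputs.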
Methodology
• Model Training
o Using a web tool (Teachable Machine) for feature extraction.
o Training models on labeled sign language datasets.

• Real-time Testing
o Implementing a hand detector and classifier to recognize
signs from a live video feed.
Tools and Dependencies
• Programming Language: Python
• Libraries: OpenCV, NumPy, TensorFlow, cvzone
• Hardware: Webcam or built-in camera
• Others: HandDetector and Classifier from cvzone
Python: Python is an interpreted, object-oriented, high-level
programming language with dynamic semantics. Its high-level built-in
data structures, combined with dynamic typing and dynamic
binding, make it very attractive for Rapid Application
Development, as well as for use as a scripting or glue language to
connect existing components together.

OpenCV: OpenCV is a huge open-source library for computer
vision, machine learning, and image processing. It plays a major
role in real-time operation, which is very important in today's
systems. By using it, one can process images and videos to
identify objects, faces, or even the handwriting of a human.

NumPy: NumPy is a Python library used for working with arrays.
It stands for Numerical Python.

cvzone: cvzone is a computer vision package that makes it easy
to run image processing and AI functions. At its core it uses the
OpenCV and MediaPipe libraries.

TensorFlow: TensorFlow is an open-source library developed by
Google, primarily for deep learning applications. It also supports
traditional machine learning.
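All of these libraries are available from PyPI; a typical setup (package names assumed, versions unpinned) is:

```shell
pip install opencv-python numpy tensorflow cvzone
```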
Steps to Build SLR
Three important steps are involved in building a Sign
Language Recognition (SLR) system:
1. Data Collection
2. Model Training
3. Real-time Testing
Data Collection
• Start by importing the necessary dependencies.
• OpenCV, along with the webcam, is used to collect the data.
• A cvzone HandDetector is instantiated to detect the hand
signs. The images are then saved in a folder.
• Each hand sign has its own folder. The image size is set
to 300×300 pixels.
• For each hand sign, 100-200 images are taken.
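Saved crops vary in shape, so each one is pasted onto a fixed 300×300 canvas before saving. A NumPy-only sketch of that padding step (the helper name and nearest-neighbour resize are ours; the project itself works with OpenCV/cvzone images):

```python
import numpy as np

def to_square_canvas(crop, size=300):
    """Scale a cropped hand image to fit a size x size white canvas,
    preserving aspect ratio, and centre it."""
    h, w = crop.shape[:2]
    scale = size / max(h, w)
    nh, nw = int(h * scale), int(w * scale)
    # Nearest-neighbour resize via index sampling.
    rows = np.arange(nh) * h // nh
    cols = np.arange(nw) * w // nw
    resized = crop[rows][:, cols]
    # White background canvas, crop centred on it.
    canvas = np.full((size, size, 3), 255, dtype=np.uint8)
    top, left = (size - nh) // 2, (size - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized
    return canvas

# A tall 200x120 crop becomes a centred 300x300 image.
crop = np.random.randint(0, 256, (200, 120, 3), dtype=np.uint8)
img = to_square_canvas(crop)
print(img.shape)  # (300, 300, 3)
```

Centring on a uniform background keeps the hand's proportions intact, which helps the classifier generalize across differently shaped crops.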
Model Training
• Here we used a web-based tool (Teachable Machine) by
Google to train the model.
• The collected data is first uploaded to the website and labeled
accordingly.
• The website then uses a pre-trained Keras model to train on
the collected data.
• The trained model is then imported into the project folder
along with the labels file.
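Teachable Machine's image-model export includes a labels file with one `index name` pair per line; a small helper to parse it into an index-ordered list of class names (the parsing logic here is ours):

```python
def load_labels(text):
    """Parse labels-file content such as '0 A\n1 B\n2 Hello'
    into an index-ordered list of class names."""
    labels = []
    for line in text.strip().splitlines():
        # Split off the leading numeric index; keep the rest as the name.
        _idx, _, name = line.partition(" ")
        labels.append(name.strip())
    return labels

sample = "0 A\n1 B\n2 Hello"
print(load_labels(sample))  # ['A', 'B', 'Hello']
```

The resulting list lines up with the model's output indices, so position `i` in the prediction vector maps directly to `labels[i]`.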
Real-time Testing
• Now we will capture and classify hand gestures in real-time. The
following components are used for the purpose: Webcam, Hand
detector, Classifier, Image Display
• We first initialize and capture frames from the webcam.
• The hands in each frame are detected using the hand detection
model.
• The detected hand region is preprocessed, and the hand gesture
is classified using the pre-trained model.
• The classification result is displayed on the video feed.
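The classify-and-display step can be illustrated without a camera by stubbing the model output as a probability vector (the class names and helper functions here are hypothetical; the real project draws the label onto the OpenCV frame rather than logging it):

```python
import numpy as np

LABELS = ["A", "B", "Hello"]  # hypothetical class names

def classify(probs, labels):
    """Map a model's probability vector to its top label and confidence."""
    i = int(np.argmax(probs))
    return labels[i], float(probs[i])

def annotate(frame_log, probs, labels):
    """Stand-in for overlaying the prediction on the video feed:
    record the text that would be drawn on the frame."""
    name, conf = classify(probs, labels)
    frame_log.append(f"{name} ({conf:.0%})")
    return frame_log

log = []
annotate(log, np.array([0.1, 0.2, 0.7]), LABELS)
print(log)  # ['Hello (70%)']
```

In the live loop, `probs` would come from the pre-trained classifier run on the preprocessed hand crop, once per captured frame.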
Results
• Successfully created a dataset of sign language images.
• Real-time recognition system achieved high accuracy
in controlled environments.
• Demonstrated the system's ability to classify different
signs with minimal delay.
Challenges
• Data Variability: Ensuring diverse and representative
data for training.
• Real-time Performance: Balancing accuracy with
speed.
• Environmental Factors: Handling different
lighting conditions and backgrounds.
• Generalization: Ensuring the model performs well
across different users.
Future Scope
• Improved Accuracy: Utilizing more advanced models
and larger datasets.
• Mobile Integration: Developing mobile applications
for broader accessibility.
• Multi-language Support: Expanding to recognize sign
languages from different regions.
• Gesture Sequences: Extending the system to
recognize full sentences and phrases.
Conclusion
• Deep learning offers a promising approach to sign
language recognition, enabling real-time, accurate
interpretation of hand gestures.
• This project demonstrates the feasibility and potential of
such systems to enhance communication and
accessibility for the deaf and hard-of-hearing community.
Thank
You
