Sign Language Recognition Using Deep Learning Technology
Project Members
• Nandan Das (2064040019)
• Sonia Das (2064040008)
• David Malsom (2064040015)
• Sandip Chakraborty (27/CS/L/23/69)

Guided by: Prof. Jhunu Debbarma
Contents
• Introduction
• Problem Statement
• Objective
• Feasibility Study
• Methodology
• Tools and Dependencies
• Steps to Build SLR
• Results
• Challenges
• Future Scope
• Conclusion
Introduction
• Sign language is a critical communication tool for the
deaf and hard-of-hearing community.
• The ability to recognize and interpret sign language
through technology can bridge communication gaps and
foster inclusivity.
• This presentation covers a project on sign language
recognition using deep learning, highlighting the
methodology, tools, results, challenges, and future
scope.
Problem Statement
• Traditional methods of learning and interpreting sign
language are often resource-intensive and time-consuming.
• There is a need for an automated system that can
accurately recognize and interpret sign language in
real-time, facilitating easier communication and
learning.
Objective
• To build an automated system using computer vision
and deep learning techniques that robustly and
accurately recognizes and interprets sign language in
real time, facilitating easier communication and
learning for the deaf and hard-of-hearing community.
Feasibility Study
• Technical Feasibility
o Hardware: Availability of affordable high-resolution
cameras.
o Software: Robust deep learning frameworks (TensorFlow,
Keras).
o Data: Accessibility of sign language datasets for training.
• Economic Feasibility
o Cost-effective hardware and open-source software reduce
development costs.
o Potential for high return on investment by addressing a
significant communication barrier.
• Operational Feasibility
o User-friendly interface for both deaf individuals and those
learning sign language.
o Real-time performance is achievable with current technology.
Methodology
• Data Collection
o Using computer vision to capture hand gestures.
o Collecting images of different signs under varied conditions.
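The data collection step above can be sketched with OpenCV: grab frames from the webcam and save a snapshot each time a key is pressed. The folder layout and key bindings are illustrative assumptions, not the project's exact script.

```python
import cv2

# Capture hand-gesture images from the default webcam.
# Press "s" to save the current frame, "q" to quit.
# The "Data/A" folder name is illustrative (one folder per sign).
cap = cv2.VideoCapture(0)
count = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("Data Collection", frame)
    key = cv2.waitKey(1) & 0xFF
    if key == ord("s"):
        cv2.imwrite(f"Data/A/img_{count}.jpg", frame)
        count += 1
    elif key == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

Collecting each sign under varied lighting and backgrounds, as noted above, makes the trained model less sensitive to capture conditions.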
• Preprocessing
o Normalizing and resizing images.
o Segmenting hand regions from the background.
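The normalizing and resizing steps can be sketched in plain NumPy; the target size of 64×64 and the nearest-neighbour sampling are illustrative choices, not the project's fixed parameters.

```python
import numpy as np

def preprocess(image, size=64):
    """Resize an image to size x size with nearest-neighbour
    sampling and scale pixel values to the [0, 1] range."""
    h, w = image.shape[:2]
    rows = np.arange(size) * h // size   # source row for each output row
    cols = np.arange(size) * w // size   # source column for each output column
    resized = image[rows][:, cols]
    return resized.astype(np.float32) / 255.0

# Example: a synthetic 120x160 grayscale frame
frame = np.random.randint(0, 256, (120, 160), dtype=np.uint8)
out = preprocess(frame)
print(out.shape)   # (64, 64)
```

In practice a library resize (e.g. `cv2.resize`) would be used, but the idea is the same: every training and inference image reaches the model with identical shape and value range.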
Methodology
• Model Training
o Using a web tool (Teachable Machine) for feature extraction.
o Training models on labeled sign language datasets.
• Real-time Testing
o Implementing a hand detector and classifier to recognize
signs from a live video feed.
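The real-time step above can be sketched with cvzone's HandDetector and Classifier. The model and label file paths (a Teachable Machine Keras export), the label set, and the canvas size are illustrative assumptions.

```python
import cv2
import numpy as np
from cvzone.HandTrackingModule import HandDetector
from cvzone.ClassificationModule import Classifier

detector = HandDetector(maxHands=1)
# Paths are illustrative: files exported from Teachable Machine
classifier = Classifier("Model/keras_model.h5", "Model/labels.txt")
labels = ["A", "B", "C"]          # illustrative label set
offset, size = 20, 300            # crop margin and canvas side

cap = cv2.VideoCapture(0)
while True:
    ok, img = cap.read()
    if not ok:
        break
    hands, img = detector.findHands(img)
    if hands:
        x, y, w, h = hands[0]["bbox"]
        crop = img[max(0, y - offset):y + h + offset,
                   max(0, x - offset):x + w + offset]
        if crop.size:
            # Place the crop on a white square canvas so the model
            # always receives a consistent, aspect-preserving input.
            canvas = np.full((size, size, 3), 255, dtype=np.uint8)
            ch, cw = crop.shape[:2]
            scale = size / max(ch, cw)
            resized = cv2.resize(crop, (int(cw * scale), int(ch * scale)))
            rh, rw = resized.shape[:2]
            canvas[(size - rh) // 2:(size - rh) // 2 + rh,
                   (size - rw) // 2:(size - rw) // 2 + rw] = resized
            prediction, index = classifier.getPrediction(canvas, draw=False)
            cv2.putText(img, labels[index], (x, y - 30),
                        cv2.FONT_HERSHEY_SIMPLEX, 1.5, (255, 0, 255), 2)
    cv2.imshow("Sign Language Recognition", img)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```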
Tools and Dependencies
• Programming Language: Python
• Libraries: OpenCV, NumPy, TensorFlow, cvzone
• Hardware: Webcam or built-in camera
• Others: HandDetector and Classifier from cvzone
Python: Python is an interpreted, object-oriented, high-level
programming language with dynamic semantics. Its high-level built-in
data structures, combined with dynamic typing and dynamic
binding, make it very attractive for Rapid Application
Development, as well as for use as a scripting or glue language to
connect existing components together.
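The dynamic typing and built-in data structures mentioned above can be illustrated in a few lines (the example values are illustrative):

```python
# Dynamic typing: the same name may be rebound to a different type,
# and built-in containers like dicts need no declarations.
gesture = "A"          # a str
gesture = 65           # rebound to an int: allowed
counts = {"A": 12, "B": 7}
counts["C"] = counts.get("C", 0) + 1   # increment with a default
print(sorted(counts))                  # ['A', 'B', 'C']
```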