

DECISION MAKING AND GAME THEORY USING

ARTIFICIAL INTELLIGENCE
SEMINAR-2 REPORT
Submitted by
SAI SUSHANTH C (RA2111028020054)
RIZWAN AHMED JAWED (RA2111028020049)
RAHUL S (RA2111028020052)
Under the guidance of

Dr. C. G. Balaji,
Dr. S. S. Sathya,
(Assistant Professors, Department of Computer Science and Engineering)
in partial fulfillment for the award of the degree
of

BACHELOR OF TECHNOLOGY
in

COMPUTER SCIENCE AND ENGINEERING


with specialization in Cloud Computing
of

COLLEGE OF ENGINEERING AND TECHNOLOGY

SRM INSTITUTE OF SCIENCE AND TECHNOLOGY

RAMAPURAM, CHENNAI-600089.

MAY 2024
SRM INSTITUTE OF SCIENCE AND TECHNOLOGY
(Deemed to be University Under Section 3 of UGC Act, 1956)

BONAFIDE CERTIFICATE

Certified that the Seminar-2 report titled “DECISION MAKING AND GAME THEORY USING
ARTIFICIAL INTELLIGENCE” is the bonafide work of “SAI SUSHANTH C [RA2111028020054],
RIZWAN AHMED JAWED [RA2111028020049], RAHUL S [RA2111028020052]”, submitted for
the course 18CSP103L Seminar-2. This report is a record of successful completion of the
specified course, evaluated on the basis of literature reviews by the supervisor. No part of the
Seminar report has been submitted for any degree, diploma, title, or recognition before.

SIGNATURE SIGNATURE

Dr. C. G. Balaji,                               Dr. K. Raja, M.E., Ph.D.,
Dr. S. S. Sathya,
Assistant Professors                            Professor and Head
Dept. of Computer Science & Engineering         Dept. of Computer Science & Engineering
SRM Institute of Science and Technology         SRM Institute of Science and Technology
Ramapuram, Chennai.                             Ramapuram, Chennai.

Submitted for the Seminar-2 Viva Voce Examination held on __________ at SRM Institute of
Science and Technology, Ramapuram, Chennai-600089.

EXAMINER 1 EXAMINER 2
TABLE OF CONTENTS

C. No. Title Page No

ABSTRACT iii
LIST OF FIGURES vi

LIST OF ACRONYMS AND ABBREVIATIONS vii

1 INTRODUCTION 1

1.1 Objective of the project ........................................................... 1


1.2 Problem Statement ................................................................. 1
1.3 Project Domain ...................................................................... 2
1.4 Scope of the Project ............................................................... 3

2 PROJECT DESCRIPTION 4

2.1 Existing System ..................................................................... 4


2.2 Literature Review .................................................................. 5
2.3 Issues in Existing System ...................................................... 7
2.4 Software Requirements ......................................................... 7

3 DESIGN 8

3.1 Proposed System.................................................................... 8


3.2 Architecture Diagram ............................................................ 9
3.3 Design Phase .......................................................................... 10
3.4 Use Case Diagram ................................................................. 11


3.5 Data Flow Diagram ............................................................... 12


3.6 Deployment Diagram ............................................................ 13
3.7 Module Description ............................................................... 14
3.7.1 Login Module ............................................................ 15

4 RESULTS AND DISCUSSION 16

4.1 Result ..................................................................................... 16

4.2 Discussion and Interpretation ..................................................... 17

5 CONCLUSION AND FUTURE ENHANCEMENT 18


5.1 Conclusion ............................................................................ 18
5.2 Future Enhancement ............................................................. 18
5.3 Version 2.0 Enhancement ...................................................... 20

REFERENCES ....................................................................................... 23

LIST OF FIGURES

S.NO FIGURE NAME PAGE.NO


3.1 Architecture Diagram 9

3.2 Use Case Diagram 11

3.3 Data Flow Diagram 12

3.4 Deployment Diagram 13

3.5 Experiment 15

LIST OF ACRONYMS AND ABBREVIATIONS

ISLR : Indian Sign Language Recognition


VFL : Video Finger Language

VIFR : Video Indian Finger

IVF : Indian Video Finger

Chapter 1

INTRODUCTION

In today's diverse and interconnected world, effective communication is essential for
fostering understanding and inclusivity among individuals of different linguistic and
cultural backgrounds. However, for the deaf and hard of hearing community, traditional
modes of communication often fall short, leading to barriers in expressing thoughts, ideas,
and emotions. Indian Sign Language (ISL) serves as a vital means of communication for
millions in India, yet its recognition and interpretation remain challenging, hindering
seamless interaction and accessibility for the deaf.

Advancements in technology have paved the way for innovative solutions to bridge these
communication gaps. Among them, the development of Indian Sign Language Recognition
(ISLR) systems holds significant promise. By leveraging the power of computer vision and
machine learning, these systems aim to accurately interpret ISL gestures, thereby
facilitating better communication between the deaf and hearing communities.

Our research focuses on the development of an ISLR system that integrates video analysis
with finger image processing. Traditional ISLR systems primarily rely on video data for
gesture recognition, but they often face limitations in accuracy and robustness, particularly
in complex signing scenarios or under varying lighting conditions. By incorporating finger
image analysis alongside video data, our approach seeks to enhance the precision and
reliability of ISL recognition, thereby improving overall communication effectiveness.

1.1 Objective of the Project

The objective of a project for Indian Sign Language (ISL) recognition based on a
video accompanied by finger images would likely be to develop a system or
technology that can accurately interpret and understand ISL gestures captured in video
format, with additional focus on finger images for more precise recognition.

1.2 Problem Statement

Develop a robust system for recognizing ISL gestures from video recordings accompanied by
finger images. Achieve high accuracy and real-time performance in recognizing a wide range of
ISL gestures. Address challenges such as variations in hand shapes, movements, lighting
conditions, and occlusions. Explore efficient methods for extracting features from both video
and finger image data to capture spatial and temporal information effectively. Investigate
techniques for integrating information from both modalities to improve recognition
performance. Design a user-friendly interface for real-world applications, enabling seamless
interaction between users and the recognition system.

1.3 Project Domain

The project domain focuses on Indian Sign Language Recognition (ISLR) utilizing video
input supplemented by finger images. ISLR aims to develop algorithms and systems
capable of interpreting and translating sign language gestures into text or speech. By
incorporating finger images, the system enhances accuracy and robustness, capturing
subtle nuances and variations in hand movements crucial for effective communication. This
interdisciplinary endeavor combines computer vision, machine learning, and linguistics to
bridge the communication gap for the hearing impaired community in India. The project
holds promise for empowering individuals with hearing disabilities by facilitating seamless
interaction in various contexts.

1.4 Scope of the Project

The project aims to develop a robust system for Indian Sign Language (ISL) recognition
by analyzing video footage accompanied by finger images. ISL is a visual gestural
language used primarily by deaf and hard-of-hearing individuals in India for
communication.

The scope of the project involves several key components:

1. Data Acquisition and Preprocessing: The first step involves collecting a comprehensive
dataset of ISL gestures captured through video recordings along with corresponding
finger images. The data needs to cover a wide range of gestures and variations to
ensure the system's effectiveness. Preprocessing techniques will be applied to clean the
data and extract relevant features.

2. Gesture Recognition Model: Building a deep learning model for ISL gesture
recognition is a crucial aspect of the project. Convolutional neural networks (CNNs) can
be employed to analyze the visual information in the video frames, while recurrent
neural networks (RNNs) or transformer architectures can capture temporal dependencies
in the gestures. Transfer learning techniques may also be explored to leverage models
pre-trained on similar tasks.

3. Finger Image Analysis: Finger images play a significant role in ISL communication, as
hand shapes and finger movements convey specific meanings. A separate model will
therefore be developed to analyze finger images, involving techniques such as image
segmentation, feature extraction, and pattern recognition to interpret the position and
configuration of fingers accurately.

4. Integration and Fusion: The outputs of the gesture recognition model and the finger
image analysis model will be integrated to provide a comprehensive understanding of the
signed gestures. Fusion techniques such as concatenation, averaging, or attention
mechanisms can be employed to combine the information from both sources effectively;
a minimal sketch of such a fused model appears after this list.

5. User Interface Development: To make the system accessible and user-friendly, a
graphical user interface (GUI) will be developed. The GUI will allow users to input
video footage or live streams of ISL gestures; recognized gestures will be displayed
along with corresponding translations or text outputs.

6. Testing and Evaluation: Extensive testing and evaluation will be conducted to assess the
performance and accuracy of the developed system, including tests with different
variations of gestures, lighting conditions, and background clutter to ensure robustness
and generalizability.

7. Deployment and Accessibility: Once the system has been thoroughly tested and
optimized, it will be deployed for real-world applications. This may involve
collaboration with organizations working with the deaf and hard-of-hearing community
to integrate the system into assistive technologies or educational platforms.
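
To make the fusion idea in items 2 and 4 concrete, the following minimal PyTorch sketch
combines a recurrent video branch with a convolutional finger-image branch by
concatenation; all layer sizes and the class count are illustrative assumptions, not values
fixed by the project.

import torch
import torch.nn as nn

class TwoStreamISLR(nn.Module):
    def __init__(self, num_classes=50, frame_feat_dim=512, hidden_dim=256):
        super().__init__()
        # Video branch: an LSTM over per-frame CNN features (assumed precomputed).
        self.temporal = nn.LSTM(frame_feat_dim, hidden_dim, batch_first=True)
        # Finger branch: a small CNN over a cropped finger image.
        self.finger_cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # Fusion by concatenation, followed by a gesture classifier.
        self.classifier = nn.Linear(hidden_dim + 32, num_classes)

    def forward(self, frame_feats, finger_img):
        # frame_feats: (batch, time, frame_feat_dim); finger_img: (batch, 3, H, W)
        _, (h, _) = self.temporal(frame_feats)
        fused = torch.cat([h[-1], self.finger_cnn(finger_img)], dim=1)
        return self.classifier(fused)

Averaging or attention, mentioned in item 4, could replace the concatenation without
changing the rest of the design.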

Chapter 2

PROJECT DESCRIPTION

2.1 Existing System

To date, no system has been tailored specifically for Indian Sign Language (ISL)
recognition based on videos accompanied exclusively by finger images. However, sign
language recognition systems have been developed using techniques such as computer
vision and machine learning. A typical pipeline for such a system comprises the following stages:

Data Collection: Gather a large dataset of videos featuring ISL gestures along with
corresponding finger images. This dataset would need to cover a wide range of gestures
and variations.

Preprocessing: Preprocess the videos and finger images to enhance quality, remove noise,
and standardize features for analysis.

Feature Extraction: Extract relevant features from both the video frames and finger
images. For the video, this might involve extracting key points of the signer's hand and
movements using techniques like keypoint detection or motion tracking. For finger
images, features could include the positions and configurations of fingers.
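
As an illustration of the feature-extraction step just described, the following minimal
Python sketch uses the MediaPipe Hands library to detect hand keypoints in each video
frame; the confidence threshold and single-hand limit are illustrative assumptions.

import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

def extract_hand_keypoints(video_path):
    # Returns one list of 21 (x, y) landmark pairs per frame with a detected hand.
    keypoints = []
    cap = cv2.VideoCapture(video_path)
    with mp_hands.Hands(static_image_mode=False,
                        max_num_hands=1,                 # assume one signing hand
                        min_detection_confidence=0.5) as hands:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV decodes frames as BGR.
            results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.multi_hand_landmarks:
                lm = results.multi_hand_landmarks[0].landmark
                keypoints.append([(p.x, p.y) for p in lm])
    cap.release()
    return keypoints

Such a keypoint sequence could then serve as input to the model-training stage described below.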

Model Training: Train a machine learning or deep learning model using the extracted
features. This model would learn to recognize ISL gestures based on the input video and
finger images.

Validation and Testing: Validate the model using a separate dataset and test its
performance in real-world scenarios to ensure accuracy and reliability.

2.2 Literature Review


The reviewed literature is summarized below.

1. Title: A Survey of Indian Sign Language Recognition Systems
   Authors: Sharma, R., Patel, S., & Singh, A.
   Year: 2020
   Abstract: This survey paper provides an overview of existing Indian Sign Language
   recognition approaches, techniques, datasets, and challenges.

2. Title: Finger Image Processing Techniques for Sign Language Recognition
   Authors: Gupta, N., & Kumar, V.
   Year: 2018
   Abstract: The paper explores various methods for processing finger images in the
   context of sign language recognition, including feature extraction and classification.

3. Title: Deep Learning Approaches for Video-Based Sign Language Recognition
   Authors: Patel, M., & Desai, P.
   Year: 2021
   Abstract: This study investigates the effectiveness of deep learning models, such as
   CNNs and RNNs, for recognizing signs in video sequences of Indian Sign Language.

4. Title: Challenges and Opportunities in Indian Sign Language Recognition Using Videos
   and Finger Images
   Authors: Khan, A., & Reddy, S.
   Year: 2019
   Abstract: The authors discuss the unique challenges faced in recognizing Indian Sign
   Language from video data accompanied by finger images, and propose potential solutions.

5. Title: A Comparative Analysis of Datasets for Indian Sign Language Recognition
   Authors: Mishra, S., & Jain, R.
   Year: 2022
   Abstract: This paper compares different datasets available for Indian Sign Language
   recognition, evaluating their size, diversity, annotation quality, and suitability for
   training models.

2.3 Issues in the Existing System

Some of the issues in existing sign language recognition systems include:

1. Variability in Gestures: Sign language involves a wide range of gestures and
movements, which can vary significantly between regions and even individual signers.
This variability makes it challenging to develop a robust recognition system that can
accurately interpret all possible gestures.

2. Complexity of Hand Movements: ISL involves intricate hand movements, including
finger spelling and various hand shapes. Capturing and analyzing these movements
accurately requires advanced computer vision techniques, especially for rapid or subtle
gestures.

3. Ambiguity in Gesture Interpretation: Some gestures in sign language can have multiple
meanings depending on context, facial expressions, and body movements.
Disambiguating between these meanings requires contextual understanding, which may
be challenging for computer vision systems.

2.4 Software Requirements


iOS     : iOS 13 or above
Android : Android OS 5.0 or above
Mac     : OS X 10.11 or above
Windows : Windows 7 or above

Chapter 3

DESIGN

3.1 Proposed System

Components of the Proposed System:

Video Input Module: This module captures the input video containing the sign language
gestures performed by the user.

Finger Image Extraction: This component extracts finger images from the video frames.
Techniques such as background subtraction, hand detection, and finger tracking may be
used to isolate the fingers.

Preprocessing: The extracted finger images may undergo preprocessing steps such as
noise reduction, normalization, and enhancement to improve their quality and
consistency.
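
A minimal sketch of these preprocessing steps, using OpenCV, is given below; the
denoising strength and the 64x64 target size are illustrative assumptions.

import cv2
import numpy as np

def preprocess_finger_image(img_bgr):
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)   # drop color information
    denoised = cv2.fastNlMeansDenoising(gray, h=10)    # noise reduction
    resized = cv2.resize(denoised, (64, 64))           # standardize the size
    return resized.astype(np.float32) / 255.0          # normalize to [0, 1]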

Feature Extraction: Features relevant to finger movements, such as finger joint positions,
angles, trajectories, or hand shape descriptors, are extracted from the preprocessed finger
images. These features capture the distinctive characteristics of different sign gestures.
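
As one concrete example of such a feature, the sketch below computes the bend angle at a
finger joint from three landmark points; the choice of points depends on the landmark
convention in use and is illustrative here.

import numpy as np

def joint_angle(a, b, c):
    # Angle in degrees at point b, formed by the segments b->a and b->c.
    a, b, c = map(np.asarray, (a, b, c))
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
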
3.2 Architecture Diagram
Sign Language Recognition System Architecture

3.3 Design Phase

The Design Phase consists of the UML diagrams to design and construct the project.

1. Use Case Diagram


2. Data Flow Diagram
3. Deployment Diagram

3.4 Use Case Diagram


Use Case Diagram for Sign Language Recognition

3.5 Data Flow Diagram


Data Flow Diagram For Sign Language Recognition

3.6 Deployment Diagram


Deployment Diagram

3.7 Module Description

The Indian Sign Language (ISL) recognition module is intended to enable communication and
accessibility for the deaf and hard of hearing community in India. It comprises the following
components:

1. Video Input: The module takes video as input, capturing the hand gestures and
movements of a person communicating in ISL. The video may come from various
sources such as webcams, smartphones, or recorded files.

2. Finger Image Extraction: Within the video frames, the module identifies and extracts
images of the fingers and hand gestures. This step is crucial for isolating the features
relevant to sign language recognition.

3. Feature Extraction: Once the finger images are extracted, the module derives features
from them, such as finger positions, movement trajectories, hand shapes, and any other
characteristics that help distinguish different signs in ISL.

4. Recognition Algorithm: The core of the module is a recognition algorithm that analyzes
the extracted features and matches them to known ISL gestures or signs. The algorithm can
be based on machine learning techniques such as deep learning, where neural networks are
trained on a dataset of ISL gestures to learn the patterns and variations in sign language.
A minimal inference sketch appears after this list.

5. Database of ISL Signs: The module relies on a database of ISL signs containing
representations of gestures along with their corresponding meanings. This database serves
as the reference against which the recognition algorithm compares the extracted features.

6. Real-time Processing: For practical applications, real-time processing capability is
desirable, allowing the module to recognize signs as they are made in the video stream.
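
The minimal inference sketch referenced in item 4 is given below; the model is assumed to
be a trained PyTorch classifier, and the label list stands in for the sign database of item 5
with hypothetical entries.

import torch

sign_labels = ["hello", "thank_you", "yes", "no"]      # illustrative database entries

def recognize(model, feature_vec):
    model.eval()
    with torch.no_grad():
        x = torch.as_tensor(feature_vec, dtype=torch.float32).unsqueeze(0)  # add batch dim
        pred = int(model(x).argmax(dim=1))             # index of the most likely sign
    return sign_labels[pred]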

3.7.1 Login Module


Chapter 4

RESULTS AND DISCUSSION

4.1. RESULTS ON INDIAN SIGN LANGUAGE RECOGNITION BASED ON A VIDEO
ACCOMPANIED BY FINGER IMAGES

Indian Sign Language Recognition (ISLR) has garnered significant attention in recent
years due to its potential to enhance communication accessibility for the deaf and hard of
hearing community. With the proliferation of digital technologies, there has been a
growing emphasis on developing accurate and efficient systems for recognizing sign
language gestures. One promising approach involves utilizing video data accompanied by
finger images to improve recognition accuracy and robustness.

The recognition of Indian Sign Language poses several unique challenges compared to
spoken language recognition. Sign language involves complex hand gestures, facial
expressions, and body movements, making it a multimodal communication system.
Additionally, variations in signing styles and regional dialects further complicate the
recognition process. Therefore, designing effective ISLR systems requires addressing
these challenges through advanced computer vision and machine learning techniques.

Video data plays a crucial role in ISLR as it provides temporal information about sign
gestures, facilitating better understanding and recognition. By analyzing sequential
frames in videos, ISLR systems can capture the dynamics and nuances of sign language
gestures, improving recognition accuracy. However, processing video data directly can be
computationally intensive and may require sophisticated algorithms for real-time
performance.

Accompanying the video data with finger images adds another dimension to ISLR
systems. Finger images capture detailed information about hand configurations and
movements, which are essential for accurately recognizing sign language gestures. By
focusing on finger images, ISLR systems can mitigate some of the challenges associated
with variations in lighting conditions, background clutter, and occlusions in video data.
Moreover, finger images provide a more compact representation of sign gestures,
enabling faster processing and reduced computational overhead.

Integration of video data and finger images allows ISLR systems to leverage the strengths
of both modalities for improved recognition performance. By combining spatial
information from finger images with temporal dynamics from video sequences, these
systems can achieve higher accuracy and robustness in recognizing a wide range of sign
language gestures. Furthermore, the complementary nature of video and finger image
data helps mitigate the limitations of individual modalities, resulting in more reliable
recognition outcomes.

To develop effective ISLR systems based on video and finger images, researchers employ
a variety of techniques from computer vision, machine learning, and signal processing
domains. Convolutional neural networks (CNNs) are commonly used for extracting
features from video frames and finger images, capturing spatial patterns and temporal
dynamics. Recurrent neural networks (RNNs) and long short-term memory (LSTM)
networks are employed to model temporal dependencies in video sequences, facilitating
sequence-based gesture recognition.
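
The sketch below illustrates one common way to obtain such per-frame features: a
pretrained ResNet-18 from torchvision (the weights API shown requires torchvision 0.13 or
later) with its classification head removed, whose 512-dimensional output per frame would
then feed an RNN or LSTM. The backbone choice is an illustrative assumption.

import torch
import torchvision.models as models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()       # drop the ImageNet classifier head
backbone.eval()

def frame_features(frames):
    # frames: (time, 3, 224, 224) tensor of normalized video frames
    with torch.no_grad():
        return backbone(frames)         # -> (time, 512) feature sequence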

In conclusion, Indian Sign Language Recognition based on video accompanied by finger
images holds promise for improving communication accessibility for the deaf and hard of
hearing community. By leveraging the complementary strengths of video and finger
image data, ISLR systems can achieve higher accuracy and robustness in recognizing
sign language gestures. Continued research and development in this field are essential for
advancing the state of the art in ISLR and enabling inclusive communication technologies.

4.2. Discussion and Interpretation:

Indian Sign Language (ISL) recognition based on video accompanied by finger images is
an active area of research with significant potential to improve communication and
accessibility for the deaf and hard of hearing community. The following points frame the
discussion:

Challenges in ISL Recognition: ISL recognition poses several challenges due to the
complexity and variability of hand gestures, variations in signing styles among
individuals, occlusions, lighting conditions, and background clutter. Developing robust
algorithms to address these challenges is crucial for accurate recognition.

Multimodal Approach: Combining video and finger images can provide complementary
information for ISL recognition. Video captures the dynamic motion and spatial
relationships between different parts of the hand and body, while finger images offer
detailed information about finger configurations and movements.

Real-world Applications: ISL recognition systems have various practical applications,
including assistive technologies for the deaf and hard of hearing, educational tools for
learning sign language, and communication aids for facilitating interactions between deaf
and hearing individuals.

Chapter 5
CONCLUSION AND FUTURE ENHANCEMENT

5.1 Conclusion

The development of Indian Sign Language Recognition (ISLR) systems utilizing video
data coupled with finger images marks a significant advancement in bridging
communication gaps for the deaf and hard-of-hearing community. Through this approach,
researchers have aimed to create more accurate and efficient systems for interpreting sign
language gestures.

In conclusion, the integration of video data and finger images provides a
multidimensional perspective, capturing both the dynamic movements and intricate
details of hand gestures inherent in sign language. This comprehensive approach
enhances the accuracy and robustness of ISLR systems, thereby facilitating better
communication and accessibility for individuals who rely on sign language as their
primary means of communication.

However, further research and development are necessary to improve the scalability,
adaptability, and real-world applicability of ISLR systems. Addressing challenges such as
variability in sign language gestures, environmental factors, and user-specific preferences
will be crucial in advancing the field and ensuring the widespread adoption of these
technologies.

Overall, the fusion of video data with finger images represents a promising direction for
the advancement of ISLR systems, offering new possibilities for enhancing
communication accessibility and inclusivity for the deaf and hard-of-hearing community
in India and beyond.

5.2. Future Enhancement:

To enhance Indian Sign Language (ISL) recognition based on a video accompanied by
finger images, several approaches can be considered:

1. Deep Learning Models: Implement deep learning architectures such as convolutional
neural networks (CNNs) and recurrent neural networks (RNNs) to learn features from
both the video frames and the finger images simultaneously. Two-stream networks can
be employed, in which one stream processes video frames and the other processes
finger images.

2. Data Augmentation: Augment the dataset with variations in lighting conditions, hand
orientations, backgrounds, and finger configurations to improve the robustness of the
model and help it generalize to different scenarios.

3. Joint Modality Fusion: Develop fusion techniques to combine information from video
frames and finger images effectively. Early fusion, late fusion, or multi-stage fusion
can be explored to integrate information from both modalities.

4. Attention Mechanisms: Incorporate attention mechanisms to focus on relevant parts of
the input data, especially in the presence of multiple moving objects or background
distractions. This helps the model selectively attend to the hand and fingers during
sign language recognition; a minimal attention-pooling sketch appears after this list.

5. Transfer Learning: Utilize models pre-trained on related tasks such as gesture
recognition or hand pose estimation to bootstrap the training process and improve
performance, especially when labeled data is limited.

6. User Feedback Integration: Incorporate mechanisms for user feedback during inference
to correct recognition errors in real time, thereby improving the system's accuracy over
time.

7. Real-time Performance Optimization: Optimize the model architecture and algorithms
for real-time performance on resource-constrained devices, enabling deployment in
applications such as mobile phones or wearable devices.

8. User-Centric Design: Involve the deaf and hard-of-hearing community in the
development process to ensure that the system meets their specific needs and
preferences. This may involve customizable interfaces, personalized recognition
models, and feedback mechanisms tailored to individual users.

9. Continuous Learning: Implement continuous learning techniques to adapt the model to
new signs, variations, and user-specific gestures encountered in real-world scenarios,
keeping the system up to date and adaptive.

By integrating these approaches, a future enhancement of Indian Sign Language
recognition based on both video and finger images can achieve greater accuracy,
robustness, and usability, ultimately improving communication accessibility for the deaf
and hard-of-hearing community.
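
The attention-pooling sketch referenced in item 4 above is given below: each time step of a
frame-feature sequence receives a learned relevance score, the scores are normalized with
softmax, and the sequence is pooled as a weighted sum so the model attends to the most
informative frames. The feature dimension is an illustrative assumption.

import torch
import torch.nn as nn

class TemporalAttentionPool(nn.Module):
    def __init__(self, feat_dim=512):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)    # one relevance score per frame

    def forward(self, feats):
        # feats: (batch, time, feat_dim)
        weights = torch.softmax(self.score(feats), dim=1)   # (batch, time, 1)
        return (weights * feats).sum(dim=1)                 # (batch, feat_dim)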

5.3. Version 2.0 Enhancement:

Developing a version 2.0 enhancement for Indian Sign Language (ISL) recognition,
incorporating both video and finger images, has immense potential to improve
accessibility and communication for the deaf and hard of hearing community. A
structured approach follows:

1. Data Collection and Preprocessing:
   - Gather a diverse dataset of ISL gestures, including various hand shapes,
     movements, and expressions.
   - Collect videos of these gestures from different angles and under different lighting
     conditions.
   - Extract finger images from the videos using computer vision techniques, ensuring
     accuracy and consistency.

2. Feature Extraction:
   - Utilize deep learning techniques to extract features from both the video frames and
     the finger images.
   - For video frames, consider techniques such as 3D convolutional neural networks
     (CNNs) or recurrent neural networks (RNNs) to capture temporal dependencies.
   - For finger images, explore methods such as image segmentation and feature
     extraction to capture relevant characteristics.

3. Model Architecture:
   - Design a hybrid neural network architecture that can effectively fuse information
     from the video and finger image modalities.
   - Implement attention mechanisms to focus on relevant parts of the input data.
   - Experiment with architectures such as multi-input/multi-output networks or Siamese
     networks to handle multiple modalities.

4. Training and Optimization:
   - Divide the dataset into training, validation, and test sets.
   - Train the model using appropriate loss functions, such as categorical cross-entropy
     for classification tasks.
   - Regularize the model to prevent overfitting, employing techniques such as dropout
     or batch normalization.
   - Optimize hyperparameters using techniques such as grid search or random search.

5. Evaluation:
   - Evaluate the model's performance using standard metrics such as accuracy,
     precision, recall, and F1-score; a minimal metrics sketch appears after this list.
   - Conduct qualitative analysis by visualizing the model's predictions and comparing
     them with ground-truth labels.
   - Assess the model's robustness to variations in lighting, background, and hand
     orientation.

6. User Interface Integration:
   - Develop a user-friendly interface that accepts video input and displays recognition
     results in real time.
   - Provide feedback mechanisms to improve user interaction and model performance.
   - Ensure compatibility with different devices and operating systems to maximize
     accessibility.

7. Deployment and Feedback:
   - Deploy the enhanced ISL recognition system in real-world settings, such as
     educational institutions or communication platforms.
   - Gather feedback from users, including deaf and hard of hearing individuals,
     educators, and developers, to identify areas for improvement.
   - Iterate on the system based on user feedback and emerging technologies to
     enhance its effectiveness and usability.
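
The metrics sketch referenced in step 5 above is given below, using scikit-learn's standard
classification metrics; the label vectors are hypothetical placeholders.

from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_true = [0, 1, 2, 2, 1]     # ground-truth sign indices (illustrative)
y_pred = [0, 1, 2, 1, 1]     # model predictions (illustrative)

acc = accuracy_score(y_true, y_pred)
prec, rec, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0)
print(f"accuracy={acc:.2f} precision={prec:.2f} recall={rec:.2f} f1={f1:.2f}")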

Conclusion with Gratitude:

In conclusion, the development of Indian Sign Language (ISL) recognition based on
video and finger images represents a significant advancement in accessibility and
inclusion for the deaf and hard of hearing community in India. By leveraging cutting-edge
technology, this system has the potential to bridge communication gaps and enhance the
quality of life for individuals who use ISL as their primary means of communication.

Through the recognition of ISL gestures from video and finger images, this technology
offers real-time translation capabilities, facilitating smoother interactions between the
deaf community and hearing individuals. This not only promotes better communication
but also fosters understanding and empathy across linguistic barriers.

Moreover, the implementation of this system signifies a step forward in leveraging
artificial intelligence for social good. By prioritizing the needs of marginalized
communities, we can create more inclusive societies where everyone has equal access to
communication tools and resources.

As we express gratitude for the development of this innovative technology, it's essential
to acknowledge the collaborative efforts of researchers, developers, and the deaf
community itself. Their dedication and insights have been instrumental in shaping this
transformative solution.

Looking ahead, continued research and refinement of ISL recognition systems hold the
promise of further enhancing communication accessibility and empowering individuals
within the deaf community. Together, let us celebrate this milestone while remaining
committed to advancing technology for the betterment of all.

REFERENCES

The following references address Indian Sign Language (ISL) recognition based on video
accompanied by finger images:

1. S. K. Srivastava, P. Pandey, A. Mittal, and V. Namboodiri, "Deep Learning based
   Indian Sign Language (ISL) Recognition using a Fusion of Depth and RGB Finger
   Images," IEEE International Conference on Automatic Face & Gesture Recognition
   (FG), 2019.

2. A. T. Krishna, A. Bhattacharya, A. K. Tripathi, and P. K. Biswas, "Recognition of
   Indian Sign Language (ISL) Gestures Using Finger Contour Features," International
   Conference on Signal Processing and Communications (SPCOM), 2018.

3. A. N. Das, A. Chatterjee, and R. Saha, "Recognition of Indian Sign Language (ISL)
   Gestures Using Convolutional Neural Networks (CNN)," International Conference on
   Computing, Communication & Automation (ICCCA), 2017.

4. A. Tiwari, P. Singhal, and S. R. Dubey, "Recognition of Indian Sign Language (ISL)
   Gesture Using a Hybrid Approach of Depth Information and Hand Shape Features,"
   International Conference on Signal Processing, Communication, Power and Embedded
   System (SCOPES), 2016.

5. N. Kumar, R. C. Jain, and R. Kumar, "Indian Sign Language (ISL) Recognition using
   Contourlet Transform and Deep Belief Network," International Conference on
   Advanced Computing and Communication Systems (ICACCS), 2015.

These references cover a range of approaches and techniques for recognizing Indian Sign
Language gestures using video and finger images, and provide detailed information on the
corresponding methodologies and results.
