BACHELOR OF TECHNOLOGY
in
COMPUTER SCIENCE AND ENGINEERING
SRM Institute of Science and Technology, for the facilities extended for the project
We extend our sincere thanks to Dean-CET, SRM Institute of Science and Technology,
Computing, SRM Institute of Science and Technology, for her support throughout the
project work.
Technology, for her suggestions and encouragement at all the stages of the project
work.
We want to convey our thanks to our Project Coordinators, Dr. M. Kanchana, Dr. G.
Usha, Dr. R. Yamini and Dr. K. Geetha, Panel Head, Dr. C Jothi Kumar, Associate
We register our immeasurable thanks to our Faculty Advisor, Dr. Vidhya S, Assistant
Our inexpressible respect and thanks to our guide, Dr. C Jothi Kumar, Associate
Technology, for providing us with an opportunity to pursue our project under her
mentorship. She provided us with the freedom and support to explore the research
topics of our interest. Her passion for solving problems and making a difference in the
We sincerely thank all the staff and students of Computing Technologies Department,
School of Computing, S.R.M Institute of Science and Technology, for their help during
our project. Finally, we would like to thank our parents, family members, and friends
ABSTRACT
TABLE OF CONTENTS
ABSTRACT vi
LIST OF FIGURES ix
LIST OF SYMBOLS AND ABBREVIATIONS xi
1. INTRODUCTION 1
1.1 Introduction 1
1.2 Problem statement 2
1.3 Objectives 3
1.4 Scope 4
1.5 Significance 6
2 LITERATURE SURVEY 8
2.1 Existing research 8
2.1.1 Sign Language Recognition Based on Computer Vision
2.1.2 Sign Language Action Recognition System Based on
Deep Learning
2.1.3 Indian Sign Language Translation using Deep Learning
2.1.4 Indian Sign Language Gesture Recognition Using Deep
Convolutional Neural Network
2.1.5 Detecting and Identifying Sign Languages through
Visual Features
2.1.6 Real-Time Sign Language Detection Using CNN
3 PROPOSED MODEL 13
3.1 Input data 14
3.2 Graph construction 15
3.3 Spatial Processing 16
3.4 Temporal Modeling 17
3.5 Multimodal Fusion 18
3.6 Output layer 20
4 METHODOLOGY 22
4.1 Convolutional Neural Networks 22
4.2 Long Short-Term Memory 23
4.3 MST-GNN 25
4.4 Data Collection 30
4.5 Data Preprocessing 31
5 IMPLEMENTATION 32
5.1 Tools and technologies used 32
5.2 Coding Details 33
6 RESULTS AND DISCUSSION 41
6.1 Output 41
6.2 Experimental Details 42
7 CONCLUSION 46
7.1 Summary 46
7.2 Future Work 46
REFERENCES 47
PLAGIARISM REPORT 51
LIST OF FIGURES
CHAPTER 1
INTRODUCTION

1.1 Introduction
Acknowledging this difficulty, our project seeks to create a novel Indian Sign
Language Recognition System in order to overcome this communication gap.
Through the use of state-of-the-art technologies like Multimodal Spatio-Temporal
Graph Neural Networks (MST-GNN), we hope to develop a reliable, real-time
method for precise ISL recognition. Our system aims to capture the complex
spatial and temporal nuances of ISL gestures by combining RGB-D images,
skeletal joint positions, and Convolutional Neural Network (CNN) features within
a dynamic graph framework for analysis. In addition to meeting a pressing societal
need, this work advances the field of multimodal gesture recognition and opens
the door to more accessible and inclusive digital interactions. Through this
project, we hope to empower the community of people who are hard of hearing
by giving them a dependable way to communicate, encouraging inclusivity, and
improving their quality of life in general.
1.2 Problem statement
1.3 Objectives
Build a strong framework capable of capturing the intricate spatial and temporal
patterns of Indian Sign Language (ISL) gestures by fusing RGB-D images,
skeletal joint positions, and Convolutional Neural Network (CNN) features into a
dynamic graph structure.
In order to process the multimodal data efficiently and guarantee precise real-time
identification of ISL gestures, implement and optimise the Multimodal Spatio-
Temporal Graph Neural Network (MST-GNN).
Provide mechanisms and algorithms that improve the system's flexibility so that
it can handle occlusions, recognise a variety of signing styles, and function well
in different lighting environments.
Make certain that the developed ISL recognition system functions in real time,
facilitating smooth communication and interaction for people with hearing
impairments.
Conduct thorough tests and analyses to verify the precision, effectiveness, and
flexibility of the created system, contrasting its outcomes with current approaches.
Provide a user-friendly interface and make sure the system works with different
platforms so that the hearing-impaired community can benefit from accessibility
and inclusivity in digital communication.
1.4 Scope
The developed ISL recognition system will find use in a number of areas, such as
digital platforms that are accessible, gesture-controlled interfaces, interactive
learning tools, communication devices, inclusive education, and public
accessibility services. The hearing-impaired community will be empowered by
the system's accuracy and adaptability, which will allow for efficient
communication in a variety of settings and situations.
The project's scope also includes carrying out in-depth tests and assessments to
confirm the system's functionality and demonstrate its superiority over current
approaches. The research results and techniques created for this project will add
significant value to the field of multimodal gesture recognition, advancing
academic understanding and propelling the development of assistive
technologies.
1.5 Significance
1.5.5 Enabling Digital Accessibility
By allowing the population of hearing-impaired people to independently access
digital content and services, the ISL recognition system promotes digital
accessibility. It guarantees that websites, apps, and online platforms are available
to all users, regardless of their communication skills.
CHAPTER 2
LITERATURE REVIEW
After reviewing most of the literature available to us through various
publications and established sources on our research topic, we identified a few
interesting features and attributes that could be incorporated into our project
in future enhancements. The purpose of this review is to build familiarity with
current thinking and with the work already done on this specific topic, and it
may justify future research into previously overlooked or understudied areas.
The study of sign language intersects numerous fields and disciplines. Data
gloves and visual sign language recognition are currently the two main research
areas in sign language recognition: the former uses data collected by sensors
for recognition and translation, while the latter uses a camera to capture the
user's hand characteristics for the same purpose. The sign language recognition
system presented in this paper, which combines an improved convolutional neural
network (CNN) with a long short-term memory (LSTM) network, differs from
existing systems not only in sign language translation and recognition but also
in providing a sign language generation function. The system uses a GUI
designed with PyQt for the first time. After logging in, users can choose
between the system's translation and sign language recognition features;
OpenCV captures the images, and the trained CNN network processes them further.
Using LSTM decisions, the model can then recognise American Sign Language.
Additionally, the user has the option to click the voice button, in which case
the system will use the user's voice to write to the video file and convert the
corresponding gesture image into the same pixels. According to experimental
results, the recognition rate of sign language [6] (which consists of Arabic
numerals and American Sign Language) is 90.3%, and the recognition rate of
similar algorithms [5] is 95.52%.
2.1.3 Indian Sign Language Translation using Deep Learning
People with disabilities in the Indian subcontinent communicate with one
another using Indian Sign Language. Unfortunately, not many people are aware of
the semantics of Indian Sign Language. This work presents three deep
architectures for translating an Indian Sign Language sentence into an English
sentence given a video sequence. Three different strategies were attempted to
solve this problem: the first uses an LSTM-based Sequence to Sequence (Seq2Seq)
model; the second uses an LSTM-based Seq2Seq model with attention; and the
third uses an Indian Sign Language Transformer. When these models were assessed
on BLEU scores, the transformer model produced a perfect BLEU score of 1.0 on
the test data.
output. Our accuracy rate is now 93%.
2.1.6 Real-Time Sign Language Detection Using CNN

In order to identify real-time sign language, we first created a dataset comprising 11
sign words. Our customised CNN model was trained using these sign words. Prior
to the CNN model being trained, we preprocessed the dataset. Our results show
that on the test dataset, the customised CNN model can achieve the highest results:
98.6% accuracy, 99% precision, 99% recall, and 99% f1-score.
CHAPTER 3
PROPOSED MODEL
The system is able to capture the depth and fluidity of Indian Sign Language
(ISL) gestures in real time. It comprehends rather than merely recognises. Its
versatility equals its accuracy, guaranteeing that the rich tapestry of ISL with all
of its nuanced details is fully and inclusively captured. For the community of
people with hearing loss, this system serves as a bridge to the outside world as
well as a tool. Not only does it help with communication, but it also creates a
connection that goes beyond sound barriers to allow ideas, thoughts, and
emotions to freely flow.
Fig 3.1 Architecture Diagram
3.1 Input Data

3.1.1 Images
Images integrate depth (D) and colour information (RGB). While depth encodes
spatial information by indicating an object's distance from the camera, colour
encodes visual features. Visual context is supplied to the network by this rich
data source.
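As a rough illustration of this pairing (a minimal NumPy sketch with hypothetical array names and random data standing in for real RGB-D captures), the two streams can be stacked into a single four-channel tensor before feature extraction:

import numpy as np

def make_rgbd_tensor(rgb_frame, depth_frame):
    """Stack a colour frame (H, W, 3) and a depth map (H, W) into one (H, W, 4) tensor."""
    # Normalise colour to [0, 1] and depth to [0, 1] using its own maximum.
    rgb = rgb_frame.astype(np.float32) / 255.0
    depth = depth_frame.astype(np.float32)
    depth = depth / (depth.max() + 1e-6)
    # Concatenate depth as a fourth channel alongside R, G and B.
    return np.concatenate([rgb, depth[..., np.newaxis]], axis=-1)

# Example with random data standing in for a real RGB-D capture.
rgbd = make_rgbd_tensor(np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8),
                        np.random.rand(480, 640))
print(rgbd.shape)  # (480, 640, 4)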
3.1.2 Skeletal Joints
Skeletal joints are critical locations or points that are taken from depth
information. They depict the actual spatial locations of important body parts, like
the head, hands, and limbs. These points act as reference points for
comprehending postures and gestures made by people.
spatial dependencies amongst nodes. The edges could stand for the temporal
order of CNN features or the spatial proximity of skeletal joints, for instance.
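A minimal sketch of how such edges could be assembled is given below; the joint coordinates, distance threshold, and edge rules are illustrative assumptions rather than the report's exact construction:

import numpy as np

def build_spatial_edges(joints, radius=0.2):
    """Connect skeletal joints (N, 3) whose Euclidean distance is below `radius`."""
    n = joints.shape[0]
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(joints[i] - joints[j]) < radius:
                edges.append((i, j))
    return edges

def build_temporal_edges(num_nodes, num_frames):
    """Link each node to the same node in the next frame, encoding temporal order."""
    return [(t * num_nodes + i, (t + 1) * num_nodes + i)
            for t in range(num_frames - 1) for i in range(num_nodes)]

# Toy example: 5 joints tracked across 3 frames.
frame_joints = np.random.rand(5, 3)
print(build_spatial_edges(frame_joints))
print(build_temporal_edges(num_nodes=5, num_frames=3)[:5])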
the weights of skeletal joints or features that are more important for a particular
ISL gesture will be higher, enabling them to contribute more to the spatial
features that the GCNs extract.
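A minimal sketch of a graph convolution step with learnable per-node importance weights follows; it is a simplified stand-in for the GCN layer described here, and the weighting scheme shown is an assumption:

import numpy as np

def gcn_layer(node_feats, adjacency, weight, node_importance):
    """One simplified graph convolution: aggregate neighbours, weight important nodes higher.

    node_feats:      (N, F) input features per node
    adjacency:       (N, N) adjacency matrix with self-loops
    weight:          (F, F_out) learnable projection
    node_importance: (N,) learnable scores; higher values let a node contribute more
    """
    # Symmetric normalisation of the adjacency matrix.
    deg = adjacency.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg + 1e-6))
    a_norm = d_inv_sqrt @ adjacency @ d_inv_sqrt
    # Scale each node's features by its learned importance before aggregation.
    scaled = node_feats * node_importance[:, None]
    return np.maximum(a_norm @ scaled @ weight, 0.0)  # ReLU

# Toy example: 5 nodes with 8-dim features projected to 16 dims.
N, F, F_out = 5, 8, 16
adj = np.eye(N) + (np.random.rand(N, N) > 0.6)
out = gcn_layer(np.random.rand(N, F), adj, np.random.rand(F, F_out), np.random.rand(N))
print(out.shape)  # (5, 16)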
To work with the dynamic graph structure that represents the ISL gesture data,
Spatio-Temporal Graph LSTM cells adapt the conventional LSTM architecture for
sequence data. This allows them to model how the relationships between
different data elements evolve over time. Every LSTM cell is linked to a
specific node within the dynamic graph. The graph's edges link these cells to
their neighbouring nodes, and each node's spatial and temporal context is used
to process and update its hidden state.
3.5.2 Weighted Integration
In MST-GNN, multimodal fusion goes beyond merely concatenating features
from various modalities. Rather, it uses weighted integration, in which each
modality's features are given a weight by the network. The weights, which
indicate the significance of each modality for gesture recognition, are acquired
during the training phase. This enables the network to adjust to the unique
properties of various modalities and gestures.
The adaptive nature of the weighted integration process allows it to modify each
modality's contribution according to the particular gesture being recognised and
the context. For example, the fusion layer will give higher weights to CNN
features that contain important information for a given gesture, highlighting their
significance.
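A minimal sketch of such weighted integration, assuming per-modality feature vectors of equal size and softmax-normalised learnable weights (the exact fusion used in the report may differ):

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def weighted_fusion(modality_feats, modality_logits):
    """Fuse per-modality feature vectors with learned, softmax-normalised weights.

    modality_feats:  dict of name -> (F,) feature vector (all projected to the same size F)
    modality_logits: dict of name -> scalar logit learned during training
    """
    names = list(modality_feats)
    weights = softmax(np.array([modality_logits[n] for n in names]))
    fused = sum(w * modality_feats[n] for w, n in zip(weights, names))
    return fused, dict(zip(names, weights))

# Hypothetical learned logits; in training these would adapt per gesture and context.
feats = {'rgbd_cnn': np.random.rand(128), 'skeleton': np.random.rand(128)}
logits = {'rgbd_cnn': 1.2, 'skeleton': 0.4}
fused, w = weighted_fusion(feats, logits)
print(fused.shape, w)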
3.6 Output Layer
The FC layer applies weights and biases to the input features and outputs a
linear combination of them. This transformation is essential for simplifying
the representation and preparing it for the final classification stage. Through
a series of such operations, this layer converts the fused features from the
multimodal fusion stage into a format suitable for classification.
3.6.3 Classification Result
The output layer's final result is a vector of probabilities, one for each
potential ISL gesture class. For each recognised gesture, the model predicts
the class with the highest probability; this class is regarded as the most
likely gesture and forms the MST-GNN architecture's final recognition result.
The high accuracy of MST-GNN in identifying ISL gestures can be attributed to
its flexibility in responding to changing spatial and temporal dynamics as well
as its capacity to combine data from several modalities.
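A minimal sketch of this output stage, with hypothetical dimensions and illustrative gesture class names:

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(fused_features, W, b, class_names):
    """FC layer (weights and biases applied to the fused features), softmax, then argmax."""
    logits = fused_features @ W + b           # linear combination of the fused features
    probs = softmax(logits)                   # one probability per ISL gesture class
    return class_names[int(np.argmax(probs))], probs

classes = ['hello', 'yes', 'no']              # illustrative gesture classes
W = np.random.randn(128, len(classes)) * 0.01
b = np.zeros(len(classes))
label, probs = classify(np.random.rand(128), W, b, classes)
print(label, probs.round(3))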
CHAPTER 4
METHODOLOGY
4.1 Convolutional Neural Networks

CNNs are a subclass of deep neural networks designed specifically for tasks
involving images and visual data. They are composed of convolutional, pooling,
and fully connected layers, and are primarily used to extract spatial
hierarchies of features automatically and adaptively from input images.

Convolutional Layer: The convolution operation consists of sliding a filter, or
kernel, over the input data to create a feature map that captures spatial patterns.
4.1.2 Formulae
Convolution Operation:
(f ∗ g)(t) = ∫ f(τ) g(t − τ) dτ

Max Pooling Operation:
Y(i, j) = max over (m, n) of X(i×s + m, j×s + n)
where s is the stride and (m, n) ranges over the pooling window.
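As an illustration of how these two operations appear in practice, a small Keras sketch follows; it is not the report's exact network, and the 64x64 four-channel (RGB-D) input shape is an assumption:

import tensorflow as tf

# A minimal CNN: convolutions extract spatial feature maps, max pooling downsamples them.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, kernel_size=3, activation='relu',
                           input_shape=(64, 64, 4)),        # assumed 64x64 RGB-D crops
    tf.keras.layers.MaxPooling2D(pool_size=2, strides=2),   # Y(i, j) = max over a 2x2 window
    tf.keras.layers.Conv2D(64, kernel_size=3, activation='relu'),
    tf.keras.layers.MaxPooling2D(pool_size=2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
])
model.summary()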
4.2 Long Short-Term Memory

Forget Gate: Chooses which cell-state data should be retained or discarded. It
concatenates the current input xt and the previous hidden state ht-1 and passes
the result through a sigmoid activation to produce the forget gate output.

Input Gate: Adds new data to the cell state. It is composed of two layers: a
sigmoid layer that determines which values to update, and a tanh layer that
generates a vector of new candidate values.
Cell State Update: Ct = ft × Ct-1 + it × C̃t
where ft is the forget gate output, it is the input gate output, and C̃t is the
new candidate value.
Output Gate: Determines the next hidden state from the cell state. A sigmoid
activation produces the output gate ot, and the hidden state is computed as
ht = ot × tanh(Ct).

4.2.2 Formulae

Candidate Value:
C̃t = tanh(Wc × [ht-1, xt] + bc)
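A minimal NumPy sketch of one LSTM time step implementing these equations (the weight matrix and biases are random placeholders):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step. W maps [h_{t-1}, x_t] to the four gates; b holds the biases."""
    z = np.concatenate([h_prev, x_t]) @ W + b
    H = h_prev.shape[0]
    f_t = sigmoid(z[0:H])              # forget gate
    i_t = sigmoid(z[H:2*H])            # input gate
    c_hat = np.tanh(z[2*H:3*H])        # candidate value  C~t = tanh(Wc [h_{t-1}, x_t] + bc)
    o_t = sigmoid(z[3*H:4*H])          # output gate
    c_t = f_t * c_prev + i_t * c_hat   # cell state update Ct = ft*C_{t-1} + it*C~t
    h_t = o_t * np.tanh(c_t)           # hidden state
    return h_t, c_t

H, X = 8, 4
W = np.random.randn(H + X, 4 * H) * 0.1
h, c = lstm_step(np.random.rand(X), np.zeros(H), np.zeros(H), W, np.zeros(4 * H))
print(h.shape, c.shape)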
4.3 MST-GNN (Multimodal Spatio-Temporal GNN)
Fig 4.3 Flowchart diagram
4.3.1 Forget Gate (Fg)
For LSTM cells, the forget gate is essential. It determines what data should be
kept and what should be discarded from the previous cell state (Cs-1). The
sigmoid activation function α is used by this gate to squash the input values
between 0 and 1. A weighted combination of the current input (vt), the previous
hidden state (hs-1), and the spatial and temporal context (Ispatial and Itemporal) makes
up the formula for the forget gate, fg. The forget gate increases the LSTM's
adaptability and memory by employing these weighted inputs to assess the
applicability of prior knowledge in the present.
4.3.2 Input Gate (Ig)

An additional essential part of LSTM cells is the input gate, which controls the
addition of new data to the cell state (Cs). It uses a sigmoid activation function,
just like the forget gate, to generate values between 0 and 1. A weighted
combination of the previous hidden state (hs-1), the current input (vt), and the
spatial and temporal context (Ispatial and Itemporal) makes up the formula for the
input gate, Ig. In order to assist the LSTM in adjusting to the incoming data, this
gate determines which new information is necessary for the current time step.
4.3.3 Candidate Update (C’u)
The candidate update process calculates a candidate value for the cell state
update. This candidate state (C’u) is determined using the hyperbolic tangent
(tanh) activation function, which squashes values between -1 and 1. The formula
for C’u takes into account a weighted combination of the previous hidden state
(hs-1), the current input (vt), and the spatial and temporal context (Ispatial and
Itemporal). The candidate state represents a potential update to the cell state and
contributes to capturing the current information content.
4.3.4 Cell State Update (Cs)

The impact of the input gate on the candidate update (C’u) and the forget gate on
the prior cell state (Cs-1) are taken into account when updating the cell state. The
multiplication of elements is used to calculate this update. The outcome Cs is a
refined cell state that incorporates significant new information while preserving
pertinent historical information. The LSTM is guaranteed to accurately capture
both short- and long-term dependencies in the data thanks to this dynamic
updating mechanism.
4.3.5 Output Gate (Og)
The output gate decides which data from the cell state should be revealed as
the LSTM's hidden state. It makes use of the same sigmoid activation function
(α) as the other gates. A weighted combination of the previous hidden state
(hs-1), the current input (vt), and the spatial and temporal context (Ispatial
and Itemporal) is taken into account in the formula for the output gate, Og.
The output gate determines which portions of the cell state should be exposed,
which governs what data is forwarded to further layers or utilised in prediction.
4.3.6 Hidden State Update (hs)

The hidden state (hs) is updated based on the output gate's influence on the
cell state, using the tanh activation function. This update combines
information from the cell state with the output gate's decision to expose
certain information. The hidden state represents the memory and knowledge of
the LSTM at a specific time step and is crucial for capturing the temporal
dependencies in the data.
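A minimal sketch of a single Spatio-Temporal Graph LSTM update for one node follows. It folds the spatial context (here, the mean of neighbour hidden states) and the temporal context (the node's previous hidden state) into each gate; this is an illustrative simplification, not the report's exact formulation:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def st_graph_lstm_node_step(v_t, h_prev, c_prev, neighbour_h, W, b):
    """Update one graph node's LSTM state from its input, its own history, and its neighbours.

    v_t:         (X,) current input features of the node
    h_prev:      (H,) previous hidden state of the node (temporal context, Itemporal)
    c_prev:      (H,) previous cell state of the node
    neighbour_h: list of (H,) hidden states of adjacent nodes (spatial context, Ispatial)
    """
    H = h_prev.shape[0]
    i_spatial = np.mean(neighbour_h, axis=0) if neighbour_h else np.zeros(H)
    z = np.concatenate([v_t, h_prev, i_spatial]) @ W + b
    f_g = sigmoid(z[0:H])              # forget gate
    i_g = sigmoid(z[H:2*H])            # input gate
    c_u = np.tanh(z[2*H:3*H])          # candidate update
    o_g = sigmoid(z[3*H:4*H])          # output gate
    c_s = f_g * c_prev + i_g * c_u     # refined cell state
    h_s = o_g * np.tanh(c_s)           # hidden state exposed to the rest of the graph
    return h_s, c_s

X, H = 6, 8
W = np.random.randn(X + 2 * H, 4 * H) * 0.1
h, c = st_graph_lstm_node_step(np.random.rand(X), np.zeros(H), np.zeros(H),
                               [np.random.rand(H), np.random.rand(H)], W, np.zeros(4 * H))
print(h.shape, c.shape)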
4.4 Data Collection
The foundation of our Indian Sign Language Recognition System rested upon a
diverse and comprehensive dataset. Collecting high-quality, representative data
was crucial for the system's training and validation.
signer ID, and frame-specific data.
CHAPTER 5
IMPLEMENTATION
5.1.2 LabelImg
LabelImg was employed as an annotation tool to mark symbols and gestures in
the dataset. Its user-friendly interface allowed for efficient labeling of RGB-D
images, enabling the creation of a labeled dataset for training and evaluation
purposes.
5.1.4 TensorFlow
TensorFlow, an open-source machine learning framework, was employed for
building and training deep learning models. Its flexibility and scalability allowed
for the implementation of complex neural network architectures, including Graph
Convolutional Networks (GCNs) and Spatio-Temporal Graph LSTM Cells.
5.1.5 OpenCV
OpenCV (Open Source Computer Vision Library) was utilized for image and
video processing tasks. It provided essential functionalities for pre-processing
RGB-D images, skeletal joint position extraction, and depth data manipulation,
ensuring accurate data representation.
5.1.6 Scikit-Learn
Scikit-Learn, a machine learning library in Python, was utilized for various tasks,
including data preprocessing, feature selection, and model evaluation. Its easy-to-
use interfaces and robust algorithms supported the development of reliable
machine learning components.
5.2 Coding Details

# Name of the custom detection model; it is referenced again when the pipeline config is edited.
CUSTOM_MODEL_NAME = 'my_ssd_mobnet'

WORKSPACE_PATH = 'Tensorflow/workspace'
SCRIPTS_PATH = 'Tensorflow/scripts'
APIMODEL_PATH = 'Tensorflow/models'
ANNOTATION_PATH = WORKSPACE_PATH+'/annotations'
IMAGE_PATH = WORKSPACE_PATH+'/images'
MODEL_PATH = WORKSPACE_PATH+'/models'
PRETRAINED_MODEL_PATH = WORKSPACE_PATH+'/pre-trained-models'
CONFIG_PATH = MODEL_PATH+'/my_ssd_mobnet/pipeline.config'
CHECKPOINT_PATH = MODEL_PATH+'/my_ssd_mobnet/'
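Before populating these locations, the corresponding directories can be created; a small sketch is given below, assuming a pure-Python workflow (the original notebook may have used shell commands instead):

import os

# Create every workspace folder referenced above if it does not already exist.
for path in [WORKSPACE_PATH, SCRIPTS_PATH, APIMODEL_PATH, ANNOTATION_PATH,
             IMAGE_PATH, MODEL_PATH, PRETRAINED_MODEL_PATH, CHECKPOINT_PATH]:
    os.makedirs(path, exist_ok=True)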
5.2.2 Creating the Label Map
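The label map file itself is not reproduced in this extract. A minimal sketch of how a label_map.pbtxt could be written for gesture classes such as those shown in Chapter 6 follows; the class names and ids are illustrative assumptions:

labels = [{'name': 'hello', 'id': 1},
          {'name': 'yes', 'id': 2},
          {'name': 'no', 'id': 3}]   # illustrative ISL gesture classes

# Write the label map in the pbtxt format expected by the TensorFlow Object Detection API.
with open(ANNOTATION_PATH + '/label_map.pbtxt', 'w') as f:
    for label in labels:
        f.write('item {\n')
        f.write("    name: '{}'\n".format(label['name']))
        f.write('    id: {}\n'.format(label['id']))
        f.write('}\n')

The imports below belong to the TFRecord generation step, which converts the LabelImg XML annotations into the record files consumed during training.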
import os
import glob
import io
import argparse
import xml.etree.ElementTree as ET
from collections import namedtuple

import pandas as pd
import tensorflow.compat.v1 as tf
from PIL import Image
from object_detection.utils import dataset_util, label_map_util
parser = argparse.ArgumentParser(
    description="XML-to-TFRecord converter for LabelImg annotations")
parser.add_argument("-x", "--xml_dir", type=str,
                    help="Path to the folder containing the input .xml files.")
parser.add_argument("-l", "--labels_path", type=str,
                    help="Path to the label map (.pbtxt) file.")
parser.add_argument("-o", "--output_path", type=str,
                    help="Path of the output TFRecord (.record) file.")
parser.add_argument("-i", "--image_dir", type=str, default=None,
                    help="Path to the folder containing the input images. "
                         "Defaults to the same directory as XML_DIR.")
parser.add_argument("-c", "--csv_path", type=str, default=None,
                    help="Path of output .csv file. If none provided, then no file will be written.")
args = parser.parse_args()

if args.image_dir is None:
    args.image_dir = args.xml_dir

label_map = label_map_util.load_labelmap(args.labels_path)
label_map_dict = label_map_util.get_label_map_dict(label_map)
def xml_to_csv(path):
    """Iterates through all .xml files (generated by labelImg) in a given
    directory and combines them in a single Pandas dataframe.

    Parameters:
    ----------
    path : str
        The path containing the .xml files
    Returns
    -------
    Pandas DataFrame
        The produced dataframe
    """
    xml_list = []
    for xml_file in glob.glob(path + '/*.xml'):
        tree = ET.parse(xml_file)
        root = tree.getroot()
        for member in root.findall('object'):
            value = (root.find('filename').text,
                     int(root.find('size')[0].text),
                     int(root.find('size')[1].text),
                     member[0].text,
                     int(member[4][0].text),
                     int(member[4][1].text),
                     int(member[4][2].text),
                     int(member[4][3].text)
                     )
            xml_list.append(value)
    column_name = ['filename', 'width', 'height',
                   'class', 'xmin', 'ymin', 'xmax', 'ymax']
    xml_df = pd.DataFrame(xml_list, columns=column_name)
    return xml_df


def class_text_to_int(row_label):
    return label_map_dict[row_label]
def split(df, group):
    data = namedtuple('data', ['filename', 'object'])
    gb = df.groupby(group)
    return [data(filename, gb.get_group(x))
            for filename, x in zip(gb.groups.keys(), gb.groups)]


def create_tf_example(group, path):
    # Read the image so its encoded bytes and dimensions can be stored in the record.
    with tf.gfile.GFile(os.path.join(path, '{}'.format(group.filename)), 'rb') as fid:
        encoded_jpg = fid.read()
    encoded_jpg_io = io.BytesIO(encoded_jpg)
    image = Image.open(encoded_jpg_io)
    width, height = image.size

    filename = group.filename.encode('utf8')
    image_format = b'jpg'
    xmins = []
    xmaxs = []
    ymins = []
    ymaxs = []
    classes_text = []
    classes = []

    # Normalise every bounding box to [0, 1] and map class names to integer ids.
    for index, row in group.object.iterrows():
        xmins.append(row['xmin'] / width)
        xmaxs.append(row['xmax'] / width)
        ymins.append(row['ymin'] / height)
        ymaxs.append(row['ymax'] / height)
        classes_text.append(row['class'].encode('utf8'))
        classes.append(class_text_to_int(row['class']))

    tf_example = tf.train.Example(features=tf.train.Features(feature={
        'image/height': dataset_util.int64_feature(height),
        'image/width': dataset_util.int64_feature(width),
        'image/filename': dataset_util.bytes_feature(filename),
        'image/source_id': dataset_util.bytes_feature(filename),
        'image/encoded': dataset_util.bytes_feature(encoded_jpg),
        'image/format': dataset_util.bytes_feature(image_format),
        'image/object/bbox/xmin': dataset_util.float_list_feature(xmins),
        'image/object/bbox/xmax': dataset_util.float_list_feature(xmaxs),
        'image/object/bbox/ymin': dataset_util.float_list_feature(ymins),
        'image/object/bbox/ymax': dataset_util.float_list_feature(ymaxs),
        'image/object/class/text': dataset_util.bytes_list_feature(classes_text),
        'image/object/class/label': dataset_util.int64_list_feature(classes),
    }))
    return tf_example
def main(_):
    writer = tf.python_io.TFRecordWriter(args.output_path)
    path = os.path.join(args.image_dir)
    examples = xml_to_csv(args.xml_dir)
    grouped = split(examples, 'filename')
    for group in grouped:
        tf_example = create_tf_example(group, path)
        writer.write(tf_example.SerializeToString())
    writer.close()
    print('Successfully created the TFRecord file: {}'.format(args.output_path))
    if args.csv_path is not None:
        examples.to_csv(args.csv_path, index=None)
        print('Successfully created the CSV file: {}'.format(args.csv_path))


if __name__ == '__main__':
    tf.app.run()
import tensorflow as tf
from object_detection.utils import config_util
from object_detection.protos import pipeline_pb2
from google.protobuf import text_format
CONFIG_PATH = MODEL_PATH+'/'+CUSTOM_MODEL_NAME+'/pipeline.config'
config = config_util.get_configs_from_pipeline_file(CONFIG_PATH)
pipeline_config = pipeline_pb2.TrainEvalPipelineConfig()
with tf.io.gfile.GFile(CONFIG_PATH, "r") as f:
    proto_str = f.read()
    text_format.Merge(proto_str, pipeline_config)

# The pipeline is customised here before being written back, for example (illustrative values):
# pipeline_config.model.ssd.num_classes = 3
# pipeline_config.train_config.fine_tune_checkpoint_type = "detection"

config_text = text_format.MessageToString(pipeline_config)
with tf.io.gfile.GFile(CONFIG_PATH, "wb") as f:
    f.write(config_text)
print("""python {}/research/object_detection/model_main_tf2.py --model_dir={}/{} --pipeline_config_path={}/{}/pipeline.config --num_train_steps=5000""".format(
    APIMODEL_PATH, MODEL_PATH, CUSTOM_MODEL_NAME, MODEL_PATH, CUSTOM_MODEL_NAME))
import os
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as viz_utils
from object_detection.builders import model_builder
# Load pipeline config and build a detection model
configs = config_util.get_configs_from_pipeline_file(CONFIG_PATH)
detection_model = model_builder.build(model_config=configs['model'], is_training=False)
# Restore checkpoint
ckpt = tf.compat.v2.train.Checkpoint(model=detection_model)
ckpt.restore(os.path.join(CHECKPOINT_PATH, 'ckpt-6')).expect_partial()
@tf.function
def detect_fn(image):
    image, shapes = detection_model.preprocess(image)
    prediction_dict = detection_model.predict(image, shapes)
    detections = detection_model.postprocess(prediction_dict, shapes)
    return detections
import cv2
import numpy as np

category_index = label_map_util.create_category_index_from_labelmap(
    ANNOTATION_PATH + '/label_map.pbtxt')

# Setup capture
cap = cv2.VideoCapture(0)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

while True:
    ret, frame = cap.read()
    image_np = np.array(frame)

    # Run the detection model on the current frame.
    input_tensor = tf.convert_to_tensor(np.expand_dims(image_np, 0), dtype=tf.float32)
    detections = detect_fn(input_tensor)

    num_detections = int(detections.pop('num_detections'))
    detections = {key: value[0, :num_detections].numpy()
                  for key, value in detections.items()}
    detections['num_detections'] = num_detections

    # Detection classes must be integers for visualisation.
    detections['detection_classes'] = detections['detection_classes'].astype(np.int64)

    label_id_offset = 1
    image_np_with_detections = image_np.copy()

    viz_utils.visualize_boxes_and_labels_on_image_array(
        image_np_with_detections,
        detections['detection_boxes'],
        detections['detection_classes'] + label_id_offset,
        detections['detection_scores'],
        category_index,
        use_normalized_coordinates=True,
        max_boxes_to_draw=5,
        min_score_thresh=.5,
        agnostic_mode=False)

    # Display the annotated frame; press 'q' to stop.
    cv2.imshow('ISL detection', cv2.resize(image_np_with_detections, (800, 600)))
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
CHAPTER 6
RESULTS AND DISCUSSION
6.1 Output
Fig 6.1.1 Hello

Fig 6.1.2 No
Fig 6.1.3 Yes
6.2.2 Real-Time Processing
6.2.6 Comparison with Existing Systems
Performance chart: comparison of the proposed system with existing approaches
on accuracy, real-time processing, adaptability, and user feedback (ratings on
a 0-10 scale).
Comparative analysis with existing ISL recognition systems revealed that the
proposed system outperformed previous methodologies in terms of accuracy,
real-time processing, and adaptability. The innovative integration of Multimodal
Spatio-Temporal Graph Neural Networks (MST-GNN) significantly contributed
to the system's superior performance.
These experimental results validate the effectiveness of the Indian Sign
Language Recognition System, highlighting its accuracy, adaptability, real-time
processing, and usability in diverse scenarios. The system's robust performance
underscores its potential to revolutionize communication for the hearing-
impaired, fostering inclusivity and accessibility in digital interactions.
CHAPTER 7
CONCLUSION
7.1 Summary
new dimensions for accessible communication. As technology advances,
continuous research and development will ensure our system evolves, continuing
to empower the hearing-impaired community and fostering a more inclusive
society.
REFERENCES
[1] Wanbo Li, Hang Pu, Ruijuan Wang, “Sign Language Recognition Based on
Computer Vision”, 2021 ICAICA, pp. 919-922, 2021.
[2] Md. Nafis Saiful, Abdulla Al Isam, Hamim Ahmed Moon, Rifa Tammana
Jaman, Mitul Das, Md. Raisul Alam, Ashifur Rahman, “Real-Time Sign
Language Detection Using CNN”, 2022 ICDABI, pp. 697–701, 2022.
[4] Pratik Likhar, Dr. Rathna G N, "Indian Sign Language Translation using Deep
Learning", 2021 IEEE 9th Region 10 Humanitarian Technology Conference
(R10-HTC), 2021.
[5] Varsha M, Chitra S Nair, "Indian Sign Language Gesture Recognition Using
Deep Convolutional Neural Network", 2021 8th ICSCC, pp. 193-197, 2021.
[8] Hira Hameed, Muhammad Usman, Muhammad Zakir Khan, Amir Hussain, Hasan
Abbas, Muhammad Ali Imran, Qammer H. Abbasi, "Privacy-Preserving British Sign
Language Recognition Using Deep Learning", 2022 44th Annual International
Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), 2022.
[14] Zaw Hein, Thet Paing Htoo, Bawin Aye, Sai Myo Htet, Kyaw Zaw Ye, "Leap
Motion based Myanmar Sign Language Recognition using Machine Learning", 2021
IEEE Conference of Russian Young Researchers in Electrical and Electronic
Engineering (ElConRus), 2021.
PLAGIARISM REPORT: RE-2022-175924-plag-report
ORIGINALITY REPORT

Similarity Index: 6%
Internet Sources: 5%
Publications: 8%
Student Papers: 4%

PRIMARY SOURCES
1. umpir.ump.edu.my (Internet Source): 5%
2. Submitted to Universiti Sains Malaysia (Student Paper): 3%
3. www.jetir.org (Internet Source): 1%
4. www.cs.kent.edu (Internet Source): <1%
5. Thomas, Elizabeth, Praseeda B Nair, Sherin N John, and Merry Dominic,
"Image fusion using Daubechies complex wavelet transform and lifting wavelet
transform: A multiresolution approach", 2014 Annual International Conference
on Emerging Research Areas: Magnetics, Machines and Drives (AICERA/iCMMD),
2014 (Publication).