
International Journal of All Research Education and Scientific Methods (IJARESM),

ISSN: 2455-6211, Volume 12, Issue 6, June-2024, Available online at: www.ijaresm.com

Action Recognition with American Sign Language Using Deep Learning

Mohammed Maqdoom Jahagirdar1, Md Naveed Uddin2, Mohd Yousuf3, Dr. Mohammed Jameel Hashmi4
1,2,3 BE Student, Dept. of Computer Science Engineering, ISL Engineering College
4 HOD-CSE & Associate Professor, Dept. of Computer Science Engineering, ISL Engineering College

-----------------------------------------------------------------****************-----------------------------------------------------------------

ABSTRACT

To interact with one another, we humans need a means of communication. Specially abled people, those with speech or hearing impairments ("mute" and "deaf" people), are always reliant on some form of visual communication, and people who do not have such impairments may have difficulty communicating with those who do.

To achieve this two-way communication between a person with disabilities and a hearing person, a system that can translate hand gestures into text and speech needs to be developed. Sign language is one of the oldest and most natural forms of communication. However, because only a limited number of people know sign language, finding interpreters can be a tough job. To address this gap, we have developed a real-time fingerspelling system based on American Sign Language (ASL) that utilizes neural networks. This deep learning approach has the capability to significantly reduce communication gaps.

Sign Language Recognition (SLR) deals with identifying hand gesture movements and continues until text or speech is generated for the corresponding gestures. We use deep learning and computer vision to identify hand gestures by building a deep neural network architecture (a Convolutional Neural Network), in which the model learns to identify hand gestures from camera input over a number of epochs. Once the model successfully recognizes a gesture, the corresponding English word is generated.

A deep learning approach can help reduce communication gaps. The major stages in the system design are tracking, segmentation, gesture acquisition, feature extraction, gesture recognition, and text-to-speech conversion. The sign language recognition algorithm is trained on a dynamic dataset of everyday motions created by the authors. The trained model correctly recognizes a gesture and displays it on the screen as text.

Objective
• According to studies, more than 80% of specially abled individuals are uneducated; this system tries to bridge the gap between a hearing person and a hearing-impaired person by turning sign language into text.
• This project aims to transform communication by using deep learning to bridge the gap between sign language and spoken language. It interprets hand movements into text immediately, facilitating seamless interaction for all, regardless of hearing or speaking abilities.
• Those experiencing hearing loss can use many expressive methods, including sign language and gestures, to convey their message.
• Model training requires deep learning algorithms. The model will collect frames for gestures using the camera, train on them, and assess the accuracy for each gesture. The gesture will then be predicted in real time and translated into text.
• The aim of this project is to recognize symbolic expressions through images so that the communication gap between a hearing and a hearing-impaired person can be easily bridged by:
a. Creating data for American Sign Language and pre-processing it.
b. Training the pre-processed data with deep learning based models to perform sign language recognition and speech conversion in real time.
c. Testing the model in real-world scenarios.


INTRODUCTION

Humans can communicate with one another in a variety of ways, including physical gestures, facial expressions, and spoken words. However, those who have hearing loss are restricted to using hand gestures to communicate. People with hearing and/or speech impairments communicate using standard sign language, but not everyone is familiar with the signs and motions used in sign language, and understanding sign language and becoming familiar with all its actions takes a lot of practice. Moreover, there are no reliable, portable tools for identifying sign language.

Sign language is the communication system for those who are hard of hearing and deaf. It ranks as the sixth most used language worldwide. It is a type of communication that uses hand movements to convey ideas, and each region has its own sign language, just like spoken languages. In 2005 there were an estimated 62 million deaf people worldwide and about 200 different sign languages in use around the world, many of which have distinctive features.

ASL is the primary language of many deaf people in North America; hard-of-hearing and hearing people also use it. Hand gestures and facial expressions are used to convey this language. ASL gives the deaf community a means of communication both within the community and with the outside world. But not everyone is familiar with the signs and motions used in sign language, and understanding sign language and becoming familiar with its motions takes a lot of practice and time, since there are no reliable, portable tools for identifying or learning sign language. However, with the development of neural networks and deep learning, it is now possible to create a system that can identify objects of different categories.

In this project, our primary focus is on creating a model that can recognize hand movements and combine each motion to
form a whole word.

Related Work

In 2014, Paulo Trigueiros, Fernando Ribeiro and Luís Paulo Reis proposed a vision-based Portuguese Sign Language recognition system. The system interprets information in real time, but its main disadvantage is that it is trained to recognize only vowels.

In 2019, Parul Goyal introduced Indian Sign Language recognition using soft computing techniques. Pattern recognition in video datasets is improved using a new way to recognize dynamic gestures of ISL. Image processing techniques are used to calculate direct pixel values, hierarchical centroids, and local histograms.

In 2020, S. Reshna, A. Sanjeena and M. Jayaraju proposed "Recognition of static hand gestures of Indian sign language using CNN". In this methodology, experiments were performed on a continuous ISL dataset containing images of signs such as Advance, Across, and Afraid. The major problem faced was that very few training instances were used.

In 2020, Tazkia Mim Angona, A. S. M. Siamuzzaman Shaon, Kazi Tahmid Rashad Niloy, Tajbia Karim, Zarin Tasnim, S. M. Salim Reza and Tasmima Noushiba Mahbub proposed "Automated Bangla sign language translation system for alphabets by means of MobileNet". The proposed method uses Google's pretrained MobileNet v1 model, which performs well on the ImageNet dataset, and applies transfer learning to improve it. This model fails to recognize BSL letters across a range of backgrounds and lighting conditions.

All the above studies were specific to their country's or region's sign language. The proposed research work considers the most popular sign language, used by millions of people around the world, especially in the United States and Canada.

LITERATURE REVIEW

Previous researchers have focused on the prediction of sign language gestures to support people with hearing impairments using advanced technologies and artificial intelligence algorithms. Although much research has been conducted on SLR, there are still limitations and improvements that need to be addressed to better serve the hard-of-hearing community. This section presents a brief review of recent studies on SLR using sensor-based and vision-based deep learning approaches.


Francois et al. [1] published a paper on human posture recognition in a video sequence using methods based on 2D and 3D appearance. The work uses PCA to recognize silhouettes from a static camera and then uses a 3D posture model for recognition. This approach has the drawback of producing intermediary gestures, which may lead to ambiguity in training and therefore lower accuracy in prediction.

Barczak et al. [5] introduced a new 2D static hand gesture colour image dataset for ASL gestures. The dataset is a valuable contribution to developing more robust sign language recognition systems: by incorporating images with variations in lighting, hand postures, and individuals, the researchers address a key challenge in computer vision, generalizability.

The paper by Nandy et al. [6] splits the dataset into segments, extracts features, and classifies them using Euclidean distance and K-Nearest Neighbours. Data segmentation lets smaller, more manageable portions of the data be analysed, while feature extraction captures the characteristics relevant for classification. Finally, Euclidean distance and KNN classify new data segments based on their similarity to previously labelled data.
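As a rough illustration of this distance-based classification scheme (not the cited authors' code), the sketch below labels a feature vector with plain Euclidean distances and a K-Nearest-Neighbour majority vote; the value of k and the feature layout are assumptions.

import numpy as np

def knn_classify(query, train_features, train_labels, k=3):
    """Label a feature vector by a majority vote of its k nearest neighbours."""
    train_features = np.asarray(train_features)
    train_labels = np.asarray(train_labels)
    dists = np.linalg.norm(train_features - query, axis=1)  # Euclidean distance to each training sample
    nearest = np.argsort(dists)[:k]                          # indices of the k closest samples
    classes, counts = np.unique(train_labels[nearest], return_counts=True)
    return classes[np.argmax(counts)]                        # most common class among the neighbours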

Kumud et al. [4] describe how to perform continuous Indian Sign Language recognition. The paper proposes frame extraction from video data, pre-processing, key frame extraction followed by extraction of other features, recognition, and finally optimization. Pre-processing is done by converting the video to a sequence of RGB frames of identical dimensions. Skin colour segmentation in the HSV colour space is used to extract skin regions, and the resulting images are converted to binary form. Key frames are extracted by calculating a gradient between frames, and features are extracted from the key frames using orientation histograms. Classification is done using Euclidean, Manhattan, Chessboard, and Mahalanobis distances.
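The skin-segmentation step described above might look roughly like the OpenCV sketch below; the HSV threshold values are illustrative assumptions rather than the values used in the cited work.

import cv2
import numpy as np

def segment_skin(frame_bgr):
    """Extract a binary skin mask from a BGR frame using HSV thresholding."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 40, 60], dtype=np.uint8)     # assumed lower HSV bound for skin
    upper = np.array([25, 255, 255], dtype=np.uint8)  # assumed upper HSV bound for skin
    mask = cv2.inRange(hsv, lower, upper)             # pixels inside the range become 255
    # The cited pipeline converts the segmented frames to binary form before
    # key-frame and feature extraction; the mask above is already binary.
    return mask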

Problem Statement
Interacting with everybody in our modern culture, whether for fun or for work, is important. Throughout history, communication has had a huge impact in every domain and on how thoughts and expressions are conveyed, which has attracted researchers to bridge this gap for every living being.

Speech impairments present a challenge by hindering an individual's ability to communicate effectively through speech and hearing. People affected by this use an alternative form of communication such as sign language. Yet mastering sign language demands extensive practice, and not everyone can comprehend what the gestures in sign language indicate; the absence of a reliable, convenient tool for learning sign language makes the process very time-consuming.
Individuals with hearing or speech impairments who are proficient in sign language need a skilled interpreter to convey their thoughts effectively. This system assists people with hearing loss or speech impairments in learning and translating their sign language to help them overcome this barrier.

Fig-1: American Sign Language


Existing System:
1. Data gloves and sensor-based approaches have emerged as an alternative for SLR. While they offer potential benefits
in terms of accuracy, they come with significant drawbacks:
a. Cost and Accessibility: Data gloves can be expensive, creating a financial barrier for many potential users. This
limits the widespread adoption of such technology and hinders its ability to promote inclusivity.
b. Comfort issues: Wearing data gloves for extended periods can be uncomfortable and restrict natural hand and finger
movements. This discomfort can hinder the signing experience and potentially affect the accuracy of gesture
recognition.
2. Some existing sign language recognition systems have limitations. These systems can only recognize the 26 English
letters and solely function with static images. This means they cannot translate real-time words and sentences used in
everyday conversations.

Proposed Work:
The challenge we are tackling in sign language recognition is achieving accurate detection of hand gestures regardless of varying background settings. Previous research has faced issues with inconsistent, non-uniform backgrounds and with segmentation of the hands.

To tackle this, we developed a system that makes use of Google's MediaPipe solution and the OpenCV library. MediaPipe helps us detect key landmarks on the hand within the video frame, regardless of the background (car, home, street). These data are essential for creating our training dataset and carrying out real-time hand gesture recognition.
Once our model is trained, it can extract important features (key points) and convert these actions into words displayed as text.
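A minimal sketch of what this detection step could look like with the MediaPipe Holistic solution and OpenCV is shown below; the paper does not publish its code, so the function names, confidence thresholds, and drawing calls are illustrative assumptions.

import cv2
import mediapipe as mp

mp_holistic = mp.solutions.holistic
mp_drawing = mp.solutions.drawing_utils

def detect_landmarks(frame_bgr, holistic):
    """Run MediaPipe Holistic on one BGR webcam frame and return its results."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)   # MediaPipe expects RGB input
    return holistic.process(rgb)

cap = cv2.VideoCapture(0)
with mp_holistic.Holistic(min_detection_confidence=0.5,
                          min_tracking_confidence=0.5) as holistic:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = detect_landmarks(frame, holistic)
        # Draw the detected hand landmarks so the signer can see the tracking.
        mp_drawing.draw_landmarks(frame, results.left_hand_landmarks,
                                  mp_holistic.HAND_CONNECTIONS)
        mp_drawing.draw_landmarks(frame, results.right_hand_landmarks,
                                  mp_holistic.HAND_CONNECTIONS)
        cv2.imshow('ASL recognition', frame)
        if cv2.waitKey(10) & 0xFF == ord('q'):
            break
cap.release()
cv2.destroyAllWindows()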

The system comprises three modules that are interconnected:


Dataset Module: This part extracts landmarks from hand movements in videos and forms a thorough database for training.

Preprocessing Module: In this module, the data extracted from the videos are prepared for input into the final model.

LSTM Module: This module, built on Long Short-Term Memory networks, is trained on the processed data to recognize specific hand gestures. Once training is done, the model accepts live video input from a camera, identifies the hand movements based on the landmarks, and shows the corresponding word as text on the screen.

Fig-2 Flow of the Proposed System


METHODOLOGY

The system employs a vision-based approach for capturing signs, relying on cameras or visual sensors to capture and interpret hand movements and gestures. Given that all the signs are performed with bare hands, users are not required to use any additional tools or devices for interaction. This approach eliminates the need for artificial devices such as sensor-laden gloves or other tracking devices, allowing for a more natural and convenient user experience. By simply using their hands, users can communicate with the system seamlessly, making it both accessible and easy to use.

To accurately identify the sign gestures, translate them into text, and then convert the text to speech, our proposed system comprises three stages: data preprocessing and feature extraction, data cleaning and labelling, and gesture recognition and speech translation. Data preprocessing and feature extraction are carried out by the MediaPipe framework: features from the face, hands, and body are collected as keypoints and landmarks, using built-in data augmentation methods, from the sequence of input frames captured by a web camera.

Fig-3: Methodology — example actions ("Hello", "Thanks", "I love you") with their corresponding output screens.

Stage 1: Data preprocessing and feature extraction: For data preprocessing and feature extraction from the image, we applied a multistage pipeline from MediaPipe called MediaPipe Holistic. MediaPipe Holistic runs separate hand, face and pose component models for each webcam input frame, using a region-specific image resolution.
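Assuming the standard MediaPipe Holistic landmark counts (33 pose, 468 face and 21 landmarks per hand), the per-frame features could be flattened into a fixed-length vector roughly as follows; the exact layout used in our implementation is not reproduced here, so this is only a sketch.

import numpy as np

def extract_keypoints(results):
    """Flatten MediaPipe Holistic output into a single 1D feature vector.

    Missing components become zero vectors so every frame yields a
    vector of the same length (1662 values under these counts).
    """
    pose = (np.array([[lm.x, lm.y, lm.z, lm.visibility]
                      for lm in results.pose_landmarks.landmark]).flatten()
            if results.pose_landmarks else np.zeros(33 * 4))
    face = (np.array([[lm.x, lm.y, lm.z]
                      for lm in results.face_landmarks.landmark]).flatten()
            if results.face_landmarks else np.zeros(468 * 3))
    left = (np.array([[lm.x, lm.y, lm.z]
                      for lm in results.left_hand_landmarks.landmark]).flatten()
            if results.left_hand_landmarks else np.zeros(21 * 3))
    right = (np.array([[lm.x, lm.y, lm.z]
                       for lm in results.right_hand_landmarks.landmark]).flatten()
             if results.right_hand_landmarks else np.zeros(21 * 3))
    return np.concatenate([pose, face, left, right])  # length 1662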

Stage 2: Data cleaning and labelling: Data cleaning is essential because it avoids failed feature detection, which happens when a blurry image is submitted to the detector and results in a null entry in the dataset. When such noisy data is used for training, prediction accuracy may suffer and bias may develop. Labels are constructed for each class, and the associated frame sequences are saved to suit the received data for the subsequent stages of training, testing, and validation.
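Continuing the sketch above, the cleaned sequences could be labelled and split for training roughly as follows; the gesture names, the folder layout (30 sequences of 30 saved keypoint frames per gesture, as described in the Conclusion), and the 80/20 split are illustrative assumptions.

import os
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.keras.utils import to_categorical

DATA_PATH = 'MP_Data'                      # assumed root folder of saved keypoint frames
actions = ['hello', 'thanks', 'iloveyou']  # assumed gesture classes
label_map = {action: idx for idx, action in enumerate(actions)}

sequences, labels = [], []
for action in actions:
    for seq in range(30):                  # 30 recorded sequences per gesture
        window = np.stack([
            np.load(os.path.join(DATA_PATH, action, str(seq), f'{frame}.npy'))
            for frame in range(30)         # 30 keypoint frames per sequence
        ])
        if not np.any(window):             # data cleaning: drop all-zero (failed) sequences
            continue
        sequences.append(window)
        labels.append(label_map[action])

X = np.array(sequences)                    # shape: (num_sequences, 30, 1662)
y = to_categorical(labels).astype(int)     # one-hot label per gesture class
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)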

Stage 3: Gesture recognition and speech translation: With an LSTM model, we can detect an action from a limited number of frames by training it on the data we have already stored. The number of epochs must be chosen carefully: as the number of epochs rises, so does the time needed to train the model, and overfitting of the gesture recognition model becomes a possibility. Once the model is trained, we can use it to recognize sign language in real time using the OpenCV module.
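The paper does not list the exact network configuration, so the layer sizes, optimizer, and epoch count below are assumptions; the sketch only illustrates the kind of stacked-LSTM classifier described here, trained on the X_train/y_train arrays from the previous sketch.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

model = Sequential([
    # 30 frames per sequence, 1662 keypoint values per frame (see the Stage 1 sketch).
    LSTM(64, return_sequences=True, activation='relu', input_shape=(30, 1662)),
    LSTM(128, return_sequences=True, activation='relu'),
    LSTM(64, return_sequences=False, activation='relu'),
    Dense(64, activation='relu'),
    Dense(32, activation='relu'),
    Dense(len(actions), activation='softmax'),   # one probability per gesture class
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['categorical_accuracy'])
# More epochs mean longer training and a higher risk of overfitting, as noted above.
model.fit(X_train, y_train, epochs=200, validation_data=(X_test, y_test))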

System Design

Fig 4: High Level System Architecture.

Figure 4 shows the essential steps involved in sign language recognition using an LSTM model. Here's a breakdown of each stage:

Camera: This captures the video of the signer using sign language.

Video to Frames: The captured video is broken down into individual frames, like images in a flipbook.

Pre-processing the Frames: These frames might need adjustments to make the LSTM model's job easier (a minimal sketch follows this list). This could involve:
• Resizing: Standardizing all frames to the same size.
• Gray-scale Conversion: Converting color images to grayscale can reduce complexity.
• Background Removal: Isolating the hand region from the background for better focus.
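A minimal OpenCV sketch of these pre-processing operations is given below; note that the proposed system itself works on MediaPipe landmarks rather than raw pixels, so the threshold-based background suppression here is only an illustrative assumption.

import cv2

def preprocess_frame(frame_bgr, size=(224, 224)):
    """Resize, convert to grayscale, and roughly suppress the background."""
    resized = cv2.resize(frame_bgr, size)                 # standardize frame size
    gray = cv2.cvtColor(resized, cv2.COLOR_BGR2GRAY)      # reduce colour complexity
    # Crude background suppression via Otsu thresholding (an assumption; the
    # proposed system instead isolates the hand via MediaPipe landmarks).
    _, foreground = cv2.threshold(gray, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return foreground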

Feature Extraction: Key characteristics of the hand and its movement are extracted from the pre-processed frames. These features could be:
• Hand Landmarks: Locating key points on the hand like fingertips and palm center.
• Hand Shape: Identifying the overall hand posture (like a fist, open palm, etc.).
• Motion Features: Analyzing how hand position and shape change across frames.

Feeding Features to LSTM Model: The extracted features, like a sequence of data points, are fed into the LSTM model.

Gesture Classification: The LSTM model, trained on a large dataset of signs and their corresponding features, classifies
the sequence of features based on its understanding. The output is the recognized sign language gesture.
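Putting the earlier sketches together, the real-time recognition loop could look roughly like the following; detect_landmarks(), extract_keypoints(), mp_holistic, the trained model and the actions list are assumed to be defined as in the previous snippets, and the 0.8 confidence threshold is an assumption.

import cv2
import numpy as np

sequence, sentence = [], []
cap = cv2.VideoCapture(0)
with mp_holistic.Holistic(min_detection_confidence=0.5,
                          min_tracking_confidence=0.5) as holistic:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = detect_landmarks(frame, holistic)
        sequence.append(extract_keypoints(results))
        sequence = sequence[-30:]                          # sliding window of the last 30 frames
        if len(sequence) == 30:
            probs = model.predict(np.expand_dims(sequence, axis=0))[0]
            word = actions[int(np.argmax(probs))]
            if probs.max() > 0.8 and (not sentence or sentence[-1] != word):
                sentence.append(word)                      # append only confident, non-repeated words
        cv2.putText(frame, ' '.join(sentence[-5:]), (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
        cv2.imshow('ASL recognition', frame)
        if cv2.waitKey(10) & 0xFF == ord('q'):
            break
cap.release()
cv2.destroyAllWindows()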

CONCLUSION

Using the popular MediaPipe and LSTM frameworks, this study proposed and developed a system for American Sign Language recognition. Initially, a folder was created for the gestures, and for each gesture 30 subfolders were created; these subfolders can be thought of as video folders, and each contains 30 frames, each stored as a NumPy array of landmark values detected and extracted using the MediaPipe Holistic solution. The data was used to train the LSTM network, which yielded an accuracy of 80% on the test data. Finally, the system was tested with real-time data fed directly into the model, and the result for each gesture was displayed on the screen. There was some lag when recognizing gestures in real time. We learned that basic approaches sometimes work better than complicated ones, and we also realized the time constraints and difficulties of creating a dataset from scratch.

Future Scope:
We have the potential to create a comprehensive product tailored to assist individuals with speech and hearing impairments,
thus minimizing the communication barrier. By integrating the entire system online, users can employ their cameras for
prompt gesture recognition. This setup mirrors a Zoom call, facilitating effective communication between individuals with
and without hearing impairments. Additionally, we aim to develop a robust system capable of accurately translating speech
into sign language, further enhancing inclusivity and accessibility.

Many more frequently used dynamic gestures can be added, along with more data for each of those motions. The model is currently trained on a small number of words, but it can be expanded to train on full sentences, alphabets, and digits. Training with various skin tones, hand postures, lighting conditions, and environments would give the best results.

The system can also be integrated with mobile devices: building a mobile application would make real-time use possible, and the user could be allowed to choose the language to be recognized at any time.

Acknowledgements
We sincerely thank our college, ISL Engineering College, for giving us a platform to prepare a project on the topic "Action Recognition with American Sign Language Using Deep Learning", and we thank our HOD, Dr. Mohammed Jameel Hashmi, for giving us the opportunity and time to conduct this research. We are sincerely grateful to Dr. Mohammed Jameel Hashmi, our guide, for providing help during our research, which would have been difficult without his motivation, constant support, and valuable suggestions.

REFERENCES

[1]. Carol Neidle, Ashwin Thangali and Stan Sclaroff, "Challenges in Development of the American Sign Language
Lexicon Video Dataset (ASLLVD) Corpus", 5th Workshop on the Representation and Processing of Sign
Languages: Interactions between Corpus and Lexicon LREC 2012, 2012.
[2]. Neha Baranwal and G. C. Nandi, "Continuous dynamic Indian Sign Language gesture recognition with invariant
backgrounds by Kumud Tripathi", 2015 Conference on Advances in Computing Communications and Informatics
(ICACCI).
[3]. A. L. C. Barczak, N. H. Reyes, M. Abastillas, A. Piccio and T. Susnjak, "A new 2D static hand gesture colour image
dataset for asl gestures", Massey University, 2011.


[4]. J.-H. Sun, T.-T. Ji, S.-B. Zhang, J.-K. Yang and G.-R. Ji, "Research on the Hand Gesture Recognition Based on Deep Learning," 2018 12th International Symposium on Antennas, Propagation and EM Theory (ISAPE), 2018, pp. 1-4, doi: 10.1109/ISAPE.2018.8634348.
[5]. M. M. Zaki and S. I. Shaheen, "Sign language recognition using a combination of new vision-based features",
Pattern Recognition Letters, vol. 32, pp. 572-577, 2011.
[6]. V. I. Pavlovic, R. Sharma and T. S. Huang, "Visual interpretation of hand gestures for human-computer interaction:
a review," in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 677-695, July
1997, doi: 10.1109/34.598226.
[7]. C. Dong, M. C. Leu and Z. Yin, "American Sign Language alphabet recognition using Microsoft Kinect", Proc.
IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 44-52, 2015.
[8]. Y. Fang, J. Cheng, K. Wang and H. Lu, "Hand Gesture Recognition Using Fast Multi-Scale Analysis," Fourth
International Conference on Image and Graphics (ICIG 2007), 2007, pp. 694-698, doi: 10.1109/ICIG.2007.52.
[9]. S., Manjula & Krishnamurthy, Lakshmi & Ravichandran,Manjula. (2016). A STUDY ON OBJECT DETECTION.
[10]. "Online Hand Gesture Recognition Using OpenCV", International Journal of Emerging Technologies and Innovative
Research (www.jetir.org), ISSN:2349-5162, Vol.2,Issue 5, page no.1635-1637, May-2015.
[11]. Salian, Shashank, Dokare, Indu, Serai, Dhiren, Suresh, Aditya, Ganorkar, Pranav. Proposed system for sign language recognition. IEEE 2017 International Conference on Computation of Power, Energy, Information and Communication (ICCPEIC), Melmaruvathur, India (2017.3.22-2017.3.23).
[12]. Someshwar, Dipanshu, Bhanushali, Dharmik, Chaudhari, Vismay, Nadkarni, Swati. Implementation of Virtual
Assistant with Sign Language using Deep Learning and TensorFlow. IEEE 2020 Second International Conference on
Inventive Research in Computing Applications(ICIRCA)- Coimbatore, India (2020.7.15- 2020.7.17)
[13]. Dong, Cao & Leu, Ming& Yin, Zhaozheng. (2015). American Sign Language alphabet recognition using Microsoft
Kinect. 44-52. 10.1109/CVPRW.2015.7301347.
[14]. M. M. Islam, S. Siddiqua and J. Afnan, "Real time Hand Gesture Recognition using different algorithms based on
American Sign Language", Proc. IEEE International Conference on Imaging Vision Pattern Recognition (icIVPR),
pp. 1-6, 2017.
[15]. Tara, R.Y., P.I. Santosa, and T.B. Adji, Sign Language Recognition in Robot Teleoperation using Centroid Distance
Fourier Descriptors. International Journal of Computer Applications, 2012. 48(2).
[16]. C. Chuan, E. Regina and C. Guardino, "American Sign Language Recognition Using Leap Motion Sensor", Proc.
13th International Conference on Machine Learning and Applications (ICMLA), pp. 541-544, 2014.
[17]. Anup Nandy, Jay Shankar Prasad, Soumik Mondal, Pavan Chakraborty and G. C. Nandi, "Recognition of Isolated
Indian Sign Language Gesture in Real Time" in Communications in Computer and Information Science book series,
CCIS, vol. 70.
[18]. Ijteba Sultana, Dr. Mohd Abdul Bari, Dr. Sanjay, "Routing Performance Analysis of Infrastructure-less Wireless Networks with Intermediate Bottleneck Nodes", International Journal of Intelligent Systems and Applications in Engineering (IJISAE), ISSN: 2147-6799, Vol. 12, Issue 3, 2024, Nov 2023.
[19]. Md. Zainlabuddin, "Wearable sensor-based edge computing framework for cardiac arrhythmia detection and acute
stroke prediction”, Journal of Sensor, Volume2023.
[20]. Md. Zainlabuddin, "Security Enhancement in Data Propagation for Wireless Network”, Journal of Sensor, ISSN:
2237-0722 Vol. 11 No. 4 (2021).
[21]. Dr MD Zainlabuddin, "CLUSTER BASED MOBILITY MANAGEMENT ALGORITHMS FOR WIRELESS
MESH NETWORKS”, Journal of Research Administration, ISSN:1539-1590 | E-ISSN:2573-7104 , Vol. 5 No. 2,
(2023).
[22]. Vaishnavi Lakadaram, " Content Management of Website Using Full Stack Technologies”, Industrial Engineering
Journal, ISSN: 0970-2555 Volume 15 Issue 11 October 2022
[23]. Dr. Mohammed Abdul Bari,Arul Raj Natraj Rajgopal, Dr.P. Swetha ,” Analysing AWSDevOps CI/CD Serverless
Pipeline Lambda Function's Throughput in Relation to Other Solution”, International Journal of Intelligent Systems
and Applications in Engineering , JISAE, ISSN:2147-6799, Nov 2023, 12(4s), 519–526.
[24]. Ijteba Sultana, Mohd Abdul Bari and Sanjay,” Impact of Intermediate per Nodes on the QoS Provision in Wireless
Infrastructure less Networks”, Journal of Physics: Conference Series, Conf. Ser. 1998 012029, CONSILIO Aug 2021
[25]. M.A. Bari, Sunjay Kalkal, Shahanawaj Ahamad," A Comparative Study and Performance Analysis of Routing
Algorithms”, in 3rd International Conference ICCIDM, Springer - 978- 981-10-3874-7_3 Dec (2016)
[26]. Mohammed Rahmat Ali, BIOMETRIC: AN e-AUTHENTICATION SYSTEM TRENDS AND FUTURE
APLLICATION”, International Journal of Scientific Research in Engineering (IJSRE), Volume1, Issue 7, July 2017
[27]. Mohammed Rahmat Ali, BYOD.... A systematic approach for analyzing and visualizing the type of data and
information breaches with cyber security”, NEUROQUANTOLOGY, Volume20, Issue 15, November 2022


[28]. Mohammed Rahmat Ali, Computer Forensics -An Introduction of New Face to the Digital World, International
Journal on Recent and Innovation Trends in Computing and Communication, ISSN: 2321-8169- 453 – 456, Volume:
5 Issue: 7
[29]. Mohammed Rahmat Ali, Digital Forensics and Artificial Intelligence ...A Study, International Journal of Innovative
Science and Research Technology, ISSN:2456-2165, Volume: 5 Issue:12.
[30]. Mohammed Rahmat Ali, Usage of Technology in Small and Medium Scale Business, International Journal of
Advanced Research in Science & Technology (IJARST), ISSN:2581-9429, Volume: 7 Issue:1, July 2020.
[31]. Mohammed Rahmat Ali, Internet of Things (IOT) Basics - An Introduction to the New Digital World, International
Journal on Recent and Innovation Trends in Computing and Communication, ISSN: 2321-8169- 32-36, Volume: 5
Issue: 10
[32]. Mohammed Rahmat Ali, Internet of things (IOT) and information retrieval: an introduction, International Journal of
Engineering and Innovative Technology (IJEIT), ISSN: 2277-3754, Volume: 7 Issue: 4, October 2017.
[33]. Mohammed Rahmat Ali, How Internet of Things (IOT) Will Affect the Future - A Study, International Journal on Future Revolution in Computer Science & Communication Engineering, ISSN: 2454-4248, pp. 74-77, Volume: 3, Issue: 10, October 2017.
[34]. Mohammed Rahmat Ali, ECO Friendly Advancements in computer Science Engineering and Technology,
International Journal on Scientific Research in Engineering (IJSRE), Volume: 1 Issue: 1, January 2017
[35]. Mr. Pathan Ahmed Khan, Dr. M.A Bari, Impact of Emergence with Robotics at Educational Institution and
Emerging Challenges”, International Journal of Multidisciplinary Engineering in Current Research (IJMEC), ISSN:
2456-4265, Volume 6, Issue 12, December 2021, Page 43-46
[36]. Shahanawaj Ahamad, Mohammed Abdul Bari, Big Data Processing Model for Smart City Design: A Systematic
Review “, VOL 2021: ISSUE 08 IS SN: 0011-9342; Design Engineering (Toronto) Elsevier SCI Oct: 021
[37]. Syed Shehriyar Ali, Mohammed Sarfaraz Shaikh, Syed Safi Uddin, Dr. Mohammed Abdul Bari, “Saas Product
Comparison and Reviews Using Nlp”, Journal of Engineering Science (JES), ISSN NO:0377-9254, Vol 13, Issue 05,
MAY/2022
[38]. Mohammed Abdul Bari, Shahanawaj Ahamad, Mohammed Rahmat Ali,” Smartphone Security and Protection
Practices”, International Journal of Engineering and Applied Computer Science (IJEACS) ; ISBN: 9798799755577
Volume: 03, Issue: 01, December 2021 (International Journal,U K) Pages 1-6
[39]. M.A. Bari& Shahanawaj Ahamad, “Managing Knowledge in Development of Agile Software”, in International
Journal of Advanced Computer Science & Applications (IJACSA), ISSN: 2156-5570, Vol: 2, No: 4, pp: 72-76, New
York, U.S.A., April 2011
[40]. Meer Tauseef Ali, Dr. Syed Asadullah Hussaini, Dr. S K Yadav” Automated Fake News Detection for societal
benefit using Hybrid Deep Neural Network”, Journal of Harbin Engineering University, Vol. 44 No. 7 (2023): Issue
7.
[41]. Sk Nishanth Anjum1 , Dr. Syed Asadullah Hussaini, “An Outdoor Wearable Assistive System Powered By CNN For
Blind”, International Journal Of Multidisciplinary Engineering In Current Research, ISSN 2456-4265, Volume 8,
Issue 9 Sep 2023.
[42]. Mohammad S. Qaseem, Dr. A. Govardhan, S. Nasira Tabassum, Syed.Asadullah Hussaini,” Data warehouse & Data
Mining logical design Implementation” , International Journal of Scientific and Engineering Research , ISSN Online
2229-5518, Volume 3, Issue 12, December 2012.
[43]. AYESHA SIDDIQUA, AYESHA FATIMA, TAHNIYATH BEGUM,Dr. SYED ASADULLAH HUSSAINI , “Ml –
Based Diabetes Foretell Using Svm & Logistic Regression in Healthcare ”.Mathematical Statistician and
Engineering Applications, ISSN: 2094-0343 , Vol. 72 No. 1 (2023).
[44]. Syed Asadullah Hussaini,Dr. S. K. Yadav ,” An approach in empherical in Medical Image Segmentation using
Deformable Models- State of the Art”, journal of engineering sciences, ISSN No 0377-9254 , Volume , Issue 2 feb
2019.
[45]. Mahaboob Sharief Shaik, Syed Asadullah Hussaini, Prasadu Peddy, “Context Aware Smart Learning: Analysis and
Research issues”, International Journal of Enhanced Research in Management & Computer Applications, ISSN:
2319-7471, Vol. 10 Issue 11, November, 2021
