Design of a Smart Glove System for the Visually Impaired
ABSTRACT:
Locating objects of daily use is a strenuous task for the visually
impaired. The objective of this paper is to design a smart glove, using
Deep Neural Networks (DNN) and YOLOv5, which guides the hand of the
visually impaired to the desired object in an indoor environment. The
smart glove has micro-vibrating motors that are used to guide the
user's hand. The palm of the glove carries a Universal Serial Bus (USB)
camera which feeds real-time video to the system for processing; the
camera also has a built-in microphone. The user vocally commands the
system to identify the desired object. The camera then detects the
object using the DNN. Once the object is tagged, object tracking
begins. Based on the relative position of the camera and the object,
the micro-vibrating motors vibrate to guide the user's hand in the
required direction. An additional feature differentiates similar
objects by color.
Keywords: OpenCV Deep Neural Network, Micro-Vibrating Motors,
Smart Glove, Visually Impaired, YOLOv5
INTRODUCTION:
The world, as we know it, is designed with sighted people in
mind. This is because vision is the most significant way of receiving
information from the surrounding environment. As a consequence of
this, a visually impaired person requires assistance for everyday
chores. It is extremely difficult for blind people to navigate in an
environment that is not habitual to them. Thus a simple task of
locating and reaching the desired object which involves basic
movements of the arm requires the blind to grope around to reach the
required object. Recent technological advancements aid the blind
when navigating in an outdoor environment. Indoors, although, is a
whole different scenario. Not much has been done which helps blind
people locate objects of everyday use in an indoor environment.
Visually impaired people have to retain the positions of the objects in
an indoor environment in their memory. If accidents are to be
prevented, objects like tables, chairs and other furniture must not be
moved without warning. All items should be in their designated
locations and the walkways have to be kept clear. Each member of the
household has to be careful regarding this. The implemented system
guides the blind to the desired object in an indoor environment. A
traditional aid like the white cane only allows the blind to realize
that there is an object in their vicinity, but does not help them
identify it. They have to feel the object with their hands to
understand what it is. Using the smart glove, all one has to do is
command the glove. The glove locates the object and guides the user to
it using the micro-vibrating motors. Speech input and
output technology are widely used in devices which aid the blind.
Keyword extraction plays a major part in this. Analysis of the
flat-type vibration motor is also important, as it shows how useful
such motors are in the implementation of the glove. A limitation of the
white cane is that it does not allow for object identification. This is
achieved here using multiple-object detection with Deep Neural Networks
(DNN). Object tracking also becomes a necessity because the glove is in
continuous motion, and a framework for this has been proposed. Other navigation
aids for the visually impaired have been previously implemented, but
for obstacle avoidance only. Another design makes use of SONAR
and camera for obstacle avoidance and provides vibration and
auditory feedback. Previous systems to aid the blind make use of
image processing and ultrasonic sensors for obstacle avoidance only.
No system exists to guide users to the objects they require. Even the
smart gloves implemented previously have been used as
collision-avoidance systems. Our glove aims to fill this void in the
domain. The proposed prototype solves the problem of locating and
navigating to objects in an indoor environment. This is achieved using
micro-vibrating motors embedded in the glove. Unlike previously
designed systems to assist the blind (which were used only for obstacle
avoidance), the smart glove enables the visually impaired to locate and
navigate to the required object using the micro-vibrating motors and a
speaker.
LITERATURE SURVEY:
The literature survey is a crucial stage in the project life cycle,
and its importance cannot be overestimated. The information collected
through websites is analyzed to clearly understand the requirements.
The purpose of this literature survey is to derive a new solution by
understanding the failings and inadequacies of the present systems. The
survey is carried out in the initial stages of the work, when the need
for this application is determined. This chapter contains a study of
previous technologies and their drawbacks, a comparison between those
technologies and the technology adopted in this work, and a comparison
between previous designs and the proposed design.
“Speech-to-text and speech-to-speech summarization of
spontaneous speech”
This paper presents techniques based on speech unit extraction and
concatenation. For the former case, a two-stage summarization method
consisting of important-sentence extraction and word-based sentence
compaction is investigated. These methods are applied to the
summarization of unrestricted-domain spontaneous presentations and
evaluated by objective and subjective measures. It was confirmed that
the proposed methods are effective in spontaneous speech
summarization [1].
“Smart Cane for Blinds”
This paper presents an intelligent shared control system applied to a
smart cane. The smart cane is a mobile platform that aids a visually
impaired person in navigating through an obstacle-filled environment.
It consists of two independently driven wheels that can be steered to
follow control commands. The intelligent shared control system consists
of three basic control modules and a decision maker that selects which
action to apply. The first basic module is a continuous fuzzy
controller that drives the cane from a starting point to an end point.
The second is a discrete event controller that avoids obstacles while
walking, based on data measured by the sensors. The third module
handles discrete commands from the user, such as Go Straight, Turn
Left, Turn Right and Stop. The user interacts with the cane through a
small joystick, which can be pushed in any of the above four
directions. All these modules are arbitrated by a decision-maker module
so that only one action is selected at any instant of real time [2].
“Smart Gloves for Blinds”
This paper presents a smart glove that helps blind people walk and
estimate the distance to obstacles. The ultrasonic sensor is very
sensitive and triggers quickly when it detects an obstacle. The
limitation of this project is that the ultrasonic sensor can only
detect obstacles; it cannot capture their shape. For the hardware,
ultrasonic sensors detect obstacles in front of the user and send a
signal to an Arduino UNO, which acts as the microcontroller. The
microcontroller then processes the data and drives a servo motor, which
guides the user through vibrating feedback. For the software, the
circuit is designed using the Fritzing software, and the program is
written in the Arduino IDE using the Arduino libraries [3].
“Analysis of flat-type vibration motor for mobile phone”
In order to simulate the motion transient characteristics, we
developed a new method of calculating the torque of the flat-type
vibration motor using a two-dimensional finite element model. We
measured the load torque of the vibration motor, which is used for the
motion transient analysis. The simulated vibration characteristic is
compared with experimental values [4].
“Smart walking cane for the visually challenged”
Traditionally, visually challenged individuals employ the
white cane to aid their mobility outdoors, which provides very
limited utility. In order to improve the safety of visually
challenged users and enhance their awareness of their
surroundings while navigating in outdoor environments, a
smart device is needed. In this paper, a smart walking cane for
the visually challenged has been presented. The proposed
device can detect obstacles as well as terrain changes in the
user's path. A conventional walking cane forms the main
frame of the device, upon which ultrasonic sensors are
mounted at appropriate locations to detect obstacles, steps and
pits in the path of the user. Additionally, a provision to
indicate the presence of puddles and slippery surfaces in the
user's path has also been included in the device. The presence
of these obstacles is notified to the user by the means of voice
recordings played via earphones or through haptic feedback,
provided using vibration motors placed on the hand support of
the stick. The smart walking cane further employs GPS and
GSM modules which can be used to send a distress signal to
the user's kin along with the user's location upon being
activated by a simple press of a button. The device is
lightweight and is powered by a rechargeable battery. The
overall design of the device ensures accuracy, energy
efficiency and easy portability [5].
EXISTING SYSTEM:
The paper presents an intelligent shared control system applied
to a smart cane. The cane aids visually impaired persons in moving
about safely. It can be used in two modes depending on the environment.
A self-autonomy behavior is added to ensure the user's safety while
walking in obstacle-filled environments. Independent basic control
modules are developed to perform goal seeking, obstacle avoidance and
human intervention. The final control action is selected by applying
decision-maker rules. The results obtained in simulation affirmed the
proposed framework for the shared control methodology. These results
encouraged the authors to implement the real-time application in the
future using an embedded controller (microcontroller). The system
developed in this work proves that object detection can be deployed on
different platforms as needed. This system is a good match for the need
for surveillance cameras with object-detection notification; an example
is detecting firearms or animals in certain organizations or
institutions. The strength of this system is that it can be trained on
any type of object to be detected, for different situations. An
extension to this work would be to adapt the system to a low-cost board
and to that board's architecture in order to get better performance.
The next step of this work will be to enhance the embedded platform's
performance. This enhancement can be achieved through parallelism:
several processors can run separate tasks simultaneously to improve
performance and response time. The existing application deals with
real-time system implementation, and the results give an indication of
the cases where object detection applications may be more complex and
where they may be simpler.
PROPOSED SYSTEM:
In the proposed system, the user speaks the name of the desired
object, and this name is sent to the microcontroller. Once the object
is detected in the camera input, the vibration motors vibrate to
indicate the object's direction and guide the user's hand toward it.
This smart glove is
designed to aid the blind in locating the desired object. The
whole system functions as an independent stand-alone unit.
The system is analyzed thoroughly in order to minimize
power consumption as the system is running on battery power.
The stand-alone unit comprises the microcontroller, which
has a Universal Serial Bus (USB) camera with a built-in
microphone. The whole process of the proposed solution
starts with the user vocally communicating to the system
about the object being looked for. The audio input from the
user received by the microphone connected to the
microcontroller is converted into text using a speech-to-text
module for Python. The name of
the object needed by the user is extracted from the vocal
command using a keyword extraction technique. This extracted
keyword is passed to the object detection algorithm running
DNN. The DNN used for the smart glove is implemented
using the Caffe framework. The DNN running on
microcontroller processes the real-time video and locates the
required object and tags it. The object tracking algorithm then
takes control to improve the frame rate. The object, once
located, must always remain in the center of the frame relative
to the glove. If it deviates from the center due to the motion of
the glove, the micro-vibrating motors guide the user to move
his hand in such a way that the object is brought back to the
center of the frame. As long as the object remains in the center, the
micro-vibrating motor keeps vibrating to guide the user forward. The
micro-vibrating motors are connected to the microcontroller.
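The keyword-extraction and hand-guidance steps described above can be sketched in Python. This is only an illustrative sketch, not the authors' implementation: the object vocabulary, frame size, pixel tolerance and direction names are all assumptions made for the example.

```python
# Hypothetical sketch of two steps of the pipeline:
# (1) extract the object keyword from the transcribed voice command;
# (2) map the detected object's deviation from the frame centre to a
#     direction cue for the micro-vibrating motors.

KNOWN_OBJECTS = {"bottle", "cup", "phone", "keys"}  # assumed vocabulary

def extract_keyword(command):
    """Return the first known object name found in the spoken command."""
    for word in command.lower().split():
        if word in KNOWN_OBJECTS:
            return word
    return None

def motor_cue(bbox_center, frame_size=(640, 480), tolerance=40):
    """Choose a vibration cue from the object's position in the frame.

    While the object stays within `tolerance` pixels of the frame centre
    the cue is "forward"; otherwise the cue steers the hand back so the
    object returns to the centre of the frame.
    """
    cx, cy = frame_size[0] // 2, frame_size[1] // 2
    dx, dy = bbox_center[0] - cx, bbox_center[1] - cy
    if abs(dx) <= tolerance and abs(dy) <= tolerance:
        return "forward"
    if abs(dx) > abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"
```

For example, the command "find my bottle" yields the keyword "bottle", and an object centred at (500, 240) of a 640x480 frame produces a "right" cue.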
The microcontroller (Model B) is used for object detection, running a
YOLOv5 model through the OpenCV DNN module. A DNN is a class of
machine-learning algorithms that uses a series of multiple layers
of non-linear processing units. Each subsequent layer uses the
previous layer as an input. The system camera module can be
used to take high-definition video. It can be used for image
classification and pattern recognition. This single-board
computer has wireless LAN, making it an ideal solution for
power-intensive applications. The microcontroller is where all
processing occurs (speech recognition, object detection and object
tracking) and it also controls the micro-vibrating motors. The
entire system can be powered using any standard power bank
or Li-Po (Lithium Polymer) batteries which have a power
rating that is sufficient for the microcontroller.
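The description of a DNN as a stack of non-linear layers, each consuming the previous layer's output, can be illustrated with a toy forward pass. This is purely pedagogical; the glove's actual network is a trained YOLOv5/Caffe model, and the weights below are invented:

```python
# Toy feed-forward network: each layer applies a linear map followed by a
# ReLU non-linearity, and its output becomes the next layer's input.

def relu(values):
    return [max(0.0, v) for v in values]

def dense(inputs, weights, biases):
    """One fully connected layer: out_j = relu(sum_i w[j][i] * x[i] + b[j])."""
    return relu([
        sum(w * x for w, x in zip(row, inputs)) + b
        for row, b in zip(weights, biases)
    ])

def forward(x, layers):
    """Pass x through the stack of (weights, biases) layers in order."""
    for weights, biases in layers:
        x = dense(x, weights, biases)
    return x

# Two invented layers: 2 inputs -> 3 hidden units -> 1 output.
layers = [
    ([[1.0, -1.0], [0.5, 0.5], [-1.0, 1.0]], [0.0, 0.1, 0.0]),
    ([[1.0, 1.0, 1.0]], [0.0]),
]
```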
BLOCK DIAGRAM:
FLOW CHART:
MODULES:
1. Image training
2. Model creation
3. Object direction analysis
4. Speech to text
5. Python and Arduino integration
6. Motor operating module
ARCHITECTURE:
SYSTEM ENVIRONMENT:
Python is a general-purpose interpreted, interactive, object-
oriented, and high-level programming language. It was
created by Guido van Rossum during 1985- 1990. Like Perl,
Python source code is also available under the GNU General
Public License (GPL). This section gives enough understanding of
the Python programming language.
Why to Learn Python?
Python is a high-level, interpreted, interactive and object-
oriented scripting language. Python is designed to be highly
readable. It uses English keywords frequently, whereas other
languages use punctuation, and it has fewer syntactical
constructions than other languages.
Python is essential for students and working professionals who want
to become great software engineers, especially when working in the
web development domain. Some key advantages of learning Python are
listed below:
Python is Interpreted − Python is processed at runtime
by the interpreter. You do not need to compile your
program before executing it. This is similar to PERL and
PHP.
Python is Interactive − You can actually sit at a Python
prompt and interact with the interpreter directly to write
your programs.
Python is Object-Oriented − Python supports Object-
Oriented style or technique of programming that
encapsulates code within objects.
Python is a Beginner's Language − Python is a great
language for the beginner-level programmers and
supports the development of a wide range of applications
from simple text processing to WWW browsers to
games.
Characteristics of Python
Following are important characteristics of Python
Programming −
It supports functional and structured programming
methods as well as OOP.
It can be used as a scripting language or can be compiled
to byte-code for building large applications.
It provides very high-level dynamic data types and
supports dynamic type checking.
It supports automatic garbage collection.
It can be easily integrated with C, C++, COM, ActiveX,
CORBA, and Java.
Hello World using Python
To give a first taste of Python, here is the small conventional
Hello World program:
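A minimal version of that program:

```python
# The conventional first Python program: print a greeting to standard output.
message = "Hello, World!"
print(message)
```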
Features of Python
As mentioned before, Python is one of the most widely used languages
on the web. A few of its notable features are listed here:
Easy-to-learn − Python has few keywords, simple
structure, and a clearly defined syntax. This allows the
student to pick up the language quickly.
Easy-to-read − Python code is more clearly defined and
visible to the eyes.
Easy-to-maintain − Python's source code is fairly easy-
to-maintain.
A broad standard library − Python's bulk of the library
is very portable and cross-platform compatible on UNIX,
Windows, and Macintosh.
Interactive Mode − Python has support for an
interactive mode which allows interactive testing and
debugging of snippets of code.
Portable − Python can run on a wide variety of hardware
platforms and has the same interface on all platforms.
Extendable − You can add low-level modules to the
Python interpreter. These modules enable programmers
to add to or customize their tools to be more efficient.
Databases − Python provides interfaces to all major
commercial databases.
GUI Programming − Python supports GUI applications
that can be created and ported to many system calls,
libraries and windows systems, such as Windows MFC,
Macintosh, and the X Window system of Unix.
Scalable − Python provides a better structure and support
for large programs than shell scripting.
History of Python
Python was developed by Guido van Rossum in the late
eighties and early nineties at the National Research Institute
for Mathematics and Computer Science in the Netherlands.
Python is derived from many other languages, including ABC,
Modula-3, C, C++, Algol-68, SmallTalk, and Unix shell and
other scripting languages.
Python is copyrighted. Like Perl, Python source code is now
available under the GNU General Public License (GPL).
Python is now maintained by a core development team at the
institute, although Guido van Rossum still holds a vital role in
directing its progress.
Variables are nothing but reserved memory locations to
store values. This means that when you create a variable
you reserve some space in memory.
Based on the data type of a variable, the interpreter
allocates memory and decides what can be stored in the
reserved memory. Therefore, by assigning different data
types to variables, you can store integers, decimals or
characters in these variables.
Assigning Values to Variables
Python variables do not need explicit declaration to
reserve memory space. The declaration happens
automatically when you assign a value to a variable. The
equal sign (=) is used to assign values to variables.
The operand to the left of the = operator is the name of
the variable and the operand to the right of the = operator
is the value stored in the variable. For example −
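The missing example can be filled in with a few standard assignments (the variable names are illustrative):

```python
# Assignment reserves memory automatically; the type follows the value.
counter = 100       # an integer
miles = 1000.0      # a floating-point number
name = "John"       # a string

print(counter, miles, name)
```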
A module allows you to logically organize your Python
code. Grouping related code into a module makes the
code easier to understand and use. A module is a Python
object with arbitrarily named attributes that you can bind
and reference.
Simply, a module is a file consisting of Python code. A
module can define functions, classes and variables. A
module can also include runnable code.
Example
The Python code for a module named aname normally
resides in a file named aname.py. Here's an example of a
simple module, support.py
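The support.py listing itself is not shown here; a plausible minimal version (the function name is an assumption) is:

```python
# support.py -- a simple module defining one function.

def print_func(par):
    """Print a greeting for the given name and return the message."""
    message = "Hello : " + par
    print(message)
    return message
```

Another script would then use it with `import support` followed by `support.print_func("Zara")`.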
Python has been an object-oriented language since it existed.
Because of this, creating and using classes and objects are
downright easy. This chapter helps you become an expert in
using Python's object-oriented programming support.
If you do not have any previous experience with object-
oriented (OO) programming, you may want to consult an
introductory course on it or at least a tutorial of some sort so
that you have a grasp of the basic concepts.
However, here is small introduction of Object-Oriented
Programming (OOP) to bring you at speed −
Overview of OOP Terminology
Class − A user-defined prototype for an object that
defines a set of attributes that characterize any object of
the class. The attributes are data members (class
variables and instance variables) and methods, accessed
via dot notation.
Class variable − A variable that is shared by all instances
of a class. Class variables are defined within a class but
outside any of the class's methods. Class variables are not
used as frequently as instance variables are.
Data member − A class variable or instance variable that
holds data associated with a class and its objects.
Function overloading − The assignment of more than one
behavior to a particular function. The operation
performed varies by the types of objects or arguments
involved.
Instance variable − A variable that is defined inside a
method and belongs only to the current instance of a
class.
Inheritance − The transfer of the characteristics of a
class to other classes that are derived from it.
Instance − An individual object of a certain class. An
object obj that belongs to a class Circle, for example, is
an instance of the class Circle.
Instantiation − The creation of an instance of a class.
Method − A special kind of function that is defined in a
class definition.
Object − A unique instance of a data structure that's
defined by its class. An object comprises both data
members (class variables and instance variables) and
methods.
Operator overloading − The assignment of more than
one function to a particular operator.
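As a short illustration of operator overloading, a hypothetical Vector class can define __add__ so that the + operator works on its instances:

```python
# Operator overloading: defining __add__ makes `+` work on our own class.
class Vector:
    def __init__(self, a, b):
        self.a = a
        self.b = b

    def __repr__(self):
        return "Vector(%d, %d)" % (self.a, self.b)

    def __add__(self, other):
        # `v1 + v2` now calls this method instead of raising a TypeError.
        return Vector(self.a + other.a, self.b + other.b)

v = Vector(2, 10) + Vector(5, -2)
print(v)
```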
Creating Classes
The class statement creates a new class definition. The name
of the class immediately follows the keyword class followed
by a colon as follows −
class ClassName:
'Optional class documentation string'
class_suite
The class has a documentation string, which can be
accessed via ClassName.__doc__.
The class_suite consists of all the component statements
defining class members, data attributes and functions.
Example
Following is the example of a simple Python class −
class Employee:
   'Common base class for all employees'
   empCount = 0

   def __init__(self, name, salary):
      self.name = name
      self.salary = salary
      Employee.empCount += 1

   def displayCount(self):
      print("Total Employee %d" % Employee.empCount)

   def displayEmployee(self):
      print("Name : ", self.name, ", Salary: ", self.salary)
The variable empCount is a class variable whose value is
shared among all instances of this class. It can be
accessed as Employee.empCount from inside the class or
outside the class.
The first method, __init__(), is a special method called the
class constructor or initialization method; Python calls it when
you create a new instance of the class.
You declare other class methods like normal functions
with the exception that the first argument to each method
is self. Python adds the self argument to the list for you;
you do not need to include it when you call the methods.
Creating Instance Objects
To create instances of a class, you call the class using class
name and pass in whatever arguments its __init__ method
accepts.
# This would create the first object of the Employee class
emp1 = Employee("Zara", 2000)
# This would create the second object of the Employee class
emp2 = Employee("Manni", 5000)
Accessing Attributes
You access an object's attributes using the dot operator on the
object. A class variable is accessed using the class name, as
follows −
emp1.displayEmployee()
emp2.displayEmployee()
print("Total Employee %d" % Employee.empCount)
The Python standard for database interfaces is the Python DB-
API. Most Python database interfaces adhere to this standard.
You can choose the right database for your application.
Python Database API supports a wide range of database
servers such as −
GadFly
mSQL
MySQL
PostgreSQL
Microsoft SQL Server 2000
Informix
Interbase
Oracle
Sybase
Here is the list of available Python database interfaces: Python
Database Interfaces and APIs. You must download a separate
DB API module for each database you need to access. For
example, if you need to access an Oracle database as well as a
MySQL database, you must download both the Oracle and the
MySQL database modules.
The DB API provides a minimal standard for working with
databases using Python structures and syntax wherever
possible. This API includes the following −
Importing the API module.
Acquiring a connection with the database.
Issuing SQL statements and stored procedures.
Closing the connection
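These four steps are the same for any DB-API driver. As a self-contained demonstration, the standard-library sqlite3 module (which also follows DB-API 2.0 and needs no running server) shows the pattern; the table and column names are illustrative:

```python
# Step 1: import the API module (sqlite3 here; MySQLdb is used the same way).
import sqlite3

# Step 2: acquire a connection (here an in-memory database).
conn = sqlite3.connect(":memory:")
cursor = conn.cursor()

# Step 3: issue SQL statements through a cursor.
cursor.execute("CREATE TABLE EMPLOYEE (FIRST_NAME TEXT, INCOME REAL)")
cursor.execute("INSERT INTO EMPLOYEE VALUES (?, ?)", ("Zara", 2000.0))
cursor.execute("SELECT FIRST_NAME, INCOME FROM EMPLOYEE")
row = cursor.fetchone()

# Step 4: commit the transaction and close the connection.
conn.commit()
conn.close()
```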
We will learn these concepts using MySQL, so let us talk
about the MySQLdb module.
What is MySQLdb?
MySQLdb is an interface for connecting to a MySQL
database server from Python. It implements the Python
Database API v2.0 and is built on top of the MySQL C API.
To check whether the driver is available, try to import it:
import MySQLdb
If this produces the following result, it means the MySQLdb
module is not installed −
Traceback (most recent call last):
File "test.py", line 3, in <module>
import MySQLdb
ImportError: No module named MySQLdb
To install MySQLdb module, use the following command −
For Ubuntu, use the following command -
$ sudo apt-get install python-pip python-dev libmysqlclient-
dev
For Fedora, use the following command -
$ sudo dnf install python python-devel mysql-devel redhat-
rpm-config gcc
For Python command prompt, use the following command -
pip install MySQL-python
Note − Make sure you have root privileges to install the above
modules.
Database Connection
Before connecting to a MySQL database, make sure of the
following −
You have created a database TESTDB.
You have created a table EMPLOYEE in TESTDB.
This table has fields FIRST_NAME, LAST_NAME,
AGE, SEX and INCOME.
User ID "testuser" and password "test123" are set to
access TESTDB.
Python module MySQLdb is installed properly on your
machine.
You have gone through a MySQL tutorial to understand
MySQL basics.
Python provides various options for developing graphical user
interfaces (GUIs). The most important ones are listed below.
Tkinter − Tkinter is the Python interface to the Tk GUI
toolkit shipped with Python. We will look at this option
in this chapter.
wxPython − This is an open-source Python interface for
wxWindows http://wxpython.org.
JPython − JPython is a Python port for Java which
gives Python scripts seamless access to Java class
libraries on the local machine http://www.jython.org.
There are many other interfaces available, which you can find
on the net.
Tkinter Programming
Tkinter is the standard GUI library for Python. Python when
combined with Tkinter provides a fast and easy way to create
GUI applications. Tkinter provides a powerful object-oriented
interface to the Tk GUI toolkit.
Creating a GUI application using Tkinter is an easy task. All
you need to do is perform the following steps −
Import the Tkinter module.
Create the GUI application main window.
Add one or more of the above-mentioned widgets to the
GUI application.
Enter the main event loop to take action against each
event triggered by the user.
Example
#!/usr/bin/python
import Tkinter
top = Tkinter.Tk()
# Code to add widgets will go here...
top.mainloop()
This would create the following window −
Tkinter Widgets
Tkinter provides various controls, such as buttons, labels and
text boxes used in a GUI application. These controls are
commonly called widgets.
There are currently 15 types of widgets in Tkinter. Some of these
widgets are presented, with a brief description, in the following
list −

Frame − The Frame widget is used as a container widget to
organize other widgets.

Label − The Label widget is used to provide a single-line
caption for other widgets. It can also contain images.

Listbox − The Listbox widget is used to provide a list of options
to a user.

Menubutton − The Menubutton widget is used to display menus in
your application.

Menu − The Menu widget is used to provide various commands to a
user. These commands are contained inside Menubutton.

Message − The Message widget is used to display multiline text
fields for accepting values from a user.

Radiobutton − The Radiobutton widget is used to display a number
of options as radio buttons; the user can select only one option
at a time.
PROCESSOR
Processor is the heart of an embedded system. It is the basic
unit that takes inputs and produces an output after processing
the data. For an embedded system designer, it is necessary to
have the knowledge of both microprocessors and
microcontrollers.
Processors in a System
A processor has two essential units −
Program Flow Control Unit (CU)
Execution Unit (EU)
The processors used in embedded systems are of the following types −
o Microcontroller
o Embedded Processor
o Media Processor
Microcontroller
A microcontroller is a single-chip VLSI unit (also
called microcomputer) which, although having limited
computational capabilities, possesses enhanced input/output
capability and a number of on-chip functional units.
A microcontroller integrates the CPU, RAM and ROM on a single
chip. The comparison below distinguishes it from a microprocessor −

Microprocessor − RAM, ROM, I/O ports and timers can be added
externally and can vary in number.

Microcontroller − RAM, ROM, I/O ports and timers cannot be added
externally; these components are embedded together on the chip and
are fixed in number.
Compiler
A compiler is a computer program (or a set of programs) that
transforms the source code written in a programming
language (the source language) into another computer
language (normally binary format). The most common reason
for conversion is to create an executable program. The name
"compiler" is primarily used for programs that translate source
code from a high-level programming language to a low-level
language (e.g., assembly language or machine code).
Cross-Compiler
If the compiled program can run on a computer with a
different CPU or operating system than the computer on
which the compiler compiled the program, then that compiler
is known as a cross-compiler.
Decompiler
A program that can translate a program from a low-level
language to a high-level language is called a decompiler.
Language Converter
A program that translates programs written in different
high-level languages is normally called a language translator,
source-to-source translator, or language converter.
A compiler is likely to perform the following operations −
Preprocessing
Parsing
Code generation
Code optimization
Assemblers
An assembler is a program that takes basic computer
instructions (called assembly language) and converts them
into a pattern of bits that the computer's processor can use to
perform its basic operations. An assembler creates object
code by translating assembly instruction mnemonics into
opcodes, resolving symbolic names to memory locations.
Assembly language uses a mnemonic to represent each low-
level machine operation (opcode).
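The two jobs described above, translating mnemonics into opcodes and resolving symbolic names to memory locations, can be sketched as a small two-pass assembler. The instruction set, opcode values, and addresses below are hypothetical, for illustration only, not a real ISA:

```python
# A two-pass assembler sketch: pass 1 resolves symbolic names to memory
# locations, pass 2 translates mnemonics into opcode bytes.
# The instruction set, opcodes, and addresses are hypothetical.

OPCODES = {"LOAD": 0x01, "ADD": 0x02, "STORE": 0x03, "HALT": 0xFF}

def assemble(lines):
    """lines: list of (mnemonic, symbolic-operand-or-None) pairs."""
    # Pass 1: assign each symbolic name a memory address.
    symbols, next_addr = {}, 0x10
    for _, operand in lines:
        if operand is not None and operand not in symbols:
            symbols[operand] = next_addr
            next_addr += 1
    # Pass 2: emit the opcode byte, then the resolved address byte.
    code = []
    for mnemonic, operand in lines:
        code.append(OPCODES[mnemonic])
        if operand is not None:
            code.append(symbols[operand])
    return bytes(code)

program = [("LOAD", "x"), ("ADD", "y"), ("STORE", "sum"), ("HALT", None)]
print(assemble(program).hex())  # each mnemonic is now an opcode byte
```

A real assembler additionally handles directives, expressions, and relocation, but the mnemonic-to-opcode translation above is the core operation.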
Debugging Tools in an Embedded System
Debugging is a methodical process to find and reduce the
number of bugs in a computer program or a piece of
electronic hardware, so that it works as expected. Debugging
is difficult when subsystems are tightly coupled, because a
small change in one subsystem can create bugs in another.
The debugging tools used in embedded systems differ greatly
in terms of their development time and debugging features.
We will discuss here the following debugging tools −
Simulators
Microcontroller starter kits
Emulator
Simulators
Code is tested for the MCU / system by simulating it on the
host computer used for code development. Simulators try to
model the behavior of the complete microcontroller in
software.
Functions of Simulators
A simulator performs the following functions −
Defines the processor or processing device family as
well as its various versions for the target system.
Monitors the detailed information of a source code part
with labels and symbolic arguments as the execution
goes on for each single step.
Provides the status of RAM and simulated ports of the
target system for each single step execution.
Monitors system response and determines throughput.
Provides trace of the output of contents of program
counter versus the processor registers.
Provides the detailed meaning of the present command.
Monitors the detailed information of the simulator
commands as these are entered from the keyboard or
selected from the menu.
Supports conditional (up to 8, 16, or 32 conditions) and
unconditional breakpoints.
Provides breakpoints and the trace, which together are
important testing and debugging tools.
Facilitates synchronizing the internal peripherals and
delays.
Microcontroller Starter Kit
A microcontroller starter kit consists of −
Hardware board (Evaluation board)
In-system programmer
Some software tools like a compiler, assembler, and linker,
sometimes as a size-limited version of a compiler.
A big advantage of these kits over simulators is that they
work in real time and thus allow easy verification of
input/output functionality. Starter kits are completely
sufficient and the cheapest option for developing simple
microcontroller projects.
Emulators
An emulator is a hardware kit or a software program (or both)
that emulates the functions of one computer system (the guest)
in another computer system (the host), so that the emulated
behavior closely resembles the behavior of the real system
(the guest).
Emulation refers to the ability of a computer program in an
electronic device to emulate (imitate) another program or
device. Emulation focuses on recreating an original computer
environment. Emulators have the ability to maintain a closer
connection to the authenticity of the digital object. An
emulator helps the user work with any kind of application or
operating system on a platform, with the software behaving as
it does in its original environment.
Peripheral Devices in Embedded Systems
Embedded systems communicate with the outside world via
their peripherals, such as the following −
Serial Communication Interfaces (SCI) like RS-232, RS-
422, RS-485, etc.
Synchronous Serial Communication Interface like I2C,
SPI, SSC, and ESSI
Universal Serial Bus (USB)
Multi Media Cards (SD Cards, Compact Flash, etc.)
DB9 ports
Criteria for Choosing Microcontroller
While choosing a microcontroller, make sure it meets the task
at hand and that it is cost effective. We must see whether an
8-bit, 16-bit or 32-bit microcontroller can best handle the
computing needs of a task. In addition, the following points
should be kept in mind while choosing a microcontroller −
Speed − What is the highest speed the microcontroller
can support?
Packaging − Is it 40-pin DIP (Dual-inline-package) or
QFP (Quad flat package)? This is important in terms of
space, assembling, and prototyping the end-product.
Power Consumption − This is an important criterion for
battery-powered products.
Amount of RAM and ROM on the chip.
Count of I/O pins and Timers on the chip.
Cost per Unit − This is important in terms of final cost
of the product in which the microcontroller is to be used.
Further, make sure you have tools such as compilers,
debuggers, and assemblers available for the microcontroller.
Most important of all, purchase the microcontroller from a
reliable source.
ALGORITHM:
DEEP NEURAL NETWORK ALGORITHM
What is a Deep Neural Network?
Let’s begin by understanding its definition and its basics.
A neural network consists of several connected units called
nodes. These are the smallest part of the neural network and
act as the neurons in the human brain. When a neuron receives
a signal, it triggers a process. The signal is passed from one
neuron to another based on input received. A complex
network is formed that learns from feedback.
The nodes are grouped into layers. A task is solved by
processing the various layers between the input and output
layers. The greater the number of layers to be processed, the
deeper the network; hence the term deep learning.
CAP (Credit Assignment Path) sheds light on the number of
layers required to solve a problem. When the CAP index is
greater than two, the neural network is considered deep.
Deep Neural Network applications are very efficient and
useful in real-life scenarios. Deep Neural Network AI-based
robots like Alpha 2 can speak, execute voice commands, write
messages, etc.
2. DIFFERENCE BETWEEN NEURAL NETWORK AND DEEP
NEURAL NETWORK
The Deep Neural Network is more creative and complicated
than the neural network. Deep Neural Network algorithms can
recognize sounds and voice commands, make predictions,
think creatively, and do analysis. They act like the human
brain.
Neural networks give one result. It can be an action, a word,
or a solution. Deep Neural Networks, on the other hand,
provide solutions by globally solving problems based on the
information given.
A specific data input and algorithm are required for a neural
network, whereas Deep Neural Networks are capable of
solving problems without a specific amount of data.
Activation Function and Weight
The small function performed by the neurons of each layer is
called the activation function. It determines whether the
signal is passed to the next connected neurons: when the
result of the incoming neuron is greater than the threshold,
the output is passed on; otherwise, it is ignored.
The influence of input on the next neuron and the overall
output is known as weight.
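The activation function and weights described above can be sketched as a single neuron in Python. The ReLU activation and the example weights and inputs are illustrative choices, not values taken from this paper:

```python
# A single-neuron sketch: weighted inputs, a bias, and a threshold-style
# activation (ReLU). Weights and inputs are illustrative values.

def relu(x):
    # Activation: the signal is passed on only when it exceeds the
    # threshold (0 here); otherwise it is ignored.
    return max(0.0, x)

def neuron(inputs, weights, bias):
    # Each input's influence on the output is scaled by its weight.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return relu(total)

print(neuron([0.5, -1.0, 2.0], [0.8, 0.2, 0.1], bias=0.1))
# 0.5*0.8 - 1.0*0.2 + 2.0*0.1 + 0.1 = 0.5, above the threshold, so passed on
```

Layers of such neurons, stacked between the input and output, form the deep network discussed above.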
3. TYPES OF DEEP NEURAL NETWORK
Some of the types are:
ANN: Artificial Neural Networks
CNN: Convolutional Neural Networks
RNN: Recurrent Neural Networks
These Deep Neural Networks mostly act as the base for the
pre-trained models in deep learning. The ANN is a deep feed-
forward neural network as it processes inputs in the forward
direction only. Artificial Neural Networks are capable of
learning non-linear functions. The activation function of
ANNs helps in learning any complex relationship between
input and output.
RNN is designed to overcome the looping constraint of ANN
in hidden layers. Deep recurrent networks are capable of
solving problems related to audio data, text data, and time-
series data. Recurrent neural networks capture sequential
information available in the input data. RNN works on
parameter sharing.
The CNN-based models in Deep Neural Networks are used in
video and image processing. Filters, or kernels, are the
building blocks of a CNN. Using convolution operations,
kernels extract relevant and correct features from the input data.
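The kernel operation just described can be sketched as a small 2D convolution. The image and filter values below are illustrative, not taken from any trained model:

```python
import numpy as np

# A minimal sketch of the convolution a CNN kernel performs: sliding a
# small filter over an image and producing a feature map.

def convolve2d(image, kernel):
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            # Each output value is the weighted sum of one image patch.
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

image = np.array([[1, 2, 0, 1],
                  [0, 1, 3, 1],
                  [2, 1, 0, 0],
                  [1, 0, 1, 2]], dtype=float)
edge_kernel = np.array([[1, -1],
                        [1, -1]], dtype=float)  # crude vertical-edge filter
print(convolve2d(image, edge_kernel))
```

In a real CNN the kernel values are learned during training rather than hand-chosen, and many kernels run in parallel to extract different features.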
There are a number of other algorithms that are quite popular
these days. Some of them are:
LSTMs: Long Short-Term Memory Networks
MLPs: Multilayer Perceptrons
RBFNs: Radial Basis Function Networks
DBNs: Deep Belief Networks
GANs: Generative Adversarial Networks
RBMs: Restricted Boltzmann Machines
SOMs: Self Organizing Maps
Autoencoders
Frameworks
The deep learning framework offers reusable code blocks.
These reusable code blocks provide various modules and help
in abstracting the logical blocks. The modules are handy and
can be easily used in the development of any deep learning
model.
For better understanding, the frameworks can be classified
into:
Low-Level Frameworks
High-Level Frameworks
Low-Level Frameworks
Low-level frameworks give the basic abstraction block. They
are flexible and can be customized as per the requirement.
Some of the popular learning frameworks are as follows:
MxNet
TensorFlow
PyTorch
High-Level Frameworks
High-level frameworks simplify the work by aggregating the
abstraction further. However, high-level frameworks do not
offer much flexibility and customization. Low-level
frameworks serve as the backend of high-level frameworks:
the source is converted into the required low-level framework
before execution.
Some of the popular high-level frameworks are as follows:
Gluon
Keras
These use MxNet and TensorFlow as their respective backends.
PyTorch by Facebook and TensorFlow by Google are popular
learning frameworks. The learning curve of PyTorch is easier
than that of TensorFlow; hence, it is preferred for prototyping
and producing Deep Neural Network learning models.
Keras is another famous framework due to its ability to
provide quick learning model prototyping.
It is recommended to study and do some research before
finalizing any Deep Neural Network learning framework.
YOLO VERSION 5 (YOU ONLY LOOK ONCE):
YOLO is an algorithm that uses neural networks to provide
real-time object detection. This algorithm is popular because
of its speed and accuracy. It has been used in various
applications to detect traffic signals, people, parking meters,
and animals.
This section introduces the YOLO algorithm for object
detection, explains how it works, and highlights some of its
real-life applications.
Introduction to object detection
Object detection is a phenomenon in computer vision that
involves the detection of various objects in digital images or
videos. Some of the objects detected include people, cars,
chairs, stones, buildings, and animals.
This phenomenon seeks to answer two basic questions:
1. What is the object? This question seeks to identify the
object in a specific image.
2. Where is it? This question seeks to establish the exact
location of the object within the image.
Object detection consists of various approaches such as fast
R-CNN, Retina-Net, and Single-Shot MultiBox Detector
(SSD). Although these approaches have solved the challenges
of data limitation and modeling in object detection, they are
not able to detect objects in a single algorithm run. The
YOLO algorithm has gained popularity because of its superior
performance over the aforementioned object detection
techniques.
What is YOLO?
YOLO is an abbreviation for the term ‘You Only Look Once’.
This is an algorithm that detects and recognizes various
objects in a picture (in real time). Object detection in YOLO
is done as a regression problem, and it provides the class
probabilities of the detected objects.
YOLO algorithm employs convolutional neural networks
(CNN) to detect objects in real-time. As the name suggests,
the algorithm requires only a single forward propagation
through a neural network to detect objects.
This means that prediction in the entire image is done in a
single algorithm run. The CNN is used to predict various class
probabilities and bounding boxes simultaneously.
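The idea of producing every prediction in one forward pass can be sketched as slicing a single output tensor. The grid size S, boxes per cell B, class count C, and the channel layout below are illustrative simplifications of the YOLO family's encoding, not the exact YOLOv5 format:

```python
import numpy as np

# A sketch of a YOLO-style output: an S x S grid where each cell
# predicts B boxes (4 coordinates + 1 confidence each) plus C class
# probabilities, all packed into one tensor from one forward pass.
# S, B, C and the layout are illustrative, not the exact YOLOv5 format.

S, B, C = 7, 2, 20
output = np.random.rand(S, S, B * 5 + C)  # stand-in for the network output

boxes = output[..., :B * 4].reshape(S, S, B, 4)  # x, y, w, h per box
confidences = output[..., B * 4:B * 5]           # one score per box
class_probs = output[..., B * 5:]                # shared per grid cell

print(boxes.shape, confidences.shape, class_probs.shape)
```

Because every box and class probability comes out of this one tensor, the whole image is processed in a single algorithm run, which is the source of YOLO's speed.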
The YOLO algorithm consists of various variants. Some of
the common ones include tiny YOLO and YOLOv3.
Why the YOLO algorithm is important
YOLO algorithm is important because of the following
reasons:
Speed: This algorithm improves the speed of detection
because it can predict objects in real-time.
High accuracy: YOLO is a predictive technique that
provides accurate results with minimal background
errors.
Learning capabilities: The algorithm has excellent
learning capabilities that enable it to learn the
representations of objects and apply them in object
detection.
YOLO uses a single bounding box regression to predict the
height, width, center, and class of objects, together with a
confidence score that represents the probability of an object
appearing in the bounding box.
Intersection over union (IOU)
Intersection over union (IOU) is a phenomenon in object
detection that describes how boxes overlap. YOLO uses IOU
to provide an output box that surrounds the objects perfectly.
Each grid cell is responsible for predicting the bounding
boxes and their confidence scores. The IOU is equal to 1 if the
predicted bounding box is the same as the real box. This
mechanism eliminates bounding boxes that are not equal to
the real box.
As a simple example of how IOU works, consider two
bounding boxes: the predicted box and the real (ground-truth)
box. YOLO ensures that the predicted box matches the real
box as closely as possible.
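The IOU described above can be computed directly from box coordinates. A minimal sketch, assuming each box is given as (x1, y1, x2, y2) corners:

```python
# Intersection over union (IOU) of two boxes, each (x1, y1, x2, y2).

def iou(box_a, box_b):
    # Corners of the overlapping region.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Overlap area is zero when the boxes do not intersect.
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (0, 0, 10, 10)))  # identical boxes: 1.0
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # partial overlap: 25/175
```

An IOU of 1 means the predicted box equals the real box; an IOU of 0 means the boxes do not overlap at all.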
Combination of these techniques
The following example shows how these techniques are
applied together to produce the final detection results.
First, the image is divided into grid cells. Each grid cell
forecasts B bounding boxes and provides their confidence
scores. The cells predict the class probabilities to establish the
class of each object.
For example, an image may contain at least three classes of
objects: a car, a dog, and a bicycle. All the predictions are
made simultaneously using a single convolutional neural network.
Intersection over union ensures that the predicted bounding
boxes are equal to the real boxes of the objects. This
phenomenon eliminates unnecessary bounding boxes that do
not meet the characteristics of the objects (like height and
width). The final detection will consist of unique bounding
boxes that fit the objects perfectly.
For example, the car, the bicycle, and the dog each end up
surrounded by their own distinct bounding box.
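The elimination of unnecessary bounding boxes described above is commonly implemented as confidence filtering followed by non-maximum suppression. The sketch below shows that common approach with illustrative thresholds; it is not necessarily the exact procedure used in YOLOv5:

```python
# Eliminate unnecessary boxes: drop detections below a confidence
# threshold, then suppress boxes that heavily overlap a stronger box
# (non-maximum suppression). Thresholds and boxes are illustrative.

def iou(a, b):
    # Intersection over union of two (x1, y1, x2, y2) boxes.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def suppress(detections, conf_thresh=0.5, iou_thresh=0.5):
    """detections: list of (box, confidence); returns survivors."""
    strong = sorted((d for d in detections if d[1] >= conf_thresh),
                    key=lambda d: d[1], reverse=True)  # best boxes first
    kept = []
    for box, conf in strong:
        # Keep a box only if it does not heavily overlap a kept box.
        if all(iou(box, k) < iou_thresh for k, _ in kept):
            kept.append((box, conf))
    return kept

dets = [((0, 0, 10, 10), 0.9),    # kept: highest confidence
        ((1, 1, 11, 11), 0.8),    # suppressed: overlaps the box above
        ((50, 50, 60, 60), 0.7),  # kept: a separate object
        ((0, 0, 10, 10), 0.3)]    # discarded: below the threshold
print(suppress(dets))
```

The surviving boxes are the "unique bounding boxes that fit the objects perfectly" in the final detection.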
Applications of YOLO
YOLO algorithm can be applied in the following fields:
Autonomous driving: YOLO algorithm can be used in
autonomous cars to detect objects around cars such as
vehicles, people, and parking signals. Object detection in
autonomous cars is done to avoid collision since no
human driver is controlling the car.
Wildlife: This algorithm is used to detect various types
of animals in forests. This type of detection is used by
wildlife rangers and journalists to identify animals in
videos (both recorded and real-time) and images. Some
of the animals that can be detected include giraffes,
elephants, and bears.
Security: YOLO can also be used in security systems to
enforce security in an area. Let’s assume that people
have been restricted from passing through a certain area
for security reasons. If someone passes through the
restricted area, the YOLO algorithm will detect him/her,
which will require the security personnel to take further
action.
How Object Detection works
Object detection can be performed using either traditional (1)
image processing techniques or modern (2) deep learning
networks.
1. Image processing techniques generally don’t require
historical data for training and are unsupervised in
nature.
o Pros: These tasks do not require annotated images,
where humans have labeled data manually (for
supervised training).
o Cons: These techniques are restricted by multiple
factors, such as complex scenarios (without a unicolor
background), occlusion (partially hidden objects),
illumination and shadows, and clutter effects.
2. Deep Learning methods generally depend on
supervised training. The performance is limited by the
computation power of GPUs that is rapidly increasing
year by year.
o Pros: Deep learning object detection is
significantly more robust to occlusion, complex
scenes, and challenging illumination.
o Cons: A huge amount of training data is required;
the process of image annotation is labor-intensive
and expensive. For example, a set of 500,000
labeled images for training a custom DL object
detection algorithm is considered small. However,
many benchmark datasets (MS COCO, Caltech,
KITTI, PASCAL VOC, V5) provide the availability
of labeled data.
SYSTEM SPECIFICATION:
HARDWARE REQUIREMENTS:
SOFTWARE REQUIREMENTS:
Operating system : Windows XP/7.
Coding Language : Python, Embedded system
Tool : TensorFlow
CONCLUSION:
In this project, we have illustrated the design of a smart
glove that is effective in guiding the blind to a desired object
in an indoor environment. It provides a more reliable,
efficient, easy-to-use, and lightweight solution to the user,
and it can bring meaning to the lives of visually impaired
people. Future work could include integrating the smart glove
with a smart cane for obstacle avoidance as well. Custom
models can also be trained to identify other desired objects.
Micro-controllers/processors with a high-performance
Graphics Processing Unit (GPU) can be used to enable the
DNN to process the video frames efficiently. Additionally, a
camera with a depth sensor can be used to measure the
distance to the required object.
REFERENCES:
1. Furui, S., Kikuchi, T., Shinnaka, Y., & Hori, C. (2004).
“Speech-to-text and speech-to-speech summarization of
spontaneous speech”. IEEE Transactions on Speech and
Audio Processing, 12(4), 401-408.
2. Gharieb, W., & Nagib, G. (2016). “Smart Cane for Blinds”.
In Proc. 9th Int. Conf. on AI Applications (pp. 253-262).
3. Ghate, A. A., & Chavan, V. G. (2017). “Smart Gloves for
Blind”. International Research Journal of Engineering and
Technology, (Vol. 04).
4. Won, S. H., & Lee, J. (2005). “Analysis of flat-type
vibration motor for mobile phone”. IEEE transactions on
magnetics, 41(10), 4018-4020.
5. Murali, S., Shrivatsan, R., Sreenivas, V., Vijjappu, S.,
Gladwin, S. J., & Rajavel, R. (2016, December). “Smart
walking cane for the visually challenged”. In Humanitarian
Technology Conference (R10-HTC), 2016 IEEE Region 10
(pp. 1-4). IEEE.
6. Guennouni, S., Ahaitouf, A., & Mansouri, A. (2014,
October). “Multiple object detection using OpenCV on an
embedded platform”. In Information Science and Technology
(CIST), 2014 Third IEEE International Colloquium in (pp.
374-377). IEEE.
7. Huang Yi, Duan Xiusheng, Chen Zhigang, Sun Shiyu. “A
study on Deep Neural Networks framework”. In Proceedings
of the IEEE conference on computer vision and pattern
recognition (pp. 770-778).
8. Bernieri, G., Faramondi, L., & Pascucci, F. (2015, June).
“A low-cost smart glove for visually impaired people
mobility”. In Control and Automation (MED), 2015 23rd
Mediterranean Conference on (pp. 130-135). IEEE.
9. Peter H. Aigrer and Brenan J. McCarragher, "Shared
Control Framework: Applied to a Robotic Aid for the Blind",
IEEE Journal Control Systems, Vol.19, no2, pp.40-46, April
1999.
10. S. Vaibhav et al., "‘Smart’ Cane for the Visually Impaired;
Design and Controlled Field Testing of an Affordable
Obstacle Detection System", TRANSED 2010; 12th
International Conference on Mobility and Transport for
Elderly and Disabled Persons, 2010.
11. A. G. Gaikwad and H. K. Waghmare, "Smart Cane
Indicating a Safe free Path to Blind People Using Ultrasonic
Sensor", International Journal on Recent and Innovation
Trends in Computing and Communication, vol. 4, no. 2, pp.
179-183, Feb. 2016.
12. J. Sakhardande, P. Pattanayak and M. Bhowmick, "Smart
Cane Assisted Mobility for the Visually
Impaired", International Journal of Computer Electrical
Automation Control and Information Engineering, vol. 6, no.
10, pp. 1262-1265, 2012.
13. M. H. Wahab, A. A. Talib, H. A. Kadir, A. Johari, A.
Noraziah, R. M. Sidek, et al., "Smart Cane; Assistive Cane for
Visually-impaired People", IJCSI International Journal of
Computer Science Issues, vol. 8, no. 4, pp. 21-27, Jul. 2011.
14. A. Agarwal, D. Kumar and A. Bhardwaj, "Ultrasonic Stick
for Blind", International Journal Of Engineering And
Computer Science, vol. 4, no. 4, pp. 11375-11378, Apr. 2015.