Object Detection System
Presented By:
1. Borole Abhilasha Rahul
2. Shirsath Harshada Pramod
3. Patil Aboli Suhas
4. Behere Nandini Dilip
Abstract:
Vision is one of the most essential human senses and plays the most important role in human perception of the surrounding environment. Hence, thousands of papers have been published on this subject, proposing a variety of computer vision products and services and developing new electronic aids for the blind. This paper introduces a proposed system that restores a central function of the visual system: the identification of surrounding objects. The method is based on the concept of local feature extraction. Simulation results using the SIFT algorithm and keypoint matching showed good accuracy for detecting objects. Thus, our contribution is to present the idea of a visual substitution system based on feature extraction and matching to recognize and locate objects in images.
Introduction:
According to the World Health Organization, there are approximately 285 million people with visual impairments; 39 million of them are blind and 246 million have decreased visual acuity. Almost 90% of the visually impaired live in low-income countries. In this context, Tunisia has identified 30,000 people with visual impairments, 13.3% of whom are blind.
These visual impairments have severe consequences for certain capabilities related to visual function:
− Daily living activities (which require vision at a medium distance)
− Communication, reading, and writing (which require vision at close and average distances)
− Evaluation of space and displacement (which require distance vision)
− The pursuit of any activity requiring prolonged visual attention.
In the computer vision community, developing visual aids for handicapped persons is one of the most active research areas. Mobility aids are intended to describe the environment close to the person, with an appreciation of the surrounding objects. These aids are essential for fine navigation in an environment described in a coordinate system relative to the user. In this paper, we present an overview of vision substitution modalities [1-12] and their functionalities. Then, we introduce our proposed system and the experimental tests.
Literature Survey:
1. Juan and O. Gwon, "A Comparison of SIFT, PCA-SIFT and SURF", International Journal of Image Processing (IJIP), 3(4):143-152, 2009.
2. Hanen Jabnoun, Faouzi Benzarti, Hamid Amiri, "Visual substitution system for blind people based on SIFT description", International Conference of Soft Computing and Pattern Recognition, IEEE, 2014.
3. Hanen Jabnoun, Faouzi Benzarti, Hamid Amiri, "Object recognition for blind people based on features extraction", IEEE IPAS'14: International Image Processing Applications and Systems Conference, 2014.
Overview:
Related works show that visual substitution devices accept input from the user's surroundings, decipher it to extract information about entities in the user's environment, and then transmit that information to the subject via auditory or tactile means, or some combination of the two. Among the various technologies developed for blind people, the majority are mobility and obstacle-detection aids [5, 8]. They are based on rules for converting images into sensory substitution data: tactile or auditory stimuli. These systems are efficient for mobility and for localizing objects, though sometimes with lower precision. However, one of the greatest difficulties of blind people is the identification of their environment and its objects [6]. Indeed, these systems can only recognize simple patterns and cannot serve as substitution tools in natural environments. Also, they do not identify objects (e.g. whether an item is a table or a chair), and in some cases they detect small objects late. In addition, some of them demand extra auditory attention, while others require a sufficiently long period of learning and testing. Among the problems in object identification, we note the variability of an object's appearance under different conditions: changes of viewpoint, illumination, and size. We also face intra-class variability (e.g. there are many types of chairs) and inter-class similarity (e.g. television and computer). For this reason, we are interested in evaluating a fast and robust computer vision algorithm to recognize and locate objects in a video scene. Thus, it is important to design a system based on object recognition and detection that meets the major challenges of the blind in three main categories of needs: displacement, orientation, and object identification.
Ultrasonic Sensor:
Distance calculation:
Distance L = (1/2) × T × C, where T is the time between the emitted pulse and the received echo and C is the speed of sound (the factor 1/2 accounts for the round trip).
Since ultrasonic waves can reflect off a glass or liquid surface and return to the sensor head, even transparent targets can be detected.
[Resistant to mist and dirt] Detection is not affected by accumulation of dust or dirt.
[Complex-shaped objects detectable] Presence detection is stable even for targets such as mesh trays or springs.
Radar technology was developed during World War II, when it was used to detect approaching aircraft; it was later applied to many other purposes, which finally led to the advanced military radars in use today. Military radars have a highly specialized design: they must be highly mobile and easily transportable, by air as well as by ground. A military radar should provide early warning and alerting along with weapon-control functions, and it is specially designed so that it can be deployed within minutes.
• The antenna acts as a transmitter, sending a narrow beam of radio waves through the air.
PROCEDURE
Components Required:
In this project we used an Arduino and an ultrasonic sensor, along with jumper wires and relay motors. The detailed list of hardware components is:
• Arduino board and Arduino cable
• Jumper wires
• Breadboard
• Ultrasonic sensor
• Relay motor
• Glue gun
• Laptop
A. Connecting the Ultrasonic Sensor:-
An ultrasonic sensor has three wires: one for Vcc, one for ground, and one for the pulse signal. The ultrasonic sensor is mounted on the servo motor, and both are connected to the Arduino board. The sensor works on the reflection principle: the Arduino provides a pulse signal to the ultrasonic sensor, which then sends an ultrasonic wave in the forward direction. Whenever an obstacle is present in front of the sensor, it reflects the waves, which are received back by the ultrasonic sensor.
When an obstacle is detected, the signal is sent to the Arduino and then on to the PC/laptop, where the Processing software shows the presence of the obstacle on the rotating RADAR screen, with the distance and the angle at which it has been detected.
B. Communicating with Arduino through PC
Another major problem related to the Arduino board was communicating with it from the PC. Since RS-232 to TTL conversion is required for this communication, we tried several methods:
[1] First, I used the MAX-232 IC to communicate with the Arduino, as with the 8051, but due to a large voltage drop and a mismatch in speed it failed to communicate.
[2] Next, I tried to use a dedicated AVR as a USB-to-serial converter, as in the original Arduino board, the difference being that we used a DIP AVR instead of the SMD Mega16U2 controller. Unfortunately, I was unable to communicate through it either.
[3] At last I had no choice but to use the FTDI FT-232R chip for USB-to-serial conversion. Finally, it worked!
PRACTICAL IMPLEMENTATION
The ultrasonic sensor has three pins:
1. VCC
2. GND
3. PULSE
Arduino boards are readily available in the electronics market, but we decided to make our own Arduino board instead of buying one. The first problem was where to start. Since all the parts on an Arduino board are SMDs, we had to find a way to replace the SMDs with DIP ICs, and we also had to make an AVR programmer in order to pursue our further work. It took us some days to determine and plan our course of action.
After that, we had to bootload the AVR chip so as to make it compatible with the Arduino IDE software. Hence, we had to find a way to bootload the Arduino using the AVR programmer. It took us a long time to make the AVR programmer, researching the communication type and architecture of the AVR, as it is not the same as an 8051 microcontroller.
ARDUINO SOFTWARE
Arduino Code:
// Distance-measurement function of the sketch; the surrounding signature
// is reconstructed here so the listing is complete (trigPin, echoPin,
// duration and distance are assumed to be declared earlier in the sketch).
int calculateDistance() {
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);
  // Set the trigPin HIGH for 10 microseconds to fire an ultrasonic pulse
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);
  // Read the echoPin: returns the sound wave travel time in microseconds
  duration = pulseIn(echoPin, HIGH);
  distance = duration * 0.034 / 2; // convert round-trip time to cm
  return distance;
}
Processing Code:
fill(98,245,31);
// simulating motion blur and slow fade of the moving line
noStroke();
fill(0,4);
rect(0, 0, width, height-height*0.065);
index1 = data.indexOf(","); // find the character ',' and put its index into the variable "index1"
angle = data.substring(0, index1); // read the data from position 0 up to index1, i.e. the angle value the Arduino board sent to the serial port
distance = data.substring(index1+1, data.length()); // read the data from index1+1 to the end, i.e. the distance value
pushMatrix();
if(iDistance>40) {
noObject = "Out of Range";
}
else {
noObject = "In Range";
}
fill(0,0,0);
noStroke();
rect(0, height-height*0.0648, width, height);
fill(98,245,31);
textSize(25);
text("10cm",width-width*0.3854,height-height*0.0833);
text("20cm",width-width*0.281,height-height*0.0833);
text("30cm",width-width*0.177,height-height*0.0833);
text("40cm",width-width*0.0729,height-height*0.0833);
textSize(40);
text("Indian Lifehacker ", width-width*0.875, height-height*0.0277);
text("Angle: " + iAngle +" °", width-width*0.48, height-height*0.0277);
text("Distance: ", width-width*0.26, height-height*0.0277);
if(iDistance<40) {
text(" " + iDistance +" cm", width-width*0.225, height-height*0.0277);
}
textSize(25);
fill(98,245,60);
translate((width-width*0.4994)+width/2*cos(radians(30)), (height-height*0.0907)-width/2*sin(radians(30)));
rotate(-radians(-60));
text("30°",0,0);
resetMatrix();
translate((width-width*0.503)+width/2*cos(radians(60)), (height-height*0.0888)-width/2*sin(radians(60)));
rotate(-radians(-30));
text("60°",0,0);
resetMatrix();
translate((width-width*0.507)+width/2*cos(radians(90)), (height-height*0.0833)-width/2*sin(radians(90)));
rotate(radians(0));
text("90°",0,0);
resetMatrix();
translate(width-width*0.513+width/2*cos(radians(120)), (height-height*0.07129)-width/2*sin(radians(120)));
rotate(radians(-30));
text("120°",0,0);
resetMatrix();
translate((width-width*0.5104)+width/2*cos(radians(150)), (height-height*0.0574)-width/2*sin(radians(150)));
rotate(radians(-60));
text("150°",0,0);
popMatrix();
}
Output:
ADVANTAGES:-
CONCLUSIONS:-
The ultrasonic sensor is connected to the Arduino UNO R3 board, and the signal reflected from an obstacle is received by the sensor.
REFERENCES
[1] http://www.arduino.cc/
[3] http://www.atmel.com/atmega328/
[4] http://en.wikipedia.org/wiki/File:16MHZ_Crystal.jpg
[5] http://www.google.co.in/imgres?imgurl=http://www.electrosome.com/wp-content/uploads/2012/06/ServoMotor.gif&imgrefurl=http://www.electrosome.com/tag/servo-motor/&h=405&w=458&sz=67&tbnid=rcdlwDVt_x0DdM:&tbnh=100&tbnw=113&zoom=1&usg=6J2h0ZocdoSMrS1qgK1I2qpTQSI=&docid=lEfbDrEzDBfzbM&sa=X&ei=a_OKUvTbD8O5rgeYv4DoDQ&ved=0CDwQ9QE