International Journal of Recent Technology and Engineering (IJRTE)
ISSN: 2277-3878, Volume-8 Issue-4, November 2019
Vision Controlled Automated Robotic Vehicle
using Raspberry Pi
Aswin Kumer S V, Pamarthi Kanakaraja, Kuruvella Naga Arun Sai Krishna, Macharla Devisri,
Parvataneni Tulasi
Abstract: The automobile industry is concentrating on the design of self-driving cars. There are now many ways to implement an automated vehicle, but the drawbacks of implementation are also significant, mainly because of the safety concerns raised during the early testing stages. In this paper, a miniature model of a self-driving robot is created and demonstrated using the Raspberry Pi with supporting sensors and motor drivers. The paper therefore describes an application that addresses the safety measures of the autonomous vehicles expected in the near future, and shows how such an application can be implemented using a Raspberry Pi, a camera module and an ultrasonic sensor. Considering the different features and the cost, a small-scale two-wheel robotic vehicle prototype has been designed, in which the Raspberry Pi is the central processor. The camera module captures images, and when a captured image contains the colour of a traffic light, the vehicle responds accordingly: if the light is red, the motors stop, just as the brakes of a real car would be applied; if the light is green, the motors run and the vehicle moves in its intended direction. In addition, when the ultrasonic sensor detects an object near the vehicle, the vehicle changes its direction of motion. This behaviour is described throughout the paper.
Index Terms: Raspberry pi, Ultrasonic sensor, Web Camera
I. INTRODUCTION
Vehicles are an important part of our daily life, and their number is rising day by day. Vehicle density in India increased so much between 2001 and 2019 that most accidents now arise from heavy traffic and over-speed driving. Many drivers do not follow the speed limits in particular areas, and this increases the number of accidents per year in the country. Globally, it has been a challenge for the automobile industry to provide safety and security for the people who use vehicles in their day-to-day life [1].
Sadly, even public transportation is in a state where many accidents are caused by old vehicles that do not work properly in emergency situations. These problems have given rise to the new era of autonomous cars, machines that are also called "unmanned cars" [2]. In recent years there has been much research on autonomous cars, aiming to bring a new generation of vehicles that can drive on roads by themselves. These autonomous cars are capable of making decisions about their environment through programming developed to enhance the safety and security of people and to reduce the damage caused to vehicles [1].
II. RELATED WORK
Autonomous cars are among the notable technological advancements of recent years. Some of the recently developed prototypes that motivated this work include a vision-based deep learning methodology for self-driving cars [1][2], which proposes an agent that can guide automotive vehicles the way human drivers do. Many papers express their views on the invention of such self-driving cars [1]. Some of them deal with the navigation systems to be included in self-driving cars and the different mechanisms to be applied in these vehicles [4]. Many of the technologies used to develop these vehicles, such as fuzzy logic, sensors and IoT, are mechanisms that are already in wide use.
III. PROPOSED METHODOLOGY
The system consists of an ultrasonic sensor, a camera, a Raspberry Pi board and the OpenCV software. The camera detects the colour of the traffic light, and the ultrasonic sensor measures the distance to the object or obstacle. Both the ultrasonic sensor and the camera are interfaced with the Raspberry Pi board, and their data are processed using OpenCV.
A. Raspberry Pi 4 Model B
The Raspberry Pi 4 Model B is the latest product in the Raspberry Pi range of computers. It offers significant increases in processor speed, multimedia performance, memory and connectivity compared to the prior-generation Raspberry Pi 3 Model B+, while maintaining backward compatibility and similar power consumption. For the end user, the Raspberry Pi 4 Model B provides desktop performance comparable to entry-level x86 PC systems. The main features of this device include a high-performance 64-bit
quad-core processor, dual-display support at resolutions up to 4K through a pair of micro-HDMI ports, hardware video decoding at up to 4Kp60, up to 4 GB of RAM, dual-band 2.4/5.0 GHz wireless LAN, Bluetooth 5.0, Gigabit Ethernet, USB 3.0, and PoE capability (via a separate PoE HAT add-on) [1]. The dual-band wireless LAN and Bluetooth modules have modular compliance certification, allowing the board to be designed into end products with significantly reduced compliance testing, improving both cost and time to market.
Fig. 3: The HC-SR04 Ultrasonic Sensor
Fig. 1: Raspberry Pi 4 Model B
Fig. 4: Input trig and output echo of Ultrasonic Sensor
B. C310 HD Web Camera:
For a bright and crystal-clear image, the Logitech C310 HD Webcam offers 1280 x 720 pixel resolution. This Logitech webcam allows the user to capture 720p HD video and upload it with a single click to social networking sites, and Logitech Vid HD makes video calling easy and fast. In this system the camera is used to capture the traffic-light image and pass it to the processor. The camera runs from a 3.3 V supply, allowing the board to use the remaining power, and the power supply requires 1 A for the inputs and outputs [4].
Fig. 5: Basic principle of the ultrasonic sensor
IV. BLOCK DIAGRAM
Fig. 2: C310 HD Web Camera
C. Ultrasonic Sensors:
Ultrasonic detectors measure distance using the properties of sound waves. The sensor is used to measure the range of an object. The horizontal and vertical distances are indicated by the camera for a fixed field of view, so that the sensor detects the exact distance of an object within its range [3][7][5]. The angle of sight is used to measure the distance of the barrier. The main task of obstacle avoidance is to keep the vehicle in a collision-free state and drive it along an obstacle-free path. The distance of the target is also found through the process of mapping. The sensor is a very small device with high sensitivity and low power consumption.
The detector operates by transmitting and receiving sound waves [3]. The detector emits waves at a fixed frequency, and these waves return in the form of an echo when an object comes into their path [8]. The time between transmitting the sound waves and receiving the echo is then determined [7]. One demerit of this method is that the measurement degrades when the reflecting surface is not adequately formed or oriented.
Fig. 6: Fundamental steps of the proposed methodology
V. IMPLEMENTATION
A. Obstacle Detection:
This module measures the distance between the automated vehicle and an obstacle using the HC-SR04 ultrasonic sensor module. The TRIG input pin is held HIGH for a short duration, about 10 µs, to start the transmission of the 40 kHz ultrasonic burst [3][4]. The sensor then transmits eight consecutive ultrasonic pulses at 40 kHz. When these pulses are transmitted, the ECHO pin of the sensor changes to the HIGH state [6] and remains HIGH until the reflected wave comes back to the sensor. From the time the ECHO pin stays HIGH, which is the time taken for the wave to return to the automated vehicle, the obstacle distance between the vehicle and the obstacle can be calculated using the formula below.
Distance = Velocity of sound × (Time / 2)
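As a rough illustration of this measurement, the following minimal Python sketch triggers the HC-SR04 with the RPi.GPIO library and converts the echo duration into centimetres using the formula above. The BCM pin numbers (23 for TRIG, 24 for ECHO) are our own assumed wiring, not values stated in this paper.

import time
import RPi.GPIO as GPIO

TRIG_PIN = 23   # assumed BCM pin for TRIG; adjust to the actual wiring
ECHO_PIN = 24   # assumed BCM pin for ECHO

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG_PIN, GPIO.OUT)
GPIO.setup(ECHO_PIN, GPIO.IN)

def measure_distance_cm():
    """Return the obstacle distance in centimetres from one trigger/echo cycle."""
    # Hold TRIG high for about 10 microseconds to start the 40 kHz burst.
    GPIO.output(TRIG_PIN, True)
    time.sleep(0.00001)
    GPIO.output(TRIG_PIN, False)

    # ECHO stays high for the round-trip time of the ultrasonic wave.
    pulse_start = pulse_end = time.time()
    while GPIO.input(ECHO_PIN) == 0:
        pulse_start = time.time()
    while GPIO.input(ECHO_PIN) == 1:
        pulse_end = time.time()

    round_trip = pulse_end - pulse_start        # round-trip time in seconds
    return (34300.0 * round_trip) / 2           # speed of sound is roughly 34300 cm/s

if __name__ == "__main__":
    try:
        while True:
            print("Distance: %.1f cm" % measure_distance_cm())
            time.sleep(0.5)
    finally:
        GPIO.cleanup()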
B. Speed Control based on color detection using USB
Camera
In this module, a USB camera is attached to the Raspberry Pi. Whether the motor should run or not is decided from the images captured by the camera. Each image captured by the USB camera connected to the Raspberry Pi is processed to determine whether its dominant colour is green or red. If the dominant colour is green, the GPIO output pin is driven high to move the vehicle by running its motor. As with a traffic signal, if the colour is red, the GPIO output pin is set low to stop the vehicle. A Python program determines the colour in the captured images and changes the state of the GPIO output pin accordingly.
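The following Python/OpenCV sketch shows one way this decision could be coded. The GPIO pin number (18), the HSV colour thresholds and the pixel-count cut-off are illustrative assumptions rather than values given in the paper.

import cv2
import RPi.GPIO as GPIO

MOTOR_PIN = 18                                  # assumed GPIO pin driving the motor driver
GPIO.setmode(GPIO.BCM)
GPIO.setup(MOTOR_PIN, GPIO.OUT)

def dominant_signal_colour(frame):
    """Return 'red', 'green' or None depending on which colour mask dominates."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Approximate HSV ranges for traffic-light red (hue wraps around 0) and green.
    red_mask = cv2.bitwise_or(
        cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)),
        cv2.inRange(hsv, (170, 120, 70), (180, 255, 255)))
    green_mask = cv2.inRange(hsv, (40, 80, 70), (90, 255, 255))
    red_pixels = cv2.countNonZero(red_mask)
    green_pixels = cv2.countNonZero(green_mask)
    if max(red_pixels, green_pixels) < 500:     # too few coloured pixels to decide
        return None
    return "red" if red_pixels > green_pixels else "green"

cap = cv2.VideoCapture(0)                       # USB webcam attached to the Raspberry Pi
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            continue
        colour = dominant_signal_colour(frame)
        if colour == "green":
            GPIO.output(MOTOR_PIN, GPIO.HIGH)   # green light: run the motor
        elif colour == "red":
            GPIO.output(MOTOR_PIN, GPIO.LOW)    # red light: stop the vehicle
finally:
    cap.release()
    GPIO.cleanup()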
Fig. 8: Image captured by camera
VI. RESULTS AND DISCUSSIONS
This project deals with the automatic speed control of the vehicle and with obstacle detection, which helps to control and reduce the accidents caused by over-speed driving. Autonomous cars will improve road safety and fuel efficiency and increase productivity. The obstacle detection algorithm can avoid an obstacle and take another path. The module processes the data it streams by capturing frames with the Raspberry Pi camera module; using OpenCV with Python, the understanding of the image through object identification is improved. The captured image is processed by the Raspberry Pi with the program loaded on it, the colour is recognised, and the motors run accordingly.
Fig. 7: Working model of the ultrasonic sensor.
The ultrasonic sensor gives the time taken by the waves to return to it after hitting the obstacle [3][7]. If the sensor and the obstacle are not aligned exactly perpendicular to each other, the wave is reflected at an angle, which can cause it to bounce off another object before returning to the automated vehicle. This results in wrong calculations of the distance between the automated vehicle and the obstacle; such measurements are larger than the actual obstacle distance. In the 50 cm test case, the readings obtained from the sensor with the loaded automated vehicle were more than two or three times the actual distance [5]. These measurements are ignored when the model is executed. However, since the distance cannot always be calculated accurately, the performance of the automated vehicle cannot be fully predicted from these calculations.
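The paper does not spell out how the inflated readings are ignored; one simple approach, sketched below purely as an assumption, is to take a small burst of measurements and keep the median, which discards an occasional doubled or tripled value before it reaches the control logic.

import statistics

def filtered_distance_cm(measure, samples=5):
    """Take a short burst of readings from the callable 'measure' and return the
    median, so that an occasional reading two or three times the true distance
    does not influence the obstacle decision."""
    readings = [measure() for _ in range(samples)]
    return statistics.median(readings)

# Example (using the hypothetical measure_distance_cm() from the earlier sketch):
# distance = filtered_distance_cm(measure_distance_cm)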
Fig. 9: (A)&(B) The working model.
Table I: Distance measurements by the automated vehicle (two readings per test).

Test Name | Programming Language | Mean          | Standard Deviation
Test01    | Python               | 50.30, 49.83  | 0.259, 3.407
Test02    | Python               | 18.16, 18.25  | 0.055, 0.2
Test03    | Python               | 5.666, 5.7    | 0.18, 0.1866
Fig. 10: Overall Implementation
on liquid crystals", Rasayan Journal of Chemistry, vol. 10, no. 1, pp.
16-24.
VII. CONCLUSION
The Raspberry Pi used in this implementation runs its tasks concurrently: it continuously checks the ultrasonic sensor data and acts accordingly to avoid obstacles, while at the same time sensing the traffic-light colours to move the vehicle. Finally, the autonomous vehicle based on colour identification is designed and implemented using the Raspberry Pi 4 Model B with a real-time capable operating system.
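As an assumed structure only, the sketch below shows how the obstacle check and the drive decision described above could be coordinated with Python threads. The stop distance, the placeholder sensor function and the printed actions are illustrative; in the real vehicle the sensor function would be the HC-SR04 routine from Section V and the prints would be motor driver commands.

import threading
import time

vehicle_enabled = threading.Event()   # set by the colour-detection loop when green is seen
obstacle_clear = threading.Event()    # set while no obstacle is within the stop distance
obstacle_clear.set()

STOP_DISTANCE_CM = 20                 # assumed threshold, not a value from the paper

def read_distance_cm():
    # Placeholder: in the vehicle this would call the HC-SR04 measurement
    # routine (e.g. measure_distance_cm() from the earlier sketch).
    return 100.0

def ultrasonic_task():
    while True:
        if read_distance_cm() < STOP_DISTANCE_CM:
            obstacle_clear.clear()    # obstacle ahead: the drive task must stop or steer away
        else:
            obstacle_clear.set()
        time.sleep(0.1)

def drive_task():
    while True:
        if vehicle_enabled.is_set() and obstacle_clear.is_set():
            print("drive forward")            # motor driver commands would go here
        else:
            print("stop / change direction")
        time.sleep(0.1)

if __name__ == "__main__":
    threading.Thread(target=ultrasonic_task, daemon=True).start()
    threading.Thread(target=drive_task, daemon=True).start()
    while True:
        time.sleep(1)                 # the colour-detection loop would run here and toggle vehicle_enabled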
REFERENCES
1. Anjali Hemant, Tiple Hemant, Gura Sagar, "Prototype of Autonomous Car Using Raspberry Pi", International Journal of Engineering Research in Electronics and Communication Engineering (IJERECE), Vol. 5, Issue 4, April 2018.
2. R. D. Thombare, P. M. Sawant, P. Sawant, A. Sawant, V. P. Naik, "Automatic Speed Control of Vehicle using Video Processing", Proceedings of the 2nd International Conference on Inventive Communication and Computational Technologies (ICICCT 2018).
3. D. S. Vidhya, Delicia Perlin Rebelo, Cecilia Jane D'Silva, Linford William Fernandes, "Obstacle Detection using Ultrasonic Sensor", International Journal of Engineering Research in Electronics and Communication Engineering (IJERECE), Vol. 2, Issue 11, April 2016.
4. Fayaz Shahdib, Md. Wali Ullah Bhuiyan, Md. Kamrul Hasan, Hasan Mahmud, (2013) "Obstacle Detection and Object Size Measurement for Autonomous Mobile Robot using Sensor", International Journal of Computer Applications.
5. Stewart Watkiss, "Design and build a Raspberry Pi robot" [Online], available at: http://www.penguintutor.com/electronics/robot/rubyrobot-detailedguide.pdf
6. Li, M., Zhao, C., Hou, Y. & Ren, M., "A New Lane Line Segmentation and Detection Method based on Inverse Perspective Mapping", International Journal of Digital Content Technology and its Applications, Volume 5, Number 4, April 2011, pp. 230-236.
7. Interfacing HC-SR04 ultrasonic sensor with Raspberry Pi [Internet]. [cited 2016 November 14]. Available from: https://electrosome.com/hc-sr04-ultrasonic-sensor-raspberry-pi
8. K. Ramesh, S. V. Aswin Kumer, "Efficient Health Monitoring System Using Sensor Networks", International Journal of Scientific & Engineering Research, Volume 3, Issue 6, June 2012, ISSN 2229-5518.
9. Inthiyaz, S., Madhav, B.T.P. & Kishore, P.V.V. 2018, "Flower image segmentation with PCA fused coloured covariance and Gabor texture features based level sets", Ain Shams Engineering Journal, vol. 9, no. 4, pp. 3277-3291.
10. Inthiyaz, S., Madhav, B.T.P. & Madhav, P.V.V. 2017, "Flower segmentation with level sets evolution controlled by colour, texture and shape features", Cogent Engineering, vol. 4, no. 1.
11. Inthiyaz, S., Madhav, B.T.P., Kishore Kumar, P.V.V., Vamsi Krishna, M., Sri Sai Ram Kumar, M., Srikanth, K. & Arun Teja, B. 2016, "Flower image segmentation: A comparison between watershed, marker controlled watershed, and watershed edge wavelet fusion", ARPN Journal of Engineering and Applied Sciences, vol. 11, no. 15, pp. 9382-9387.
12. Katta, S., Siva Ganga Prasad, M. & Madhav, B.T.P. 2018, "Teaching learning-based algorithm for calculating optimal values of sensing error probability, throughput and blocking probability in cognitive radio", International Journal of Engineering and Technology (UAE), vol. 7, no. 2, pp. 52-55.
13. Tripathi, D.P., Pardhasaradhi, P. & Madhav, B.T.P. 2018, "Statistical parameters-based image enhancement techniques in pure and Nano dispersed 6O.O8 liquid crystalline compounds", Phase Transitions, vol. 91, no. 8, pp. 821-832.
14. Madhav, B.T.P., Pardhasaradhi, P., Kishore, P.V.V., Manepalli, R.K.N.R. & Pisipati, V.G.K.M. 2016, "Image enhancement of nano-dispersed N-(p-n-decyloxybenzylidene)-p-n-hexyloxy aniline using combined unsharp masking", Liquid Crystals Today, vol. 25, no. 4, pp. 74-80.
15. Rambabu, M., Prasad, K.R.S., Madhav, B.T.P., Venu Gopalarao, M., Pardhasaradhi, P. & Pisipati, V.G.K.M. 2016, "Determination of phase transitions in hydrogen bonded complexes (NOBA: PFOA) using texural image processing techniques", ARPN Journal of Engineering and Applied Sciences, vol. 11, no. 1, pp. 520-527.
16. Sivaram, K., Rao, M.C., Giridhar, G., Tejaswi, M., Madhav, B.T.P., Pisipati, V.G.K.M. & Manepalli, R.K.N.R. 2017, "Synthesis and characterization of thiol-capped silver nanoparticles and their effect on liquid crystals", Rasayan Journal of Chemistry, vol. 10, no. 1, pp. 16-24.
AUTHORS PROFILE
Dr. Aswin Kumer S V graduated in Electronics and Communication Engineering from Pallavan College of Engineering, Kanchipuram, in April 2008 and received his Master's degree in Embedded System Technology from SRM University, Kanchipuram, in May 2012. He received his doctoral degree for the implementation of image fusion using Artificial Neural Networks from SCSVMV (Deemed to be University), Enathur, in February 2019. He is working as an Associate Professor in the Department of Electronics and Communication Engineering at KLEF (Deemed to be University), Guntur, and has more than 11 years of teaching experience. His areas of interest are Digital Communication and Digital Signal Processing.
Pamarthi Kanakaraja is currently working as an Assistant Professor at KLEF (Deemed to be University). He has 8 years of working experience in embedded design and programming concepts and is a technical adviser on embedded design for many engineering and polytechnic (diploma) students. He has published papers in various international journals and is a regular contributor to EFY (Electronics For You), an international technical magazine. His areas of research are embedded design, the Internet of Things (IoT) and Artificial Intelligence (AI). He is now doing research in radio frequency and microwave engineering and has done many projects based on IoT and embedded systems.
Kuruvella Naga Arun Sai Krishna is an undergraduate student in the Department of Electronics and Communication Engineering, Koneru Lakshmaiah Education Foundation (Deemed to be University), Vaddeswaram, A.P., India. His areas of interest are VLSI technology, Machine Learning, embedded design and the Internet of Things (IoT). He has done several projects in digital systems and digital communications; one of the major projects is an automatic irrigation system that senses soil moisture content, which is very useful in real-time farming applications. He has completed the certification "Introduction to FPGA Design for Embedded Systems" offered on Coursera and has done various academic projects related to embedded systems and digital electronics.
Macharla Devisri is an undergraduate student in the Department of Electronics and Communication Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, A.P., India. Her areas of interest are VLSI, Machine Learning, embedded design and the Internet of Things (IoT). She has done a few hardware projects in her core domain and has basic programming knowledge in VLSI. She is passionate about research in Machine Learning and wants to pursue research that helps people in every way. She has done a few real-time projects in digital system design, embedded systems and CMOS VLSI technology; one of them is an Arduino-based heart rate monitoring system using a heartbeat sensor.
Parvataneni Tulasi is an undergraduate student in the Department of Electronics and Communication Engineering, Koneru Lakshmaiah Education Foundation (Deemed to be University), Vaddeswaram, A.P., India. Her areas of interest are VLSI, Machine Learning, embedded design and the Internet of Things (IoT). She has done some academic projects in the core competencies required in the VLSI domain and has knowledge of basic VLSI programming. She is interested in research in Robotics and in making sure that technology helps people in all possible ways; in this regard she has studied the design of a basic line-following robotic vehicle. She has also done some academic projects using embedded systems for real-time analysis, such as a weather monitoring system.