Autonomous Self Driving Car

International Journal of Science and Research (IJSR), Volume 8, Issue 5, May 2019. ISSN: 2319-7064. Paper ID: ART20197435. DOI: 10.21275/ART20197435. Licensed under Creative Commons Attribution CC BY.

Vikrant Thakur, Naveen Meena, Chetan Atrai, Amit Aryan, Mamta Tholia

1 Student, Electrical and Instrumentation Department, Sant Longowal Institute of Engineering and Technology, Sangrur-148106, Punjab, India
2 Student, Electrical and Electronics Department, Maharaja Surajmal Institute of Technology, C-4, Lal Sain Mandir Marg, Janak Puri, New Delhi-110058, Delhi, India

Abstract: The aim of this project is to build a prototype autonomous car using a Raspberry Pi as the processing chip. A Pi camera along with an ultrasonic sensor provides the necessary data from the real world to the car. The car is capable of running safely and intelligently without any risk of human error. Algorithms such as lane detection and obstacle detection are combined to provide the necessary control to the car.

Keywords: Raspberry Pi, Pi camera, Ultrasonic sensor, Motor driver, Motors

1. Introduction

Traffic accidents have become one of the most serious problems in today's world. Roads are the most commonly chosen mode of transportation and provide the finest connections among all modes. The most frequent traffic problem is driver negligence, and it has become more serious with the increase in the number of vehicles. Increasing safety and saving human lives is one of the basic functions of smart cars. Smart cars are advanced systems that aim to provide innovative services relating to different modes of transport and traffic management. Such systems enable users to be better informed and to make safer, more coordinated, and smarter use of transport networks. Road accidents can be reduced with the help of road lanes or white markers that help the driver identify the road area and the non-road area. A lane is a marked part of the road that can be used by a single line of vehicles to control and guide drivers so that traffic conflicts are reduced.

2. Hardware Design

2.1 List of Hardware

A pre-built four-wheel-drive (4WD) chassis is used as a base on which the following hardware components are fitted:
- Raspberry Pi 3 (Model B) for CPU computations
- Motor driver IC L293D, which can control four motors
- Jumper wires to connect the individual components
- L-shaped aluminium strip to support the camera
- Pi camera
- Ultrasonic sensor to detect obstacles
- Chassis

2.2 Hardware and Software Description

2.2.1 Raspberry Pi

The Raspberry Pi is a credit card-sized single-board computer. There are currently five Raspberry Pi models on the market: the Model B+, the Model A+, the Model B, the Model A, and the Compute Module (currently only available as part of the Compute Module development kit). All of these models use the same SoC (System on Chip, a combined CPU and GPU), the BCM2835, but other hardware features differ. The A and B use the same PCB, whilst the B+ and A+ are a new design of very similar form factor. The Compute Module is an entirely different form factor and cannot be used standalone. In this project, we have used the Raspberry Pi 3 Model B.

Figure 1

2.2.2 Pi Camera

The Pi camera is the camera module available for the Raspberry Pi. It can be used to take high-definition videos as well as still photographs.
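The paper does not list its capture code. As a minimal sketch, assuming the commonly used picamera Python library (an assumption, since the paper does not name the library it used for capture), frames can be grabbed directly as NumPy arrays that OpenCV can process:

```python
from picamera import PiCamera
from picamera.array import PiRGBArray
import time

camera = PiCamera()
camera.resolution = (640, 480)     # resolution and framerate are illustrative choices
camera.framerate = 30
raw = PiRGBArray(camera, size=(640, 480))
time.sleep(2)                      # give the sensor time to warm up

# Grab frames continuously as BGR arrays that OpenCV can consume directly.
for frame in camera.capture_continuous(raw, format="bgr", use_video_port=True):
    image = frame.array            # NumPy array of shape (480, 640, 3)
    # ... hand `image` to the lane-detection pipeline described in Section 4.2 ...
    raw.truncate(0)                # clear the buffer for the next frame
    break                          # a single frame is enough for this sketch
```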
2.2.3 Ultrasonic Sensors

Ultrasonic sensors (also known as transceivers when they both send and receive, but more generally called transducers) evaluate attributes of a target by interpreting the echoes of sound waves. In this project, they are used to measure the distance of obstacles from the car.

Figure 2

2.2.4 Raspbian OS

Of the operating systems available for the Raspberry Pi (Arch, RISC OS, Plan 9, Raspbian), Raspbian comes out on top as the most user-friendly and best-looking, with the best range of default software, and it is optimized for the Raspberry Pi hardware. Raspbian is a free operating system based on Debian (Linux) and is available for free from the Raspberry Pi website.

2.2.5 Python

Python is a widely used general-purpose, high-level programming language [18, 20, 21]. Its syntax allows programmers to express concepts in fewer lines of code than in languages such as C, C++, or Java [20, 21].

2.2.6 RPi.GPIO Python Library

The RPi.GPIO Python library allows you to easily configure and read or write the input/output pins on the Pi's GPIO header from within a Python script [18, 20]. This package is not shipped along with Raspbian and has to be installed separately.
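As a hedged illustration of how this library drives the L293D, the sketch below configures one GPIO output per enable/input line and defines the forward, turn, and stop helpers implied by the side-paired drive scheme described in Section 2.3 below; the BCM pin numbers are arbitrary placeholders, not values taken from the paper.

```python
import RPi.GPIO as GPIO

# BCM pin numbers are placeholders: the paper wires L293D pins 1, 2 and 7
# (the enable and the two inputs of one channel) to the Pi's GPIOs but does
# not say which GPIO numbers are used.
LEFT_IN1, LEFT_IN2, LEFT_EN = 23, 24, 18      # left-side motor pair
RIGHT_IN1, RIGHT_IN2, RIGHT_EN = 5, 6, 12     # right-side motor pair

GPIO.setmode(GPIO.BCM)
for pin in (LEFT_IN1, LEFT_IN2, LEFT_EN, RIGHT_IN1, RIGHT_IN2, RIGHT_EN):
    GPIO.setup(pin, GPIO.OUT)
    GPIO.output(pin, GPIO.LOW)

def _drive(left_fwd, right_fwd):
    """Set each side pair to rotate forward (True) or backward (False)."""
    GPIO.output(LEFT_IN1, GPIO.HIGH if left_fwd else GPIO.LOW)
    GPIO.output(LEFT_IN2, GPIO.LOW if left_fwd else GPIO.HIGH)
    GPIO.output(RIGHT_IN1, GPIO.HIGH if right_fwd else GPIO.LOW)
    GPIO.output(RIGHT_IN2, GPIO.LOW if right_fwd else GPIO.HIGH)
    GPIO.output(LEFT_EN, GPIO.HIGH)
    GPIO.output(RIGHT_EN, GPIO.HIGH)

def forward():
    _drive(True, True)        # both sides rotate in the same direction

def backward():
    _drive(False, False)

def turn_left():
    _drive(False, True)       # sides rotate in opposite directions to turn

def turn_right():
    _drive(True, False)

def stop_car():
    GPIO.output(LEFT_EN, GPIO.LOW)
    GPIO.output(RIGHT_EN, GPIO.LOW)
```

Speed control could be added by driving the enable pins with GPIO.PWM instead of plain HIGH/LOW levels.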
2.2.7 OpenCV

OpenCV (Open Source Computer Vision) is a library of programming functions aimed mainly at real-time computer vision. It contains over 2500 optimized algorithms, including both classical and state-of-the-art computer vision algorithms, which can be used for image processing, detection and face recognition, object identification, classification of actions, tracking, and other functions. The library allows these features to be implemented on computers with relative ease and provides a simple computer vision infrastructure for prototyping sophisticated applications quickly. It is used extensively by companies such as Google, Yahoo, Microsoft, Intel, IBM, Sony, Honda, and Toyota, by startups such as Applied Minds, VideoSurf, and Zeitera, and by many research groups and government bodies. The library is written in C++, but wrappers are available for Python as well. In our project it is used to detect the road and guide the car on unknown roads.

2.3 Hardware Components Connection

The four wheels of the chassis are connected to four separate motors. The motor driver IC L293D is capable of driving two motors simultaneously. The rotation of the wheels is synchronized by side, i.e. the left front and left back wheels rotate in sync, and the right front and right back wheels rotate in sync. Thus, the pair of motors on each side is given the same digital input from the L293D at any moment. This lets the car move forward and backward when the wheels on both sides rotate in the same direction at the same speed; the car turns when the left-side wheels rotate in the direction opposite to the right-side wheels.

The chassis has two shelves over the wheels, separated by approximately 2 inches. The IC is fixed on the lower shelf with two 0.5-inch screws. It is permanently connected to the motor wires, and the necessary jumper wires are drawn from the L293D to connect to the Raspberry Pi. The rest of the space on the lower shelf is taken up by 8 AA batteries, which provide the power to run the motors. To control the motor connected to pin 3 (O1) and pin 6 (O2) of the L293D, pins 1, 2, and 7 are used; these are connected to the GPIOs of the Raspberry Pi via jumper wires.

Figure 3

Figure 4

The Raspberry Pi case is glued to the top shelf along with the L-shaped aluminium strip. The Pi fits in the case, and the aluminium strip supports the camera, which is fitted on a servo motor, and the ultrasonic sensor [1, 18, 20]. A Wi-Fi dongle is attached to the USB port of the Raspberry Pi in order to connect to it wirelessly. The complete connection of the Raspberry Pi with the motor controller L293D is shown in Figure 3 [9, 22]. Since the Raspberry Pi needs its own IP address, it has to be connected to a Wi-Fi router or hotspot. To make the Pi recognize the router every time it boots, navigate to the file "/etc/network/interfaces" and add the entries for your router so that the Pi connects to it after every reboot (an illustrative example is given below).
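The exact lines are not reproduced in the text above. Purely as an illustration, on older Raspbian releases that manage Wi-Fi through /etc/network/interfaces, a typical entry looks like the following, with the SSID and passphrase as placeholders (newer Raspbian images configure Wi-Fi through wpa_supplicant.conf instead):

```
auto wlan0
allow-hotplug wlan0
iface wlan0 inet dhcp
    wpa-ssid "YourNetworkSSID"
    wpa-psk  "YourNetworkPassphrase"
```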
3. Lane Detection

Lane detection is a significant method in vision-based driver support systems and can be used for vehicle routing, lateral control, crash avoidance, and lane departure warning systems. Road conditions that make this problem more complex include different varieties of lanes (straight or curved), occlusions caused by obstacles, fog, darkness, and illumination changes (such as night-time). Lane detection is therefore the method of locating lanes in an image and is a significant enabling technology in different automobile applications, including lane departure detection and warning, cruise control, lateral control, and self-directed driving. A lane departure warning system (LDWS) is a technology designed to warn a driver when the vehicle begins to depart from its lane. An effective lane detection system [13] will navigate autonomously or assist the driver in all types of lanes: straight and curved, white and yellow, single and double, solid and broken, and pavement or highway lane boundaries. The system should be able to detect lanes even under noisy conditions such as fog, shadow, and stains.

Benefits of Lane Detection
- Gives assistance and details to pedestrians and drivers
- Uniformity of the markings is an important factor in minimizing confusion and uncertainty about their meaning
- Allows vehicle drivers to drive safely

4. Project Phases

4.1 Phase I: Car with Autonomous Obstacle Avoidance

In robotics, obstacle avoidance is the task of satisfying the control objective subject to non-intersection or non-collision position constraints. Normally, obstacle avoidance involves the pre-computation of an obstacle-free path along which the controller then guides the robot. Although inverse perspective mapping can estimate the distance of objects far from the car with the help of known camera parameters and a generated model, it takes more computation. Using an ultrasonic sensor is the better option in this case, as it does not require heavy CPU computation and both detects obstacles and helps us find their distance.

Ultrasonic sensors are used to detect the distance of nearby flat objects so as to avoid obstacles in general. The sensor is a very low-power device and is used extensively in small autonomous robots and cars. Its working can be explained as follows: the sensor transmits an ultrasonic pulse towards the object, and the reflection is picked up by the sensor's receiver. From the time taken to receive the reflected signal, the distance of the nearby vehicle or other obstacle is calculated. One demerit of this approach is that if the reflecting surface is at an angle to the sensor, the distance measurement may be ambiguous and has to be supported by other techniques, such as OpenCV-based image processing, before any decision about a turn is made.

Figure 5

The ultrasonic sensor is mounted on a servo motor at the front of the chassis. The sensor rotates periodically and checks for potentially threatening obstacles which may or may not be in the line of motion but could hit the car if no precaution is taken.

4.1.1 Algorithm

The surroundings are scanned at a fixed interval of 300 ms, and the following steps are repeated every interval (a sketch of this loop is given after the list):
1) Scan the surroundings and calculate the distance of the obstacles from the car.
2) The minimum threshold distance that is safe for the car is 1 metre. If the calculated distance is less than this threshold, stop the car and check the other sides.
3) Rotate the car and move ahead.
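A minimal sketch of this loop is given below, assuming an HC-SR04-style sensor with separate trigger and echo pins (the paper does not name the sensor model) and reusing the motor helpers sketched in Section 2.2.6; the periodic servo sweep that checks the other sides is left out.

```python
import time
import RPi.GPIO as GPIO

# BCM pin numbers for the trigger and echo lines are placeholders; the paper
# does not state which GPIOs are used for the sensor.
TRIG, ECHO = 20, 21
SAFE_DISTANCE_M = 1.0        # threshold from step 2
SCAN_INTERVAL_S = 0.3        # 300 ms scan interval

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)
GPIO.output(TRIG, GPIO.LOW)

def measure_distance_m():
    """Time the echo of a short ultrasonic burst and convert it to metres."""
    GPIO.output(TRIG, GPIO.HIGH)         # 10 microsecond trigger pulse
    time.sleep(10e-6)
    GPIO.output(TRIG, GPIO.LOW)

    start = stop = time.time()
    while GPIO.input(ECHO) == 0:         # wait for the echo pulse to start
        start = time.time()
    while GPIO.input(ECHO) == 1:         # wait for the echo pulse to end
        stop = time.time()

    # Sound travels at roughly 343 m/s and covers the distance twice.
    return (stop - start) * 343.0 / 2.0

def avoidance_loop(forward, stop_car, turn):
    """Steps 1-3, repeated every SCAN_INTERVAL_S seconds.

    `forward`, `stop_car` and `turn` are the motor helpers sketched in
    Section 2.2.6; a fuller version would also sweep the servo to check the
    other sides before choosing a new direction.
    """
    while True:
        if measure_distance_m() < SAFE_DISTANCE_M:
            stop_car()                   # step 2: obstacle inside the safe distance
            turn()                       # step 3: rotate, then move ahead
            time.sleep(0.5)
        forward()
        time.sleep(SCAN_INTERVAL_S)
```

A production version would add a timeout to the echo-wait loops and average several readings to reject spurious echoes.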
4.2 Phase II: Car with Autonomous Navigation

This phase is about making the car autonomous, i.e. the car determines the road by itself and finds its line of motion. The following technique is performed step by step to achieve autonomous behaviour.

4.2.1 Algorithm

Extracting the colour range of the road in HSV colour space requires manual data collection in the beginning. The same portion of road is recorded at different times of day and in different weather conditions. Considering the variation in colour and in the intensity of light, a specific set of upper and lower bounds for the HSV values is generated. The steps below describe the full pipeline; a condensed sketch of the vision part is given after the list.
1) Define the data structures required for the algorithm, i.e. a 2D array of Scalar values containing the set of possible upper and lower threshold colours, and a tolerance for both the left and right side with a default value of 3.
2) Select the threshold values according to the conditions and convert the image frame into a binary image.
3) Reduce the region of interest from the full frame to a trapezium with its base touching the bottom. The region of interest is divided into three parts of height 50%, 30%, and 20%, starting from the top and moving towards the bottom. Steps 4 to 11 are applied to all three parts of the region of interest.
4) Find the contours in the region of interest.
5) Determine the largest contour containing the horizontally central part of the bottom of the chosen section of the region of interest.
6) Approximate the contour by a simpler polygonal curve. This curve (still a contour) represents the potential road.
7) Find the Hough lines on the probable road curve.
8) Ignore the lines with negative slope (lines going from right to left as we move upward) in the left half and the lines with positive slope (lines going from left to right as we move upward) in the right half.
9) Select the lines with the least positive x-intercept or y-intercept in the left and right halves as the left and right edge lines respectively.
10) If no such left or right edge line is obtained, decrement the corresponding tolerance by 1.
11) If both the left and right tolerance have become less than 1, change the threshold values for the road colour in HSV colour space.
12) If the three lines representing the edges in the three parts of the region of interest lie along the same straight line (check this using the angles they make), the road is concluded to be straight. If lines are obtained only in the middle and bottom parts, the road is turning. On encountering a turn, the region of interest is modified accordingly.
13) A line dividing the region between the determined left and right edges into two equal halves vertically gives the line of motion for the car.
14) While following the defined line of motion, use the ultrasonic sensor to detect obstacles.
15) This algorithm is combined with the obstacle avoidance from Phase I to implement the final, fully autonomous car.
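The following is a condensed sketch of the vision steps (2 to 9 and 13) for a single region of interest, assuming OpenCV 4.x and purely illustrative HSV bounds. The tolerance bookkeeping (steps 10 and 11), the three-part region of interest and the straight/turn test (step 12) are omitted, and the slope test is restated in image coordinates, where the y axis grows downward.

```python
import cv2
import numpy as np

# Placeholder HSV bounds for the road surface; the paper derives several such
# ranges from recordings at different times of day, which are not listed here.
LOWER_HSV = np.array([0, 0, 60])
UPPER_HSV = np.array([179, 60, 200])

def find_lane_midline(frame_bgr):
    """Return (left_edge, right_edge, mid_x) for one frame, or None."""
    h, w = frame_bgr.shape[:2]

    # Step 2: threshold the frame in HSV colour space to get a binary road mask.
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)

    # Step 3 (simplified to a single region): keep only a trapezium whose base
    # touches the bottom of the frame.
    roi = np.zeros_like(mask)
    trapezium = np.array([[(0, h), (w, h),
                           (int(0.75 * w), int(0.4 * h)),
                           (int(0.25 * w), int(0.4 * h))]], dtype=np.int32)
    cv2.fillPoly(roi, trapezium, 255)
    mask = cv2.bitwise_and(mask, roi)

    # Steps 4-6: take the largest contour as the candidate road region and
    # approximate it by a simpler polygon (OpenCV 4.x return signature).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    road = max(contours, key=cv2.contourArea)
    road = cv2.approxPolyDP(road, 0.01 * cv2.arcLength(road, True), True)

    # Step 7: run the probabilistic Hough transform on the polygon outline.
    outline = np.zeros_like(mask)
    cv2.polylines(outline, [road], True, 255, 2)
    lines = cv2.HoughLinesP(outline, 1, np.pi / 180, threshold=30,
                            minLineLength=40, maxLineGap=20)
    if lines is None:
        return None

    # Steps 8-9 (simplified): in image coordinates the left road edge has
    # negative slope and the right edge positive slope. Keep those candidates
    # and pick the innermost line on each side as the edge line.
    left, right = [], []
    for x1, y1, x2, y2 in lines[:, 0]:
        if x1 == x2:
            continue
        slope = (y2 - y1) / (x2 - x1)
        if max(x1, x2) < w // 2 and slope < 0:
            left.append((x1, y1, x2, y2))
        elif min(x1, x2) > w // 2 and slope > 0:
            right.append((x1, y1, x2, y2))
    if not left or not right:
        return None                      # step 10 would lower the tolerance here
    left_edge = max(left, key=lambda l: max(l[0], l[2]))
    right_edge = min(right, key=lambda l: min(l[0], l[2]))

    # Step 13: the vertical line halfway between the two edges is the line of
    # motion that the steering code tries to follow.
    mid_x = (max(left_edge[0], left_edge[2]) + min(right_edge[0], right_edge[2])) // 2
    return left_edge, right_edge, mid_x
```

The returned mid_x can then be compared with the horizontal centre of the frame to decide whether to steer left or right, while the ultrasonic loop from Phase I keeps watching for obstacles (steps 14 and 15).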