
Method of Analysis

Several systems work in conjunction with each other to control our
autonomous car. The car has the following core components:

1. Computer Vision.

2. Sensor Fusion.

3. Control.

Computer Vision:

Computer vision means using cameras to perceive the road. A human can observe
a road and handle a car with essentially just two eyes and a brain; an autonomous car can
likewise use camera images to find lane lines or track other vehicles on the road. Using the
images captured from its camera, the car will be able to detect the road lanes and, depending
upon the shape of the lane, take decisions. Lane detection through computer vision involves the following steps.

Figure 1: Block diagram of the proposed method


1. Input Frame & Smoothing Function:

First of all, the frame captured from the camera is passed through a smoothing filter. The
edges must be smoothed because images may contain many rough edges, which cause many
noisy edges to be detected, whereas we require only the lane edges. We will use a Gaussian
blur for this purpose.
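
As a rough sketch of this step, assuming OpenCV and a 5x5 kernel (a common starting value, not one the method fixes):

```python
import cv2

# Read a frame from the camera (loaded from disk here for illustration;
# the filename "frame.jpg" is a placeholder).
frame = cv2.imread("frame.jpg")

# Smooth the frame with a Gaussian filter. A larger kernel blurs more
# aggressively and suppresses more noisy edges, at the cost of softening
# the lane edges themselves.
blurred = cv2.GaussianBlur(frame, (5, 5), 0)
```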

2. ROI Selection:

Lane lines in the image appear to converge even though they are parallel. It becomes easier to
detect lane lines when this effect is eliminated. This can be achieved by taking a bird's eye view
of the image, after which the lane lines appear parallel to each other. So we will select four points
as our region of interest and then apply a perspective transform to obtain a bird's eye view of the
image.
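
A sketch of the warp with OpenCV; the frame size and the four source/destination points are illustrative values that would be tuned for the actual camera mounting:

```python
import cv2
import numpy as np

# Warp the smoothed frame to a bird's eye view.
frame = cv2.imread("frame.jpg")             # placeholder input
blurred = cv2.GaussianBlur(frame, (5, 5), 0)
h, w = blurred.shape[:2]

# Four points forming a trapezoid around the lane in the original view,
# and the rectangle they should map to in the bird's eye view.
src = np.float32([[580, 460], [700, 460], [1040, 680], [240, 680]])
dst = np.float32([[300, 0], [980, 0], [980, 720], [300, 720]])

M = cv2.getPerspectiveTransform(src, dst)   # maps src trapezoid to dst rectangle
birds_eye = cv2.warpPerspective(blurred, M, (w, h))
```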

3. Binary Image of Lane:

After obtaining a bird's eye view of the image, we have to determine which pixel coordinates
are actually candidates for lane pixels. We will apply color thresholding and edge
thresholding techniques to determine the lane pixel coordinates.

Color-Based Detection:

Road lanes are usually yellow or white in color. Color detection techniques involve the
extraction of yellow and white pixels from the image. Several color spaces, e.g. LAB, HLS,
and HSV, will be used here to extract lane pixels. This may result in the detection of some pixels
that are not part of the lane; such pixels will be removed on the basis of road feature detection.
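
A possible color-thresholding sketch in the HLS space (the threshold ranges are assumptions that normally need tuning per camera and lighting):

```python
import cv2
import numpy as np

birds_eye = cv2.imread("birds_eye.jpg")     # placeholder: warped frame
hls = cv2.cvtColor(birds_eye, cv2.COLOR_BGR2HLS)

# White paint: high lightness, any hue. Yellow paint: hue near 15-35
# (OpenCV hue runs 0-179) with moderate saturation.
white = cv2.inRange(hls, np.array([0, 200, 0]), np.array([255, 255, 255]))
yellow = cv2.inRange(hls, np.array([15, 30, 115]), np.array([35, 204, 255]))

# Binary image: lane-candidate pixels 1, everything else 0.
color_binary = ((white > 0) | (yellow > 0)).astype(np.uint8)
```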

Edge-Based Detection:

In an image, an edge can be defined as a set of adjoining pixel positions where an abrupt change
of intensity (gray or color) values occurs. Edges represent boundaries between objects and the
background. Since lane lines possess edges, they can be detected using edge detection
techniques such as Canny, Sobel, and Laplacian edge detection. Later, work will be done to
remove the unwanted edges.
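
A minimal Canny-based sketch (the 50/150 thresholds are assumed starting values):

```python
import cv2
import numpy as np

birds_eye = cv2.imread("birds_eye.jpg")     # placeholder: warped frame

# Canny works on intensity, so convert to grayscale first. The two
# thresholds set how strong an intensity change must be to count as an edge.
gray = cv2.cvtColor(birds_eye, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)

edge_binary = (edges > 0).astype(np.uint8)  # edge pixels 1, rest 0
```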

The result of both the color-based and the edge-based lane detection will be a binary image with
lane pixels set to 1 and the remaining pixels set to 0. The results of the two techniques will be
combined into a single binary image.
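
Continuing from the two sketches above, the combination is a single element-wise OR:

```python
import numpy as np

# A pixel counts as a lane candidate if either the color threshold or
# the edge detector marked it.
combined_binary = np.zeros_like(color_binary)
combined_binary[(color_binary == 1) | (edge_binary == 1)] = 1
```
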
4. 2nd Order Polynomial Fit:

After obtaining the binary image, the next step is to determine a mathematical model of the
left and right lane lines. A sliding window technique will be used here. A sliding window is a
rectangular region of fixed width and height that "slides" across an image to find out whether
the window contains an object of interest. An image histogram will be used to determine the
initial position of the sliding window. The window will then slide vertically across the image in
search of lane pixels. From the coordinates of the lane pixels, a second-order function of the
following form will be determined and fitted on the image:

x = Ay² + By + C
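
A compact sketch of the sliding-window search and fit with NumPy; the window count, margin, and minimum-pixel count are assumed tuning parameters:

```python
import numpy as np

def fit_lanes(binary, nwindows=9, margin=100, minpix=50):
    """Sliding-window search followed by a 2nd-order polynomial fit."""
    h, w = binary.shape

    # Histogram of the bottom half of the image: its two peaks give the
    # starting x positions of the left and right lane lines.
    histogram = np.sum(binary[h // 2:, :], axis=0)
    mid = w // 2
    bases = [np.argmax(histogram[:mid]), np.argmax(histogram[mid:]) + mid]

    nonzeroy, nonzerox = binary.nonzero()
    win_h = h // nwindows
    fits = []
    for base in bases:                       # left lane, then right lane
        current, lane_idx = base, []
        for win in range(nwindows):          # slide upward from the bottom
            y_lo, y_hi = h - (win + 1) * win_h, h - win * win_h
            good = ((nonzeroy >= y_lo) & (nonzeroy < y_hi) &
                    (nonzerox >= current - margin) &
                    (nonzerox < current + margin)).nonzero()[0]
            lane_idx.append(good)
            if len(good) > minpix:           # recenter on the found pixels
                current = int(nonzerox[good].mean())
        lane_idx = np.concatenate(lane_idx)
        # Fit x = A*y**2 + B*y + C through the collected lane pixels.
        fits.append(np.polyfit(nonzeroy[lane_idx], nonzerox[lane_idx], 2))
    return fits                              # [left (A,B,C), right (A,B,C)]
```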

Figure 2: Lane Detection. (a) Original Image; (b) Bird's Eye View; (c) Binary Image;
(d) Sliding Window Technique; (e) 2nd Order Polynomial Fit; (f) Image with Lane Marked.


Sensor Fusion:

Sensor fusion builds a comprehensive understanding of the car's environment. Self-driving cars
have many different sensors; a typical self-driving car sensor suite includes cameras, radar,
lidar, and ultrasonic sensors. Our self-driving car will be equipped with ultrasonic sensors
and a camera to understand its environment. The ultrasonic sensors will be used for obstacle
detection; their advantage is that they are quite economical. Most ultrasonic sensors have a
sensing distance on the order of 10 m, which makes them quite useful for detecting obstacles
in the close vicinity of the car. The camera will be used for the detection of other vehicles,
pedestrians, road signs, traffic signals, and so on.
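
As an illustration of the obstacle-detection side, here is a sketch of reading an HC-SR04 ultrasonic ranger on a Raspberry Pi; the sensor model, the pin numbers, and the 50 cm threshold are hardware assumptions, not part of the proposed method:

```python
import time
import RPi.GPIO as GPIO

# BCM pin numbers for the sensor's trigger and echo lines (assumed wiring).
TRIG, ECHO = 23, 24
GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def read_distance_cm():
    # A 10-microsecond pulse on TRIG starts one measurement.
    GPIO.output(TRIG, True)
    time.sleep(0.00001)
    GPIO.output(TRIG, False)
    # ECHO stays high for the ultrasonic pulse's round-trip time.
    start = end = time.time()
    while GPIO.input(ECHO) == 0:
        start = time.time()
    while GPIO.input(ECHO) == 1:
        end = time.time()
    # Speed of sound is ~34300 cm/s; halve for the one-way distance.
    return (end - start) * 34300 / 2

if read_distance_cm() < 50:        # assumed 50 cm braking threshold
    print("obstacle ahead: brake")
```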

Control:
Control is the final step. Once we have the lane information and the sensor data, the vehicle
needs to turn the steering wheel and apply the throttle or the brake. The decisions drawn from
the lane information will be based upon the curvature of the left and right lanes. The curvature
of a lane can be determined from the radius of the curve:

K = 1/R

where K is the curvature of the lane and R is its radius.

All the decisions of the car, including steering angle and speed control, will be based upon the
values of K and R. However, to apply the brakes when the car confronts an obstacle, it will take
help from the sensor data.
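
A sketch of computing K from the second-order lane fit, using the standard radius-of-curvature formula for x = Ay² + By + C; the example coefficients and the decision thresholds are assumptions:

```python
def curvature_radius(fit, y):
    """Radius of curvature of x = A*y**2 + B*y + C at image row y:
    R = (1 + (2*A*y + B)**2) ** 1.5 / abs(2*A), and K = 1/R."""
    A, B, _ = fit
    return (1 + (2 * A * y + B) ** 2) ** 1.5 / abs(2 * A)

# Evaluate at the bottom row of an assumed 720-pixel-high image, using
# an illustrative (A, B, C) fit; real values come from the lane fit step.
left_fit = (2e-4, -0.35, 450.0)
R = curvature_radius(left_fit, y=719)
K = 1 / R
print(f"R = {R:.0f} px, K = {K:.6f} 1/px")

# Illustrative (assumed) policy: a small R means a tight curve, so the
# car slows down and applies a larger steering correction.
if R < 500:
    print("tight curve: reduce speed, larger steering angle")
else:
    print("gentle curve: hold speed, small steering correction")
```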
