Shambhavi Report
SUBMITTED BY
SHEETAL PHULARI
(3GN20IS042)
Ms. SANGEETHA
M/S CONTRIVER®
# 609, 1st Floor, Sinchana Clinic, Panchamantra Rd, Kuvempu Nagara,
Mysuru, Karnataka 570023
2023 - 2024
CONTRIVER®
# 609, 1st Floor, Sinchana Clinic, Panchamantra Rd, Kuvempu Nagara,
Mysuru, Karnataka 570023
TRAINING CERTIFICATE
This is to certify that Ms. SHEETAL PHULARI (3GN20IS042), a bonafide student of College of
Degree, has completed the internship in partial fulfillment for the award of the “Training Certificate”
in the Department of Information Science and Engineering of CONTRIVER, Mysore, during the year
2023-2024. It is certified that she has undergone internship during the period from 16/08/2023 to
30/09/2023 on all working days. Corrections/suggestions indicated for internal validation have been
incorporated in the report deposited with the guide and trainer. The training report has been
approved as it satisfies the organizational requirements in respect of the internship training prescribed
for the said qualification.
Date:
Place: Mysore
- SHEETAL PHULARI
RESUME
SHEETAL PHULARI
INFORMATION SCIENCE AND ENGINEERING
CONTACT INFORMATION
ADDRESS:
D/O Shivkumar Phulari,
KIADB Housing Area, Beside BSNL Tower,
Naubad, Bidar, Karnataka - 585402
EMAIL ID: sheetalphulari07@gmail.com
CONTACT NO: +91 7892958001
OBJECTIVE
To work in a very challenging and competitive job environment with an appraisal and growth
combination, where I would be able to significantly contribute to the organization’s requirements
while continuously enhancing my skill-set.
ACADEMIC INFORMATION
EDUCATION QUALIFICATIONS:
INTERNSHIP: Contriver
COMPUTER SKILLS
Packages : MS Office, MS PowerPoint, MS Excel
Programming Languages : C, Python, C++, HTML, CSS
PROJECT DETAILS
PROJECT: Road Lane Line Detection
Abstract: Road lane detection is a multifeature detection problem that has become a real challenge for
computer vision and machine learning techniques.
PERSONAL STRENGTH
PERSONAL PROFILE
DOB : 07-01-2003
Nationality : Indian
DECLARATION
I hereby declare that all the information furnished above is correct and true to the best of my knowledge and belief.
DATE:
Place: Mysuru
Yours Sincerely
(SHEETAL PHULARI)
CONTENTS
CHAPTER 1
INTRODUCTION........................................................................................ 2-10
1.1 INTRODUCTION.........................................................................................2
1.2 MOTIVATION............................................................................................3
1.3 DIGITAL IMAGE PROCESSING.............................................................. 4
1.3.1 STEPS IN IMAGE PROCESSING......................................................5
1.3.2 IMAGE IN MATRIX REPRESENTATION...................................... 6
1.3.3 TYPES OF AN IMAGE...................................................................... 6
1.4 SELF DRIVING CARS................................................................................7
1.5 LANE DETECTION BY VEHICLE............................................................ 8
1.5.1 DISTORTION CORRECTION........................................................... 8
1.5.2 CREATE A BINARY IMAGE...........................................................9
1.6 PROBLEM STATEMENT........................................................................10
CHAPTER 2
LITERATURE SURVEY.......................................................................... 11-26
2.1 EDGE DETECTION................................................................................. 11
2.2 FILTER OUT NOISE............................................................................... 11
2.2.1 CONVOLUTION.............................................................................. 11
2.2.2 CONVOLUTION OPERATION.......................................................12
2.3 SAMPLE I/P O/P OF CANNY EDGE DETECTION............................... 17
2.4 HOUGH TRANSFORM SPACE.............................................................. 18
2.4.1 THEORY OF HOUGH TRANSFORM SPACE.............................. 19
2.5 IMPLEMENTATION OF HOUGH TRANSFORM SPACE...................20
2.6 VARIATIONS AND EXTENSIONS....................................................... 21
2.6.1 GRADIENT DIRECTION................................................................. 21
2.6.2 KERNEL BASED HOUGH TRANSFORM..................................... 21
2.6.3 3-D KERNEL BASED HOUGH TRANSFORM.............................. 21
2.7 HOUGH TRANSFORM OF CURVE AND ITS GENERALIZATION.... 22
2.8 CIRCLE DETECTION PROCESS............................................................23
2.9 DETECTION OF 3-D OBJECTS.............................................................. 23
2.10 USING WEIGHTED FEATURES.......................................................... 24
2.11 CAREFULLY CHOSEN PARAMETER SPACE.................................. 24
2.12 EFFICIENT ELLIPSE DETECTION ALGORITHM............................. 25
2.13 EXISTING SYSTEMS.............................................................................25
2.14 LIMITATIONS OF EXISTING SYSTEMS.......................................... 25
2.15 PROPOSED SYSTEM.............................................................................26
CHAPTER 3
SYSTEM REQUIREMENT.................................................................... 27-29
3.1 SYSTEM CONFIGURATION................................................................. 27
3.1.1 SOFTWARE CONFIGURATION...................................................27
3.1.2 HARDWARE CONFIGURATION....................................................29
CHAPTER 4
SYSTEM DESIGN...................................................................................30-36
4.1 PROPOSED SYSTEM OVERVIEW...................................................... 30
4.1.1 IMAGE PROCESSING METHODOLOGY.................................. 31
4.2 ARCHITECTURE...................................................................................33
4.3 CANNY EDGE DETECTION ARCHITECTURE................................. 35
4.4 PROJECT MODULES.............................................................................36
CHAPTER 5
IMPLEMENTATION............................................................................. 41-52
5.1 PSEUDO CODE........................................................................................41
CHAPTER 6
RESULTS AND DISCUSSIONS.............................................................53-58
CONCLUSION...............................................................................................59
FUTURE SCOPE............................................................................................59
BIBLIOGRAPHY...........................................................................................60
LIST OF FIGURES
Figure 6.1 Solid white curve..........................................................................................................53
Figure 6.2 Solid white curve annotated........................................................................................53
Figure 6.3 Solid white right...........................................................................................................54
Figure 6.4 Solid white right annotated..........................................................................................54
Figure 6.5 Solid yellow curve........................................................................................................55
Figure 6.6 Solid yellow curve annotated......................................................................................55
Figure 6.7 Solid yellow curve 2....................................................................................................56
Figure 6.8 Solid yellow curve 2 annotated...................................................................................56
Figure 6.9 Solid yellow left...........................................................................................................57
Figure 6.10 Solid yellow left annotated........................................................................................57
Figure 6.11 White carlane switch..................................................................................................58
Figure 6.12 White carlane switch annotated.................................................................................58
ABSTRACT
Lane detection is a challenging problem. It has attracted the attention of the computer vision
community for several decades. Essentially, lane detection is a multifeature detection problem that
has become a real challenge for computer vision and machine learning techniques. Although many
machine learning methods are used for lane detection, they are mainly used for classification rather
than feature design. Modern machine learning methods can, however, be used to identify features that
are rich in recognition value and have achieved success in feature detection tests. Even so, these methods
have not been fully exploited to improve the efficiency and accuracy of lane detection. In this report, we
propose a new method to address this. We introduce a new method of preprocessing and ROI selection.
The main goal is to use the HSV color transformation to extract the white features and add preliminary
edge feature detection in the preprocessing stage and then select ROI on the basis of the proposed
preprocessing. This new preprocessing method is used to detect the lane. By using the standard KITTI
road database to evaluate the proposed method, the results obtained are superior to the existing
preprocessing and ROI selection techniques.
Key words: Lane detection, Computer vision, Intelligent vehicles, Hough transform, Visual guides
Chapter 1
INTRODUCTION
1.1 INTRODUCTION
Road lane line detection is a crucial component of advanced driver assistance systems (ADAS) and
autonomous vehicles, as it plays a fundamental role in ensuring safe and precise navigation on the road.
Machine learning techniques have revolutionized the field of lane line detection, offering robust and
real-time solutions for identifying and tracking road lanes. Traditionally, lane line detection relied on
rule-based methods and image processing techniques, which often struggled to adapt to varying road
conditions and complex scenarios. However, with the advent of machine learning, particularly deep
learning models, the accuracy and reliability of lane line detection have greatly improved.
With the rapid development of society, automobiles have become one of the main means of
transportation for people. On narrow roads, there are more and more vehicles of all kinds. As more and more
vehicles are driving on the road, the number of victims of car accidents is increasing every year. How
to drive safely under the condition of numerous vehicles and narrow roads has become the focus of
attention. Advanced driver assistance systems, which include Lane Departure Warning (LDW), Lane
Keeping Assist, and Adaptive Cruise Control (ACC), can help people analyse the current driving
environment and provide appropriate feedback for safe driving, or alert the driver in dangerous
circumstances. This kind of auxiliary driving system is expected to become increasingly sophisticated.
However, the bottleneck in the development of such systems is that the road traffic environment is
difficult to predict. Investigations show that in a complex traffic environment where vehicles are numerous
and speeds are high, the probability of accidents is much greater than usual. In such a complex traffic
situation, road colour extraction, texture detection, road boundary, and lane marking are the main
perceptual clues of human driving. Lane detection is a hot topic in the field of machine learning and
computer vision and has been applied in intelligent vehicle systems.
1.2 MOTIVATION
Many researchers have worked, and are working, on creating and developing techniques in
intelligent transportation systems with advanced driver assistance systems that can ensure safety on
roads and in congested traffic conditions. Road accidents are among the main causes of sudden death
in the world. Even though many good and advanced techniques already exist, there is still room to
make them better, and improvements are possible from several angles. Road lane detection and object
detection are among the important ways in which road safety can be improved. Vehicle crashes
remain the leading cause of accidental death and
injuries in Malaysia and Asian countries, claiming tens of thousands of lives and injuring millions of
people each year. Most of these transportation deaths and injuries occur on the nation’s highways.
The United Nations has ranked Malaysia 30th among countries with the highest number of fatal road
accidents, registering an average of 4.5 deaths per 10,000 registered vehicles.
This is not limited to one country: most traffic-congested countries, such as the U.S., India, and other
Asian countries, record similarly large numbers of deaths and injuries.
1.3 DIGITAL IMAGE PROCESSING
Image processing is broadly divided into analogue and digital image processing. Analogue image
processing can be used for hard copies like printouts and photographs. Image analysts use various
fundamentals of interpretation while using these visual techniques. Digital image processing
techniques help in the manipulation of digital images by using computers. The three general phases
that all types of data have to undergo while using the digital technique are pre-processing,
enhancement and display, and information extraction.
What is an Image?
An image is defined as a two-dimensional function, F(x, y), where x and y are spatial coordinates,
and the amplitude of F at any pair of coordinates (x, y) is called the intensity of the image at that
point. When x, y, and the amplitude values of F are finite, we call it a digital image. In other words, an
image can be defined by a two-dimensional array specifically arranged in rows and columns. A digital
image is composed of a finite number of elements, each of which has a particular value at
a particular location. These elements are referred to as picture elements, image elements, and pixels.
Pixel is the term most widely used to denote the elements of a digital image.
In matrix form, a digital image can be written as

    F(x, y) = [ f(0, 0)      f(0, 1)      ...   f(0, N-1)
                f(1, 0)      f(1, 1)      ...   f(1, N-1)
                ...          ...          ...   ...
                f(M-1, 0)    f(M-1, 1)    ...   f(M-1, N-1) ]

The right side of this equation is a digital image by definition. Every element of this matrix is called an
image element, picture element, or pixel.
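For illustration, this matrix view can be inspected directly in Python with OpenCV (the file name road.jpg is a placeholder, not part of the report):

    import cv2

    # Load an image as a matrix (a NumPy array; OpenCV stores channels in BGR order)
    img = cv2.imread('road.jpg')
    print(img.shape)    # (rows M, columns N, number of channels)
    print(img[0, 0])    # the pixel at row 0, column 0: one value per channel

    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    print(gray[0, 0])   # a single intensity value in the range 0-255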
Black and White Image– An image which consists of only black and white colours is called a black
and white image.
8 Bit Colour Format– This is the most common image format. It has 256 different shades of colour
and is commonly known as a grayscale image. In this format, 0 stands for black, 255 stands for
white, and 127 stands for grey.
16 Bit Colour Format– This is a colour image format. It has 65,536 different colours and is also
known as the high colour format. In this format the distribution of colour is not the same as in a
grayscale image.
A 16-bit pixel is actually divided into three further parts, which are red, green, and blue: the
famous RGB format.
Level 2: An ADAS that can steer and either brake or accelerate simultaneously while the driver
remains fully aware behind the wheel and continues to act as the driver.
Level 3: An automated driving system (ADS) can perform all driving tasks under certain
circumstances, such as parking the car. In these circumstances, the human driver must be ready to re-
take control and is still required to be the main driver of the vehicle.
Level 4: An ADS is able to perform all driving tasks and monitor the driving environment in certain
circumstances. In those circumstances, the ADS is reliable enough that the human driver needn't pay
attention.
Level 5: The vehicle's ADS acts as a virtual chauffeur and does all the driving in all circumstances.
The human occupants are passengers and are never expected to drive the vehicle.
Chapter 2
LITERATURE SURVEY
2.1 EDGE DETECTION
The Canny edge detector is designed around three criteria. The first criterion is a low error rate: the
detector should filter out unwanted information while preserving the useful information. The second
criterion is to keep the variation between the original image and the processed image as low as
possible. The third criterion is to remove multiple responses to a single edge.
Based on these criteria, the Canny edge detector first smooths the image to eliminate noise. It then
finds the image gradient to highlight regions with high spatial derivatives. The algorithm then tracks
along these regions and suppresses any pixel that is not at the maximum, using non-maximum
suppression. The gradient array is further reduced by hysteresis to remove streaking and to thin
the edges.
Figure 3.c: Original Image on top and Gaussian filtered image at bottom
Step 2: Sobel Operator
After smoothing the image and eliminating the noise, the next step is to find the edge strength by
taking the gradient of the image. The Sobel operator performs a 2-D spatial gradient measurement
on an image. Then, the approximate absolute gradient magnitude (edge strength) at each point can
be found by the formula below, which is simpler to calculate than the exact gradient magnitude
|G| = sqrt(Gx^2 + Gy^2).
The approximate gradient magnitude is given below:

|G| = |Gx| + |Gy|
The Sobel operator uses a pair of 3x3 convolution masks, one estimating the gradient
in the x-direction (columns) and the other estimating the gradient in the y-direction (rows).
Sobel x and y masks are shown below; they estimate the gradient in the x-direction and the
y-direction respectively:

    Gx:  -1  0  +1        Gy:  +1  +2  +1
         -2  0  +2              0   0   0
         -1  0  +1             -1  -2  -1

The edge direction at a centre pixel "a" is then described in terms of its surrounding pixels:

    x  x  x
    x  a  x
    x  x  x
By looking at the center pixel "a", there are four possible directions when describing the surrounding
pixels: 0 degrees (in the horizontal direction), 45 degrees (along the positive diagonal), 90 degrees
(in the vertical direction), or 135 degrees (along the negative diagonal); the 180-degree region is just a
mirror of the 0-degree region. Therefore, any edge direction calculated is rounded to the
closest of these angles.
So, any edge direction falling within the A and E (0 to 22.5 & 157.5 to 180 degrees) is set to 0 degrees.
Any edge direction falling in the D (22.5 to 67.5 degrees) is set to 45 degrees. Any edge direction
falling in the C (67.5 to 112.5 degrees) is set to 90 degrees. And finally, any edge direction falling
within the B (112.5 to 157.5 degrees) is set to 135 degrees.
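A minimal sketch of the gradient and direction computation described above (our own illustration using OpenCV, with road.jpg as a placeholder for an already-smoothed input; not the report's code):

    import cv2
    import numpy as np

    gray = cv2.imread('road.jpg', cv2.IMREAD_GRAYSCALE)  # placeholder input
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)      # gradient in the x-direction (columns)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)      # gradient in the y-direction (rows)

    magnitude = np.abs(gx) + np.abs(gy)                  # approximate |G| = |Gx| + |Gy|

    angle = np.degrees(np.arctan2(gy, gx)) % 180         # fold the 180-degree mirror region onto 0
    direction = (np.round(angle / 45) % 4) * 45          # round to 0, 45, 90, or 135 degrees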
Pixels that are not considered part of an edge are set to a value of 0. We wish to mark points along the
curve where the magnitude is largest. We can do this by looking for a maximum along a slice normal
to the curve (non-maximum suppression). These points should form a curve. There are then two
algorithmic issues: at which point is the maximum, and where is the next one?
Step 6: Hysteresis
Finally, hysteresis is used as a means of eliminating streaking. Streaking is the breaking up of an edge
contour caused by the operator output fluctuating above and below the threshold. If a single threshold,
T1 is applied to an image, and an edge has an average strength equal to T1, then due to noise, there
will be instances where the edge dips below the threshold. Equally it will also extend above the
threshold, making an edge look like a dashed line. To avoid this, hysteresis uses two thresholds, a high
threshold T1 and a low threshold T2. Any pixel in the image that has a value greater than T1 is
presumed to be an edge pixel, and is marked as such immediately. Then, any pixels that are connected
to this edge pixel and that have a value greater than T2 are also selected as edge pixels. If you think of
following an edge, you need a gradient above T1 to start, but you don't stop till you hit a gradient below T2.
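As an illustration of this double-threshold logic (a sketch assuming SciPy's ndimage module for connected-component labelling; OpenCV's cv2.Canny performs this step internally):

    import numpy as np
    from scipy import ndimage

    def hysteresis(magnitude, t_low, t_high):
        # Strong pixels exceed the high threshold; weak pixels exceed only the low one
        strong = magnitude > t_high
        weak = magnitude > t_low
        # Label connected regions of weak pixels, then keep only the regions
        # that contain at least one strong pixel
        labels, n = ndimage.label(weak)
        keep = np.zeros(n + 1, dtype=bool)
        keep[np.unique(labels[strong])] = True
        keep[0] = False  # label 0 is the background
        return keep[labels]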
is explicitly constructed by the algorithm for computing the Hough transform. The classical Hough
transform was concerned with the identification of lines in the image, but later the Hough transform
was extended to identifying positions of arbitrary shapes, most commonly circles or ellipses. The
Hough transform as it is universally used today was invented by Richard Duda and Peter Hart in 1972,
who called it a "generalized Hough transform" after the related 1962 patent of Paul Hough. The
transform was popularized in the computer vision community by Dana H. Ballard through a 1981
journal article titled "Generalizing the Hough transform to detect arbitrary shapes".
It is therefore possible to associate with each line of the image a pair (r, θ). The (r, θ) plane is
sometimes referred to as Hough space for the set of straight lines in two dimensions.
The linear Hough transform algorithm uses a two-dimensional array, called an accumulator, to
detect the existence of a line described by r = x cos θ + y sin θ. The dimension of the
accumulator equals the number of unknown parameters, i.e., two, considering quantized values of
r and θ in the pair (r, θ). For each pixel at (x, y) and its neighborhood, the Hough transform
algorithm determines if there is enough evidence of a straight line at that pixel. If so, it will
calculate the parameters (r, θ) of that line, and then look for the accumulator's bin that the
parameters fall into, and increment the value of that bin. By finding the bins with the highest values,
typically by looking for local maxima in the accumulator space, the most likely lines can be
extracted, and their (approximate) geometric definitions read off. (Shapiro and Stockman, 304)
The simplest way of finding these peaks is by applying some form of threshold, but other
techniques may yield better results in different circumstances – determining which lines are found
as well as how many.
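The voting procedure described above can be sketched as follows (an illustrative NumPy implementation assuming a binary edge image; not the report's code):

    import numpy as np

    def hough_accumulator(edges, n_theta=180):
        # edges: binary image whose non-zero pixels are edge points
        h, w = edges.shape
        diag = int(np.ceil(np.hypot(h, w)))           # maximum possible |r|
        thetas = np.deg2rad(np.arange(n_theta))       # quantized theta values in [0, pi)
        accumulator = np.zeros((2 * diag + 1, n_theta), dtype=np.int64)
        ys, xs = np.nonzero(edges)
        for x, y in zip(xs, ys):
            # r = x cos(theta) + y sin(theta), shifted by diag so indices are non-negative
            rs = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
            accumulator[rs, np.arange(n_theta)] += 1
        return accumulator, thetas, diag

Bins with the highest counts (local maxima in the accumulator) correspond to the most likely lines.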
Fernandes and Oliveira suggested an improved voting scheme for the Hough transform that allows
a software implementation to achieve real-time performance even on relatively large images (e.g.,
1280×960). The Kernel-based Hough transform uses the same (r, θ) parameterization proposed
by Duda and Hart but operates on clusters of approximately collinear pixels. For each cluster, votes
are cast using an oriented Elliptical-Gaussian kernel that models the uncertainty associated with the
best-fitting line with respect to the corresponding cluster. The approach not only significantly
improves the performance of the voting scheme, but also produces a much cleaner accumulator and
makes the transform more robust to the detection of spurious lines.
Operating System lies in the category of system software. It basically manages all the
resources of the computer. An operating system acts as an interface between the software and
different parts of the computer or the computer hardware. The operating system is designed in
such a way that it can manage the overall resources and operations of the computer.
We have used the Windows 11 OS for our project.
Processor: To handle vast amounts of image data and perform sophisticated machine learning
algorithms, a fast and powerful CPU is required. A processor from the Intel Core i5 or i7 series, or
a similar one, would work.
Memory (RAM): A considerable amount of image data and machine learning models must be
stored and processed, hence a sufficient amount of RAM is needed. Although 16GB or more may
be needed for apps that demand additional memory, 8GB is the least amount that is advised.
Storage: To store video and image data, machine learning models, and other software
components, there must be enough storage space. Fast data access and processing require an SSD
with at least 256GB of storage space.
Digital image processing consists of the manipulation of images using digital computers. Its use
has been increasing exponentially in recent decades. Its applications range from medicine to
entertainment, by way of geological processing and remote sensing. Multimedia systems, one of
the pillars of the modern information society, rely heavily on digital image processing.
4.2 ARCHITECTURE
The system architecture of road detection from a single image using computer vision consists mainly
of the image, which is sent to a model, and the output, which consists of the marked detections of a
road. The architecture starts by selecting a required image captured by the driving camera
of a self-driving car. This image should contain all the details along with the road that is to be detected
by the computer, and it is then sent to the model. The algorithms are combined to form an edge
detection model based on Canny's process. The selected image is first sent to the Canny model, and
edges are found in the image. This edge-detected image is then sent into the road line detection model,
which is built using the Hough transform algorithm. The Hough transform algorithm normalizes the
input image and then varies the value of θ in the normalized trigonometric line equation, and thus
detects the required road lines in the image. Only the image with edges is sent into the Hough
transform algorithm, because an image with more noise takes far longer to process than an image
consisting only of edges.
Image showing the architecture of the project: Road Detection from an Image using Computer Vision
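A sketch of how this architecture might look with OpenCV is shown below (the file name, thresholds, and Hough parameters are illustrative assumptions, not values taken from the report):

    import cv2
    import numpy as np

    img = cv2.imread('road.jpg')                         # placeholder input image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)             # smoothing before edge detection
    edges = cv2.Canny(blur, 50, 150)                     # Canny edge detection model

    # The Hough transform is run on the edge image only, as described above
    lines = cv2.HoughLinesP(edges, rho=2, theta=np.pi / 180, threshold=15,
                            minLineLength=40, maxLineGap=20)
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 3)  # mark detected lines in red
    cv2.imwrite('road_annotated.jpg', img)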
This process consists mainly of four functions. The image can be of any type that contains changes
in intensity, since the change in the intensity of pixels in an image defines its edges. Canny edge
detection mainly focuses on changes in intensity in an image: a change in pixel intensity from high
to low is known as an edge. At first, the colour image is converted into a black and white image and
passed to a smoothing technique. We use Gaussian blur as the smoothing technique, followed by
gradient calculation, non-maximum suppression, and double thresholding. Edge detection mainly
uses derivatives of pixel intensities in an image and then reduces the complexity of the image. An
edge is detected where the intensity changes from high to low, which corresponds to white shades
turning into black shades in a grayscale image. A grayscale image is used because it is easier to
process than a coloured image. A gradient calculation step computes the gradient through Sobel
filters. Non-maximum suppression is a process of thinning the edges that should appear in the
required output image. Then double thresholding is done to intensify the strong pixels of the output
image and to suppress the intensity of the weaker pixels.
Road Lane Line Detection from a single image using Computer Vision consists of image insertion,
model building and then testing. The model evaluation is done manually by the developer. We
divided the complete project into mainly four modules. They are as follows:
Module 1: Selecting the appropriate testing image.
Module 2: Preprocessing the selected image.
Module 3: Edge Detection Implementation.
It is the most important process in the project. A single image from the testing dataset is taken in such
a way that it reaches our implementation of the model. Each model we implement takes the resultant
image as input and processes it further to produce an output. This selection of the image is important
because the implementation of each model requires an image input for processing.
The next step in the process is edge detection, which is the main part of the program and is required
to detect the edges in the image irrespective of the details present in the image. We use the Canny
edge detection algorithm to implement edge detection because the other processes that are also used
to find edges in an image produce more detailed (noisier) edge images compared to the Canny edge
detection technique. The Canny edge detection technique mainly consists of four processes.
We can analytically describe a line segment in a number of forms. However, a convenient equation
for describing a set of lines uses the parametric or normal notation:

x cos θ + y sin θ = r
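For drawing, a detected (r, θ) pair can be converted back into two distant endpoints on the corresponding line; the helper below is a small illustration, not part of the report's code:

    import numpy as np

    def line_endpoints(r, theta, length=1000):
        # The point on the line closest to the origin
        x0, y0 = r * np.cos(theta), r * np.sin(theta)
        # The direction along the line is perpendicular to its normal
        dx, dy = -np.sin(theta), np.cos(theta)
        p1 = (int(x0 + length * dx), int(y0 + length * dy))
        p2 = (int(x0 - length * dx), int(y0 - length * dy))
        return p1, p2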
Module 5: Evaluating the output
Evaluation of the output is done through a confusion matrix and accuracy metrics when the testing
dataset is passed. This forms a confusion matrix, from which accuracy and precision scores can be
noted. This process is known as evaluation. Thus, the accuracy and precision of the model are noted.
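Such an evaluation can be sketched with scikit-learn (the label arrays below are placeholders; the report does not list its actual predictions):

    from sklearn.metrics import confusion_matrix, accuracy_score, precision_score

    # Placeholder ground-truth and predicted labels (1 = lane, 0 = not lane)
    y_true = [1, 0, 1, 1, 0, 1, 0, 0]
    y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

    print(confusion_matrix(y_true, y_pred))  # rows: actual classes, columns: predicted classes
    print(accuracy_score(y_true, y_pred))    # fraction of correct predictions
    print(precision_score(y_true, y_pred))   # TP / (TP + FP)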
Chapter 5
IMPLEMENTATION
5.1 PSEUDO-CODE
1. grayscale(img):
- This function takes an input image `img` and converts it to grayscale using OpenCV's
`cv2.cvtColor` function. Grayscale images have only one channel, representing the intensity of each
pixel.
2. canny(img, low_threshold, high_threshold):
- This function applies the Canny edge detection algorithm to the input image `img` with
specified low and high threshold values. It detects edges in the image, highlighting sudden changes
in intensity.
3. gaussian_blur(img, kernel_size):
- Applies Gaussian blur to the input image `img` using a specified kernel size. Gaussian blurring
helps reduce noise in the image, which is important for edge detection.
4. region_of_interest(img, vertices):
- Masks the input image `img` to keep only the region defined by the polygon specified by
`vertices`. This function is used to focus on a specific region of interest, typically the area where lane
lines are expected.
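A possible body for this helper, consistent with the description above (the report does not show the actual implementation):

    def region_of_interest(img, vertices):
        # vertices: an int32 NumPy array of polygon points, e.g. np.array([[(x1, y1), ...]])
        mask = np.zeros_like(img)
        # White fill for a single-channel image, or one 255 per channel otherwise
        fill_color = 255 if len(img.shape) == 2 else (255,) * img.shape[2]
        cv2.fillPoly(mask, vertices, fill_color)
        # Keep only the pixels of the image that fall inside the polygon
        return cv2.bitwise_and(img, mask)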
5. draw_lines(img, lines, color, thickness):
- This function is used to draw lines on an image. It takes a list of lines, where each line is
represented as a pair of points (x1, y1) and (x2, y2), and draws them on the input image `img`. The
lines are drawn in the specified `color` with the given `thickness`.
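Likewise, a possible body for draw_lines, matching the signature described above (an assumed sketch, not the report's code):

    def draw_lines(img, lines, color=(255, 0, 0), thickness=2):
        # Each element of `lines` is a nested array holding (x1, y1, x2, y2)
        if lines is None:
            return
        for line in lines:
            for x1, y1, x2, y2 in line:
                cv2.line(img, (x1, y1), (x2, y2), color, thickness)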
6. Main script:
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
from moviepy.editor import VideoFileClip
import math
import os

low_threshold = 50
high_threshold = 150
rho = 2
def grayscale(img):
    """Applies the Grayscale transform
    This will return an image with only one color channel
    but NOTE: to see the returned image as grayscale you
    should call plt.imshow(gray, cmap='gray')"""
    return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Run linear regression to find best fit line for right and left lane lines
# Right lane lines
right_lines_x = []
right_lines_y = []
for line in right_lines:  # right_lines: Hough segments classified as right-lane (assumed from context)
    x1, y1, x2, y2 = line[0]
    right_lines_x.append(x1)
    right_lines_x.append(x2)
    right_lines_y.append(y1)
    right_lines_y.append(y2)

if len(right_lines_x) > 0:
    right_m, right_b = np.polyfit(right_lines_x, right_lines_y, 1)  # y = m*x + b
else:
    right_m, right_b = 1, 1
    draw_right = False

# Left lane lines
left_lines_x = []
left_lines_y = []
for line in left_lines:  # left_lines: segments classified as left-lane (assumed from context)
    x1, y1, x2, y2 = line[0]
    left_lines_x.append(x1)
    left_lines_x.append(x2)
    left_lines_y.append(y1)
    left_lines_y.append(y2)

if len(left_lines_x) > 0:
    left_m, left_b = np.polyfit(left_lines_x, left_lines_y, 1)  # y = m*x + b
else:
    left_m, left_b = 1, 1
    draw_left = False

# Find 2 end points for right and left lines, used for drawing the line
# y = m*x + b --> x = (y - b)/m
y1 = img.shape[0]
y2 = img.shape[0] * (1 - trap_height)  # img and trap_height come from the enclosing function
"""
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len,
maxLineGap=max_line_gap)
line_img = np.zeros((*img.shape, 3), dtype=np.uint8) # 3-channel RGB image
draw_lines(line_img, lines) return line_img
initial_img * α + img * β + λ
NOTE: initial_img and img must be the same shape!
"""
return cv2.addWeighted(initial_img, α, img, β, λ)
def filter_colors(image):
    """
    Filter the image to include only yellow and white pixels
    """
    # Filter white pixels
    white_threshold = 200  # 130
    lower_white = np.array([white_threshold, white_threshold, white_threshold])
    upper_white = np.array([255, 255, 255])
    white_mask = cv2.inRange(image, lower_white, upper_white)
    white_image = cv2.bitwise_and(image, image, mask=white_mask)
    # (the yellow-pixel filter and the combination into image2 are elided in the original)
    return image2
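The elided middle of filter_colors might look like the following hypothetical completion, placed before the return statement; the abstract mentions an HSV colour transformation, and the threshold values here are assumptions:

    # Hypothetical completion: filter yellow pixels in HSV space
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    lower_yellow = np.array([20, 100, 100])   # assumed hue/saturation/value bounds
    upper_yellow = np.array([30, 255, 255])
    yellow_mask = cv2.inRange(hsv, lower_yellow, upper_yellow)
    yellow_image = cv2.bitwise_and(image, image, mask=yellow_mask)

    # Combine the white-filtered and yellow-filtered images
    image2 = cv2.addWeighted(white_image, 1., yellow_image, 1., 0.)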
def annotate_image_array(image_in):
    """ Given an image Numpy array, return the annotated image as a Numpy array """
    # Only keep white and yellow pixels in the image, all other pixels become black
    image = filter_colors(image_in)
    # (the remaining annotation steps are elided in the original)
    return annotated_image
input_file = options.input_file
output_file = options.output_file
image_only = options.image_only

if image_only:
    annotate_image(input_file, output_file)
else:
    annotate_video(input_file, output_file)
This image contains a white curved lane which will later be processed to annotate the lanes.
This image is the processed version of Figure 6.1, in which the road lanes are marked with red lines.
This image is the processed version of Figure 6.3, in which the road lanes are marked with red lines.
This image contains a yellow curved lane which will later be processed to annotate the lanes.
This image contains a yellow curved lane which will later be processed to annotate the lanes. This is
a second example of yellow lanes.
This image contains a yellow straight lane which will later be processed to annotate the lanes.
CONCLUSION
In conclusion, road lane line detection using machine learning represents a significant advancement in
the field of computer vision and autonomous driving. This technology plays a crucial role in enhancing
road safety, enabling self-driving vehicles, and improving the overall driving experience. Through the
utilization of various machine learning algorithms and techniques, such as convolutional neural
networks (CNNs) and image processing, accurate lane line detection can be achieved even in
challenging and dynamic real-world conditions.
FUTURE SCOPE
The future scope of road lane line detection using machine learning is exceptionally promising, with
a wide range of potential applications and advancements on the horizon. One of the most significant
areas of impact lies in the realm of autonomous vehicles. As self-driving technology continues to
evolve, lane detection will play a crucial role in ensuring the safe and reliable navigation of these
vehicles. Enhanced accuracy and robustness in lane detection systems will be essential for autonomous
cars to handle various real-world scenarios. Moreover, advanced driver assistance systems (ADAS)
will see further improvements, benefiting from more sophisticated lane detection algorithms to
enhance driver safety and comfort. The integration of multiple sensors, such as radar, lidar, and V2X
communication, will provide a holistic understanding of the road environment, contributing to safer
and more efficient transportation.
BIBLIOGRAPHY
• Simplilearn: ML Tutorial.
• Scikit-learn: Machine Learning in Python.
• Pandas Documentation: powerful data analysis tools for Python.
• "Road Lane-Line Detection" Dataset. Kaggle.
• Bird, S., Klein, E., & Loper, E. (2009). Natural Language Processing with Python. O'Reilly
Media.
• Gonzalez, R.C., and Woods, R.E. (2008). "Digital Image Processing." Pearson Education.
• Sonka, M., Hlavac, V., and Boyle, R. (2007). "Image Processing, Analysis, and Machine
Vision." Cengage Learning.
• Jain, A.K. (1989). "Fundamentals of Digital Image Processing." Prentice-Hall.
• Prince, S.J.D. (2012). "Computer Vision: Models, Learning, and Inference." Cambridge
University Press.
• Burger, W., and Burge, M.J. (2016). "Digital Image Processing: An Algorithmic Introduction
Using Java." Springer.
• Robust Drivable Road Region Detection for Fixed-Route Autonomous Vehicles Using
MapFusion Images. Authors - Yichao Cai, Dachuan Li, Xiao Zhou, Xingang Mou
https://pubmed.ncbi.nlm.nih.gov/30486408/
• Lane-GAN: A Robust Lane Detection Network for Driver Assistance System in High Speed
and Complex Road Conditions. Authors - Yan Liu , Jingwen Wang , Yujie Li, Canlin Li,
Weizheng Zhang. https://pubmed.ncbi.nlm.nih.gov/35630183/
• Lane Detection Algorithm for Intelligent Vehicles in Complex Road Conditions and
Dynamic Environments. Authors - Jingwei Cao , Chuanxue Song , Shixin Song , Feng Xiao, Silun
Peng https://pubmed.ncbi.nlm.nih.gov/31323875/
• A computer vision-based lane detection technique using gradient threshold and hue-lightness-
saturation value for an autonomous vehicle. Authors - Firas Husham Almukhtar, Zhai Li,
Shahajan Miah, Md. Abdullah Al Noman, Batyrkhan Omarov, Md. Faishal Rahaman, Samrat Ray,
Chengping Wang. https://core.ac.uk/works/129594808