Abstract Research Paper
This research focuses on the development and evaluation of a vehicle speed detection
system utilizing image processing techniques. The system comprises six primary
components: Image Acquisition, Image Enhancement, Image Segmentation, Image Analysis,
Speed Detection, and Report generation. Each component is designed to contribute to the
accurate detection and calculation of vehicle speed from video scenes. The study assesses
the system's usability, performance, and effectiveness through empirical experimentation.
Results indicate that the system achieves optimal performance at a resolution of 320×240,
with a detection time of approximately 70 seconds per video scene. Furthermore, the
research explores the implications of various parameters on system performance, providing
insights into optimization strategies. The findings of this study contribute to the
advancement of vehicle speed detection technologies by offering a comprehensive
understanding of system capabilities and limitations. Additionally, the research provides
valuable guidance for practitioners and researchers involved in the development and
implementation of similar systems. Future work may focus on further enhancing system
efficiency, exploring alternative image processing techniques, and extending the
applicability of the system to diverse real-world scenarios. Overall, this research serves as a
foundation for the continued advancement of vehicle speed detection systems, facilitating
safer and more efficient transportation systems.
Keywords:
vehicle speed detection, image processing, system evaluation, empirical experimentation,
optimization strategies, system performance, resolution, detection time, usability,
effectiveness, technological advancement, transportation safety.
References
1. André Ebner and Hermann Rohling, "A self-organized radio network for automotive applications", Proceedings of the 8th World Congress on Intelligent Transportation Systems (ITS 2001), 2001.
2. Sherali Zeadally, Ray Hunt, Yuh-Shyan Chen, Angela Irwin and Aamir Hassan, "Vehicular ad hoc networks (VANETs): status, results and challenges", Telecommunication Systems, vol. 50, no. 4, pp. 217-241, 2012.
3. Yi Yang and Rajive Bagrodia, "Evaluation of VANET-based advanced intelligent transportation systems", Proceedings of the Sixth ACM International Workshop on VehiculAr InterNETworking, pp. 3-12, 2009.
4. Rajendra Prasad Nayak, "High Speed Vehicle Detection in Vehicular Ad-hoc Network", NIT Rourkela, 2013.
5. Tarik Taleb, Ehssan Sakhaee, Abbas Jamalipour, Kazuo Hashimoto, Nei Kato and Yoshiaki Nemoto, "A stable routing protocol to support ITS services in VANET networks", IEEE Transactions on Vehicular Technology, vol. 56, no. 6, pp. 3337-3347, 2007.
6. P. K. Bhaskar and S. Yong, "Image processing based vehicle detection and tracking method", 2014 International Conference on Computer and Information Sciences (ICCOINS), pp. 1-5, 2014.
7. Sourav Kumar Bhoi and Pabitra Mohan Khilar, "RVCloud: a routing protocol for vehicular ad hoc network in city environment using cloud computing", Wireless Networks, vol. 22, no. 4, pp. 1329-1341, 2016.
8. Md Whaiduzzaman, Mehdi Sookhak, Abdullah Gani and Rajkumar Buyya, "A survey on vehicular cloud computing", Journal of Network and Computer Applications, vol. 40, pp. 325-344, April 2014.
9. Sourav Kumar Bhoi, Pabitra Mohan Khilar and Munesh Singh, "A path selection based routing protocol for urban vehicular ad hoc network (UVAN) environment", Wireless Networks, vol. 23, no. 2, pp. 311-322, 2017.
10. Karl H. Zimmerman and James A. Bonneson, "In-service evaluation of a detection-control system for high-speed signalized intersections", Technical report, 2005.
11. Quoc Chuyen Doan, Tahar Berradia and Joseph Mouzna, "Vehicle speed and volume measurement using vehicle-to-infrastructure communication", WSEAS Transactions on Information Science and Applications, no. 9, 2009.
12. Nehal Kassem, Ahmed E. Kosba and Moustafa Youssef, "RF-based vehicle detection and speed estimation", 2012 IEEE 75th Vehicular Technology Conference (VTC Spring), pp. 1-5, 2012.
13. Axel Wegener, Michał Piórkowski, Maxim Raya, Horst Hellbrück, Stefan Fischer and Jean-Pierre Hubaux, "TraCI: an interface for coupling road traffic and network simulators", Proceedings of the 11th Communications and Networking Simulation Symposium, pp. 155-163, 2008.
14. David Eckhoff and Christoph Sommer, "A multi-channel IEEE 1609.4 and 802.11p EDCA model for the Veins framework", Proceedings of the 5th ACM/ICST International Conference on Simulation Tools and Techniques for Communications, Networks and Systems (OMNeT++ Workshop), March 2012.
15. Christoph Sommer, Reinhard German and Falko Dressler, "Bidirectionally coupled network and road traffic simulation for improved IVC analysis", IEEE Transactions on Mobile Computing, vol. 10, no. 1, pp. 3-15, 2011.
16. S. K. Bhoi, R. P. Nayak, D. Dash and J. P. Rout, "RRP: A robust routing protocol for Vehicular Ad Hoc Network against hole generation attack", 2013 International Conference on Communication and Signal Processing, pp. 1175-1179, 2013.
17. E. Lee, E. Lee, M. Gerla and S. Y. Oh, "Vehicular cloud networking: architecture and design principles", IEEE Communications Magazine, vol. 52, no. 2, pp. 148-155, February 2014.
INTRODUCTION
The alarming toll of fatalities resulting from vehicle accidents underscores the urgent need
for comprehensive measures to enhance road safety on a global scale. Among the myriad
factors contributing to these tragic incidents, high-speed vehicles stand out as a significant
and recurring concern [1]. In response, governmental bodies, academic institutions, and
automotive industries worldwide have embarked on ambitious research and development
endeavors aimed at mitigating accident risks and safeguarding the lives of passengers and
drivers [2].
These collaborative efforts have given rise to a plethora of innovative projects, spanning
regions such as Japan, the United States, and the European Union. Projects like DEMO,
ASV1, ASV2, JARI, IVI, WAVE, VSC, FleetNet, CarLink, C2C-CC, and PReVENT represent
concerted efforts to advance safety technologies and services within the automotive sector,
focusing on areas such as accident prevention, vehicle communication, and infrastructure
development [3-5].
Central to these initiatives is the deployment of Intelligent Transportation Systems (ITS)
within Vehicular Ad Hoc Networks (VANETs), which leverage advanced communication
technologies to enable vehicles to interact seamlessly with one another and with roadside
infrastructure in real-time [6]. Within this dynamic framework, Roadside Units (RSUs)
emerge as crucial communication nodes, facilitating the exchange of critical information
between vehicles and the broader network infrastructure [7].
However, in sparse RSU-based VANETs, characterized by non-overlapping coverage areas,
the detection of high-speed vehicles presents a formidable challenge. Addressing this
challenge head-on, this paper introduces the Position-Based High-Speed Vehicle Detection
Algorithm (PHVA), specifically tailored for such network environments [8].
Within the VANET ecosystem, every vehicle is equipped with a sophisticated array of
components, including Trusted Platform Modules (TPMs), On-Board Units (OBUs), Global
Positioning Systems (GPS), and an array of sensors. These components collectively enable
secure communication, environmental monitoring, and real-time status reporting, forming
the foundation of intelligent vehicle systems [9].
This paper serves as a focused exploration of the implementation and evaluation of the
PHVA algorithm for high-speed vehicle detection within VANETs. Leveraging information
from adjacent RSUs, the algorithm dynamically calculates vehicle speed, with the Central
Server (CS) tasked with identifying speed violations and communicating with Certification
Authorities (CAs) as necessary [10].
To comprehensively evaluate the efficacy of the PHVA algorithm, extensive simulations are
conducted using the Vehicles in Network Simulation (Veins) hybrid framework. This
framework seamlessly integrates OMNeT++ for network setup and Simulation of Urban
Mobility (SUMO) for realistic traffic management, providing a robust platform for algorithm
evaluation [11].
The subsequent sections of this paper are structured to provide a thorough examination of
the proposed PHVA algorithm, including a review of related work in vehicle detection
methods, a detailed description of the network model, and an in-depth analysis of
simulation results. Finally, insights into future research directions are provided to inform
ongoing efforts to enhance vehicle safety within VANET environments [12-20].
Literature Review
System description
In order to measure the distance of an object from a single image it is necessary to
have a frontal view and to know the true magnitude of the object. Unfortunately, the
dimensions of vehicles are different depending on the make and model, so they
cannot be used as a reference. However, a common element on the back of all
vehicles is the license plate. It must be approved and its shape and dimensions are
fixed in each country. Localising the front vehicle's number plate and having
previously established a relationship between the number plate's size in the image
and the distance to the camera, the vehicle's distance can be obtained directly.
After capturing a grayscale frame, the first step consists of establishing a region of
interest on the road corresponding to the safety area in front of our vehicle. Any
vehicle circulating inside this safety area is susceptible to a possible rear-end
collision. Next, the vehicle detection step begins and a first distance estimation is
performed based on the vehicle's bounding box location. Then the search of the
vehicle's number plate is used for two purposes: to validate the vehicle's detection
and to obtain the vehicle's distance. Remember that the relationship between the
dimensions of the number plate in the image and the distance to the camera has
already been established. Finally, the analysis of consecutive images is employed to
obtain the vehicle's relative speed.
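The core geometric idea, that a fixed-size reference object (the plate) yields distance from its apparent size, can be sketched with a pinhole camera model. Note that the paper itself calibrates this relationship empirically (Table 1); the focal length and plate width below are assumed example values, not figures from the source.

```python
# Hypothetical sketch: distance from the apparent size of a fixed-size object
# under a pinhole camera model. The paper instead uses an empirical calibration
# (Table 1); focal_px and plate_width_m here are assumed illustration values.

def distance_from_plate_width(plate_width_px: float,
                              focal_px: float = 700.0,
                              plate_width_m: float = 0.52) -> float:
    """Distance (m) = focal length (px) * real width (m) / width in image (px)."""
    return focal_px * plate_width_m / plate_width_px

# A plate imaged 86 px wide would be roughly 4.2 m away under these assumptions
print(round(distance_from_plate_width(86.0), 2))
```

The inverse proportionality between apparent size and distance is why the calibrated curves in the paper fall off so steeply at close range.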
The camera is placed beside the rear-view mirror to capture the scene in front of the vehicle.
In addition to the road and vehicles travelling ahead, many other objects can appear in a
vehicle's frontal image. A region of interest of the road (ROI) is very important because it
simplifies the scene, focusing only on the area at risk of a rear-end collision and avoiding the
analysis of the part of the road with no influence on our trajectory (Fig. 1). In this way, the
possibility of errors, false positive detections and the computational load are reduced, and the
vehicle detection reliability is increased.
Vehicle detection
The vehicle detection procedure is based on two features: the shadow underneath
the vehicle and the lower horizontal edge of this shadow. A distinctive feature of
vehicles is the shadow underneath them. Its intensity depends on the illumination,
which in turn depends on the weather, but it is always present on the road. Owing to
the vehicles’ morphology the space between the vehicle's underside and the road is
small so the road area under the vehicle is not exposed to direct sunlight and it is
only affected by a little quantity of lateral diffuse light. This lack of light makes this
road area very dark and free of brightness, regardless of lighting conditions, texture
and colour of the asphalt. Even if the road is shaded, the vehicles’ underside is
darker than its surroundings. This phenomenon is mathematically explained in [25].
On the other hand, any other element of the road (lateral shadows, potholes,
manhole covers etc.) is exposed to both direct and diffuse light which makes it
clearer and brighter. Although these elements can be dark they do not exceed the
darkness intensity of the shadow under the vehicle [25].
On cloudy days, vehicles are only lit by diffuse light which comes from all directions
so it creates little or no lateral shadows making the shadow underneath easily
distinguished. Sunny scenes are lit by both sunlight and diffuse light casting lateral
shadows. The shadow under the vehicle is noticeably darker than the lateral one
because the latter is illuminated only by diffuse light. On a cloudy/rainy day, the
street lighting could easily cause reflections from wet objects and asphalt, but the
road under the vehicle is not affected, remaining dark and without brightness. In a
tunnel the vehicle underneath is even darker than in other situations because
artificial lighting is more direct and there is a low level of diffuse light, making the
shadow practically black.
The method most used to identify the shadow underneath the vehicle was proposed
in [26]. A road area is extracted by defining the lowest central homogeneous region
in the image ‘free driving space’ delimited by edges. Then, a shadowed region is
defined as a region that has lower intensity than a threshold value m − 3σ,
where m and σ are the mean and standard deviation of the road pixels' intensity distribution.
This method has two important drawbacks. Firstly, the illumination conditions make
the road's intensity vary non-uniformly. Even a well asphalted road can show zones
where the pixels’ intensity is significantly different. Secondly, the lowest
central homogeneous region in the image does not always correspond to the road. In urban traffic,
pedestrian crossings and sign markings, lateral shadows and patches of different
asphalt are constantly appearing on the road and their edges are detected. The
region delimited by edges may not belong to the road which can significantly mislead
the vehicle detection procedure.
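The baseline m − 3σ threshold of [26] described above can be sketched in a few lines. The arrays here are synthetic stand-ins; in the real system the road samples would come from the ROI's "free driving space".

```python
import numpy as np

# Minimal sketch of the shadow threshold from [26]: pixels darker than
# mean - 3*std of the road region are labelled as candidate shadow.
def shadow_mask(gray_roi: np.ndarray, road_pixels: np.ndarray) -> np.ndarray:
    m = road_pixels.mean()
    sigma = road_pixels.std()
    threshold = m - 3.0 * sigma
    return gray_roi < threshold  # True where the under-vehicle shadow may lie

rng = np.random.default_rng(0)
road = rng.normal(120, 10, size=1000)   # synthetic bright-asphalt samples
roi = np.full((4, 4), 120.0)
roi[2:, 1:3] = 40.0                     # dark patch standing in for a vehicle's shadow
mask = shadow_mask(roi, road)
print(int(mask.sum()))
```

The drawbacks noted above apply directly: if `road_pixels` accidentally samples a pedestrian crossing or a lateral shadow, both m and σ shift and the threshold becomes unreliable.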
Fig. 2: b thresholded image; c horizontal edges
After shadow thresholding, horizontal edges that correspond to the transitions from
the non-shadow region (below) to the shadow region (above) are extracted as in [26] and
candidates are determined based on the location of those horizontal edges within the
ROI (Fig. 2c). Only horizontal edges detected within the ROI, either in whole or in
part, are considered while all those outside the ROI are discarded.
Next, the bounding box containing the vehicles’ back is obtained. As the dimensions
of the vehicles’ back are different for each make and model, a standard aspect ratio
of vehicles’ backs is assumed as in [26]. In this approach, we consider that the
length of the shadow's horizontal edge detected is the vehicle's width, and in order to
encompass all kinds of vehicles and vans, the height of the box is equal to 130% of
its width (Fig. 2d).
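The bounding-box construction just described, width taken from the shadow's horizontal edge and height fixed at 130% of the width, can be sketched as follows (the coordinates are illustrative):

```python
# Sketch of the bounding-box rule above: the detected shadow edge gives the
# vehicle's width, and the box height is 130% of that width so that cars and
# vans are both covered. Image y grows downward, as in typical image coordinates.

def bounding_box_from_shadow(x_left: int, x_right: int, y_bottom: int):
    """Return (x, y_top, width, height) for the vehicle's back region."""
    width = x_right - x_left
    height = int(round(1.3 * width))
    y_top = y_bottom - height  # the shadow's lower edge is the box's lower edge
    return (x_left, y_top, width, height)

print(bounding_box_from_shadow(100, 200, 240))  # (100, 110, 100, 130)
```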
Finally, as the shadow is on the road plane and assuming flat earth as in [17], a first
rough estimation of the vehicle's distance is obtained based on the location of the
lower edge of the vehicle's bounding box (the shadow's lower edge) in the image.
This approximate distance is very useful because it in turn provides values of the
vehicle number plate dimensions at this distance which are exploited in the number
plate detection algorithm (Section 3.5). The procedure is based on the relationship
between the vertical location of the shadow in the image (in pixels) and the real
vehicle's distance (metres). This relationship was established before the system was
put into use and it also relates the vehicle's distance with the dimensions of the
vehicle number plate characters (in pixels). This relationship is specific to the image
resolution adopted, to the camera elevation in the ego-vehicle and to the camera tilt.
To carry out this operation our vehicles were placed behind one another at a known
distance (Dist), an image was taken and the shadow's vertical location (SVP) and the
number plate's dimensions in the image were checked (Table 1). This process was
done for different distances in a range from 1 to 10 m on different days to take into
account different lighting conditions.
Table 1: Calibration of the relationship between the vehicle's distance and the number plate's dimensions in the image
Dist, m | SVP, pixel | NPW, pixel | CH, pixel | NPW/CH | CS, pixel | CT, pixel
Dist = real vehicle's distance, SVP = vertical position of the lower edge of the
vehicle's bounding box (shadow) in the image, NPW = number plate joint width, CH
= character height, CS = separation between numbers and letters and CT =
character thickness trace.
As can be observed in Table 1, for a similar vehicle distance (Dist), the vertical
position of the shadow in the image (SVP) varies depending on the lighting
conditions. This is basically because of two factors: firstly, the shadow underneath
does not perfectly match with the vehicle's vertical projection onto the road. This
factor is emphasised depending on the perspective which varies with the distance
(the point of view is at a higher angle as the vehicle ahead becomes closer to the
camera). Secondly, in sunny scenes there is a shadow around the vehicle (lateral
shadow). There is not a clear intensity limit between the shadow underneath the
vehicle and the surrounding shadow because the intensity values between both vary
smoothly. In these cases, it is very difficult to establish an automatic threshold which
perfectly separates both shadows and inevitably some pixels belonging to the
surrounding shadow are included as part of the shadow underneath. Taking these
factors into account, the distance provided by the vehicle's shadow is not accurate
enough, particularly for a close range and in sunny scenes. However, this
approximate distance provides indicative values of the vehicle's number plate
dimensions at this distance, making the next number plate detection method
adaptive to the range.
Number plate features and distance-size relationship
The aim of the license plate detection stage is to calculate the plate's distance to the
camera and therefore the vehicle's distance. License plates have several constant
parameters that can be checked in order to obtain the distance. The longer the
dimensions, the more accurate the measurement. The ideal dimension to be
checked would be the plate's width. Nevertheless, experience indicated that with
light coloured vehicles the result of the image processing is not satisfactory when the
aim is to obtain the plate's contour. However, the plate's characters can be easily
localised and isolated by means of morphological methods. The system proposed
was designed to work with Spanish plates but it can be adapted to plates of other
countries. Spanish plates are made up of four numbers and three letters, and their
dimensions are fixed (Fig. 3).
Fig. 3
NPW = number plate width, CH = character height, CT = character thickness trace and CS =
characters separation
In order to calculate the vehicle's distance, two dimensions of its number plate are considered
by the algorithm: the width of the number plate (NPW) and the height of the characters (CH).
The consideration of one or the other depends on the skew angle of the vehicle ahead. When
the back of the vehicle ahead is in the frontal view, the parameter considered to estimate the
distance is the NPW. In this case both parameters could be employed in the measurement but
as the NPW is longer than the CH, the accuracy provided in the distance measurement is
greater.
However, when the vehicle ahead is on a curve, the image is not a perfect frontal view of the
vehicle's rear so the NPW in the image is shorter than it should be, which generates a distance
measuring error. In these cases, the number plate parameter considered to establish the
vehicle's distance is the height (CH) of the nearest character (the highest). In a skewed
situation, the characters of the plate do not have the same size. If a rotation of the plate were
performed in order to place the plate in a frontal view, the axis of this rotation would be the
highest side of the highest character of the skewed plate. Furthermore, after this rotation the
height of the number plate in a frontal view would be the same as the height of the highest
character of the skewed number plate, so this rotation is unnecessary.
In order to know if the rear of the vehicle ahead is in a frontal view, the algorithm makes use of the
aspect constant relationship between the two parameters in a frontal view
NPW / CH ≈ 5    (1)
The relationships between the NPW, CH, aspect constant and the distance to the camera were
established in Table 1. Fig. 4 is the graphical representation of NPW, CH and the distance.
Fig. 4: NPW and CH against the vehicle's distance
D_NPW = (1/b1) · ln(a1 / NPW)    (2)
D_CH = (1/b2) · ln(a2 / CH)    (3)
with a1, b1, a2 and b2 constants fitted to the calibration data in Table 1,
where DNPW (m) is the vehicle distance provided by the width of the number plate and DCH (m) is the
vehicle distance provided by the character's height. Fig. 4 shows how NPW and CH do not vary
linearly, but decrease exponentially with the distance.
Table 1 shows how the aspect relationship of the NPW and the CH in frontal view remains
practically constant and equal to 5. Moreover, Table 1 shows the different accuracy provided
by NPW and CH. For instance, from 5 to 6 m the use of the NPW provides a measurement
precision of 0.11 m (1 m/9pix), while the CH provides a precision of 0.33 m (1 m/3pix).
3.5 Number plate detection
The number plate detection procedure proposed is based on the well-known morphological
operator, top-hat. This method is widely employed in number plate localisation under restricted
conditions where some information related to the number plate's dimensions in the image is
available. We make this method adaptive to vehicles in motion at any distance within the range. The
number plate detection is restricted to the vehicle's bounding box, thereby significantly simplifying
the background region. The top-hat operator is described as
D = C(I, SE) − I    (4)
Firstly, the morphological closing C of the image I with a circular structuring element (SE) eliminates
all the dark-on-light-background elements smaller than SE (Fig. 5b). Then, subtracting the initial
image from the closed one yields an image D in which elements unaffected by the filtering vanish
and the high-frequency areas (including the plate's characters) remain enhanced.
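The effect of this operator can be illustrated with a tiny NumPy sketch of the black top-hat (closing minus original) using a flat 3×3 structuring element; a production implementation would typically use `cv2.morphologyEx` with `cv2.MORPH_BLACKHAT` and a circular SE sized from the expected character dimensions.

```python
import numpy as np

# Toy black top-hat: closing (dilation then erosion) minus the original image.
# Flat 3x3 SE, plain NumPy, for illustration only.

def dilate(img, k=3):
    p = k // 2
    padded = np.pad(img, p, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].max()
    return out

def erode(img, k=3):
    p = k // 2
    padded = np.pad(img, p, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].min()
    return out

def black_tophat(img, k=3):
    closing = erode(dilate(img, k), k)  # closing fills dark details smaller than SE
    return closing - img                # so dark-on-light details stand out in D

img = np.full((5, 5), 200, dtype=np.int32)
img[2, 2] = 50                          # a small dark "character" stroke
tophat = black_tophat(img)
print(int(tophat[2, 2]), int(tophat[0, 0]))  # 150 0
```

The dark stroke is the only pixel with a large response, which is exactly why the operator isolates plate characters against the plate background.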
Fig. 5: a vehicle candidate; b closed image
Distance measurement
The penultimate stage of the system consists of extracting the width of the seven characters
of the number plate (real NPW) and the tallest character height (real CH). The aspect ratio
between the two parameters is obtained from (1) and the result is compared with the aspect
ratio parameter given by the SVP in Table 1. If the difference between them is less than 5%,
the scene is considered a perfect frontal view so the vehicle's distance is obtained in a
straightforward manner from (2) by means of the real NPW. In any other case, the scene is
skewed so the vehicle's distance is directly obtained from (3) by means of the real CH.
V = (D1 − D2) / ΔT    (5)
where D1 is the vehicle distance in the first frame, D2 is the vehicle distance in the second frame and
ΔT is the time between the two frames.
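The frontal-vs-skew decision and the speed computation of (5) can be sketched together. The aspect constant is taken as 5 per Table 1; the calibration functions passed in stand for the fitted relations (2) and (3) and are hypothetical stand-ins here, not the paper's fitted curves.

```python
# Sketch of the distance-measurement decision and equation (5).
# ASPECT ~ 5 (Table 1). dist_from_npw / dist_from_ch are placeholder
# calibration functions, assumed for illustration only.

ASPECT = 5.0

def vehicle_distance(npw_px, ch_px, dist_from_npw, dist_from_ch):
    ratio = npw_px / ch_px
    if abs(ratio - ASPECT) / ASPECT < 0.05:
        # near-frontal view: NPW is longer, so it resolves distance more finely
        return dist_from_npw(npw_px)
    # skewed view: fall back on the tallest character's height
    return dist_from_ch(ch_px)

def relative_speed(d1_m, d2_m, delta_t_s):
    """Equation (5): positive when the gap to the vehicle ahead is closing."""
    return (d1_m - d2_m) / delta_t_s

# Illustration with made-up calibration functions and a 0.5 s frame interval
d1 = vehicle_distance(82.0, 16.2, lambda w: 364.0 / w, lambda h: 72.0 / h)
d2 = vehicle_distance(86.0, 17.0, lambda w: 364.0 / w, lambda h: 72.0 / h)
print(round(relative_speed(d1, d2, 0.5), 2))
```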
Reference: https://ietresearch.onlinelibrary.wiley.com/doi/full/10.1049/iet-its.2013.0098
Code:
import cv2

# Video capture
cap = cv2.VideoCapture("C:\\Users\\Acer\\OneDrive\\Desktop\\vehicle_speed_detection\\vc.mp4")

# Background subtractor used to segment moving vehicles (MOG2 is assumed here,
# since the original listing used fgbg without defining it)
fgbg = cv2.createBackgroundSubtractorMOG2()

# Initialize variables
prev_frame = None
speed = 0

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    # Apply background subtraction to isolate moving vehicles
    fgmask = fgbg.apply(frame)

cap.release()
cv2.destroyAllWindows()
Algorithm:
Speed Calculation: Once vehicles are detected, the algorithm calculates their
speed. This can be done by measuring the change in position of a vehicle over
time (e.g., using consecutive frames in video data), or by analyzing the Doppler
shift in radar or lidar signals.
Validation and Filtering: The algorithm may include validation steps to ensure
accurate speed measurements. This can involve filtering out erroneous
detections, such as noise or stationary objects mistaken for vehicles.
Speed Estimation: Based on the calculated speeds, the algorithm may estimate
the average speed of vehicles over a certain period, or determine the speed of
individual vehicles.
Reporting and Visualization: Finally, the algorithm outputs the speed data in a
format suitable for its intended application. This could involve generating
reports, displaying real-time speed information on digital signage, or
integrating with traffic management systems.
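The "change in position over time" step above can be sketched as a speed estimate from centroid displacement across sampled frames. The pixel-to-metre scale, frame rate, and sampling step below are assumed calibration values, not figures from the source.

```python
# Hedged sketch: average speed from vehicle-centroid x-positions sampled every
# frame_step frames. meters_per_pixel, fps and frame_step are assumed values.

def track_speed_kmh(centroids_px, meters_per_pixel=0.05, fps=30.0,
                    frame_step=15):
    """Average speed (km/h) from centroid positions at consecutive samples."""
    dt = frame_step / fps  # seconds between sampled frames
    speeds = []
    for a, b in zip(centroids_px, centroids_px[1:]):
        dist_m = abs(b - a) * meters_per_pixel
        speeds.append(dist_m / dt * 3.6)   # m/s -> km/h
    return sum(speeds) / len(speeds)

# A vehicle moving 60 px per sampled frame: 3 m per 0.5 s, i.e. 21.6 km/h
print(round(track_speed_kmh([100, 160, 220, 280]), 1))
```

Averaging over several samples is a simple form of the filtering step described above: single-frame noise in the detected centroid is damped in the final estimate.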
METHODOLOGY
In this research, a video camera was deployed and positioned to capture
side-view images of the moving vehicle under study [8]. To ensure precision in
distance measurements, a hand-held laser distance meter (Bosch PLR 50), with a
stated accuracy of ±0.1 millimeter, was employed [10]. This instrument was used
to measure real distances on the road, including the field of view (FOV)
coverage along horizontal velocity vectors and distances between two points
parallel to the velocity vector.
The camera used in this investigation operates at a frame rate of 30 frames per
second (fps) with an effective resolution of 640x480 pixels [11]. Since a frame
is captured every 33.3 milliseconds, speed calculations must be executed within
a strict time limit. To manage computational resources efficiently and avoid
unnecessary redundancy, a frame sampling rate of 2 frames per second was
adopted [12]. This decision, made in light of the video's format (AVI) and its
30 fps rate, strikes a balance between computational efficiency and data
integrity.
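The frame-sampling policy described above, keeping 2 of every 30 frames, amounts to processing every 15th frame index; a minimal sketch of the selection logic:

```python
# Sketch of the sampling policy above: process sample_rate frames per second
# from an fps-rate video by keeping every (fps // sample_rate)-th frame index.

def sampled_indices(total_frames: int, fps: int = 30, sample_rate: int = 2):
    """Indices of frames to process at the reduced sampling rate."""
    step = fps // sample_rate          # every 15th frame at 30 fps, 2 fps sampling
    return list(range(0, total_frames, step))

idx = sampled_indices(90)
print(idx)        # [0, 15, 30, 45, 60, 75]
```

In an OpenCV loop, one would simply skip frames whose index is not in this set (or not divisible by the step), avoiding per-frame processing cost on the discarded 28 frames per second.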
Conclusion
In this study, the speeds of vehicles on urban roads are detected using video cameras. Two
measurement techniques are employed to determine vehicle speeds. The first technique uses simple
detection of vehicles entering and exiting a rectangular test area in the camera FOV. As the
vehicle enters the test area, an entrance time stamp is recorded; when the vehicle exits the test
area, an exit time stamp is recorded. The time difference between them is used to calculate the
vehicle speed. In the second technique, time stamps are taken at each loop iteration of the
program, so the tracked vehicle can have a different speed reading at each iteration. The
differences between these time stamps and the initial time stamp are used in the speed
calculations. Each time stamp is treated as a discrete time, and the vehicle's distance from that
time stamp back to the initial time stamp is determined in pixel form. Once the pixel distance is
calibrated and converted into real distance on the road, a discrete speed calculation is carried
out at each time stamp across the test area. Since the vehicle moves linearly across the road, the
average of these speeds gives an average speed value for the vehicle. The speed measurements of
both techniques are checked against a car speedometer; a Hyundai i20 is employed to verify the
speed measurements of the developed video system. Table 1 and Table 2 are given for both
techniques, and absolute speed differences are compared in these tables. It was found that the
video system has a speed detection accuracy of ±1.2 km/hr up to 50 km/hr. Above 50 km/hr the
video system accuracy starts to degrade and the speed error margins increase (see Figure 6). Both
techniques show similar performance and determine approximately the same vehicle speeds. The
developed system will be very useful for measuring low speeds accurately without using expensive
instruments. Eventually a smart phone
A novel monovision-based system able to detect a vehicle ahead and measure the distance
and relative speed has been presented. The use of a single common camera makes the system
cheaper than stereovision systems and other technologies such as RADAR-based approaches.
Besides, monocular vision significantly reduces the computational complexity and the
processing time of stereovision. The distance measurement method proposed is based on the
vehicle's number plate whose dimensions and shape are standardised in each country. The
algorithm simplifies the complex traffic scene focusing only on a ROI of the road
corresponding to the safety area in front of our vehicle. The ROI reduces the possibility of
errors improving the system's reliability. The vehicle detection procedure successfully utilises
the shadow underneath the target vehicle and horizontal edges regardless of weather
conditions. An adaptive shadow segmentation threshold is proposed based on the
characteristic ROI image histogram. The number plate localisation algorithm proposed adapts
the top-hat operator to vehicles in motion over the range. In-vehicle tests carried out in real
urban traffic showed excellent robustness and reliability in vehicle and number plate
detection and very good accuracy in distance measurement.
Findings:
Accuracy of Speed Estimation: The study may present findings on the accuracy
of vehicle speed estimation achieved through the implemented methodology.
This could involve comparing the calculated speeds with ground truth data or
manual measurements to assess the system's accuracy and reliability.
Impact of Camera Specifications: The study may explore the impact of camera
specifications, such as frame rate, resolution, and focal length, on the accuracy
and efficiency of speed detection. Findings could provide insights into optimal
camera settings for effective speed estimation.
Validation and Field Testing: The study underscores the need for extensive
validation and field testing of vehicle speed detection systems in real-world
environments to assess their performance, reliability, and practical usability.
Field studies could provide valuable insights into system effectiveness and
identify areas for improvement.