Sensors: Detection of Road-Surface Anomalies Using A Smartphone Camera and Accelerometer
Article
Detection of Road-Surface Anomalies Using a Smartphone
Camera and Accelerometer
Taehee Lee, Chanjun Chun and Seung-Ki Ryu *
Future Infrastructure Research Center, Korea Institute of Civil Engineering and Building Technology (KICT),
Goyang 10223, Korea; thlee420@kict.re.kr (T.L.); chanjunchun@kict.re.kr (C.C.)
* Correspondence: skryu@kict.re.kr; Tel.: +82-31-910-0388
Abstract: Road surfaces should be maintained in excellent condition to ensure the safety of motorists.
To this end, there exist various road-surface monitoring systems, each of which is known to have
specific advantages and disadvantages. In this study, a smartphone-based dual-acquisition method
system capable of acquiring images of road-surface anomalies and measuring the acceleration of
the vehicle upon their detection was developed to explore the complementary benefits of the two
different methods. A road test was conducted in which 1896 road-surface images and corresponding
three-axis acceleration data were acquired. All images were classified based on the presence and
type of anomalies, and histograms of the maximum variations in the acceleration in the gravitational
direction were comparatively analyzed. When the types of anomalies were not considered, it was
difficult to identify their effects using the histograms. The differences among histograms became
evident upon consideration of whether the vehicle wheels passed over the anomalies, and when
excluding longitudinal anomalies that caused minor changes in acceleration. Although the image-
based monitoring system used in this research provided poor performance on its own, the severity of
road-surface anomalies was accurately inferred using the specific range of the maximum variation of
acceleration in the gravitational direction.
at a low cost. However, it cannot measure road-surface damage in areas other than the
vehicle wheel paths and cannot identify the size of the road-surface damage [18].
Laser measurement-based detection methods use special equipment installed on a
separate inspection vehicle, such as a laser scanner, to convert the road surface into a three-
dimensional (3D) object in a coordinate system [19–22]. This approach can directly calculate
various indicators for the precise evaluation of the road-surface condition. However, real-
time processing is difficult at high driving speeds because of the increased number of
calculations required. Furthermore, considerable expense is incurred by the introduction
of new machinery and its operation.
Image-recognition-based detection methods can analyze road-surface conditions over
a wide area at a reasonable cost. This approach has recently attracted attention owing to the
development of image-recognition technology using deep neural networks (DNNs) [23–28].
Road-surface damage identification using a DNN can capture images of the road surface in
real time as the vehicle travels at a normal driving speed, providing detailed information
such as the size of the damaged regions. However, many image datasets with accurate
labels of the road-surface conditions are required to train a DNN. Additionally, efforts
must be expended to improve the recognition rate by excluding factors that interfere with
analysis, such as changes in the road-surface color and illumination under various types of
weather, shadows on the road, other traveling vehicles, and road signs.
A combination of different measurement techniques could be expected to compen-
sate for the disadvantages of each road-surface anomaly detection method. Smartphones
provide an optimal platform for testing a road-surface anomaly detection method that
employs both a DNN inference model to identify road-surface anomalies and acceleration
data acquisition because they are equipped with built-in processors, LTE communica-
tion modules, three-axis accelerometers, gyroscopes, and cameras. In recent years, both
image-recognition- and vibration-based detection methods have been researched using
smartphones, but few studies have attempted to combine these two different methods. The
development of a smartphone-based road-surface anomaly detection system and associated
mobile application could eventually allow information describing roadway hazards to
be distributed to users in real time, much like traffic congestion information is currently
distributed by navigation applications. Furthermore, a smartphone-based detection system
would considerably expand the monitoring capacity of personnel and agencies devoted to
road maintenance, allowing repairs to be targeted to the areas with the most urgent need.
Finally, the dual-acquisition method demonstrated in this study could be applied in vehicle
black boxes to realize widespread implementation.
In this study, the software was accordingly developed to acquire road-surface images
from a smartphone camera for use in an image-recognition-based fully convolutional
neural network (FCN) model developed in a previous study [28] and acquire acceleration
data using the accelerometer built into a smartphone. The developed software was installed
in an Android-based smartphone (a Samsung Galaxy S10), and the possibility of
combining the two different methods to detect road-surface anomalies was examined based
on an analysis of the road-surface images and acceleration measurement results acquired
during driving.
this information on a map within one minute, allowing the road-management entity to
access up-to-date information describing road conditions.
Please note that the data acquisition method proposed in this study does not collect
detailed road-surface anomaly information (such as cracking) at the same level that existing
expensive equipment is able to; it only collects information describing the presence or
absence of road-surface anomalies and the corresponding variation of acceleration. However, the proposed method uses low-cost, easily deployable technology to enable the rapid checking of dynamic information describing changes in the road condition over a wide spatial and temporal range by using a plurality of data collection devices.
Figure 1. Overall image acquisition flow and three-axis accelerations with a smartphone.
Figure 3. Loss and accuracy of the FCN inference model according to training epoch.
Figure 4. Typical image and accelerations obtained with a smartphone camera and three-axis
accelerometer: (a) Original image; (b) Predicted anomaly; (c) Obtained accelerations.
As shown in Figure 4a,b, road-surface images were captured with the smartphone
in a rotated position. A fixing device was used to maintain a constant posture of the
smartphone during driving. Figure 5 shows the initial orientation of the smartphone,
which was attached to the windshield of the test vehicle. These tests were conducted
Sensors 2021, 21, 561 6 of 17
using a sport utility vehicle (SUV) that was 1.925 m high, 1.920 m wide, and 5.15 m long,
with a front wheel tread of 1.685 m and 0.215 m wide tires. The smartphone was installed
on the inner surface of the vehicle windshield at a height of approximately 1.65 m from
the ground, and at a distance of 0.35 m toward the passenger seat from the center of
the vehicle. The direction of gravity was defined as the Z-axis of the global coordinate
system, and the X- and Y-axes were defined as the longitudinal and lateral directions of
the vehicle, respectively. The orientation of the smartphone was determined by rotating it
20° around the y′-axis so that its camera could visualize the road surface in the X–Z plane.
The smartphone was also rotated 5° around the z′-axis so that the camera could visualize
the centerline of the road in the X–Y plane. The smartphone was installed so that it could
not rotate around the x′-axis.
where a_X, a_Y, and a_Z are the respective acceleration values based on the global coordinate system, and a_x′, a_y′, and a_z′ are the acceleration values measured by the smartphone.
In this study, the Euler angles were rotated clockwise along the order of the XYZ axes. Therefore, the values of a_X, a_Y, and a_Z obtained using Equation (1) were substituted into Equation (2) as a_x′, a_y′, and a_z′. Likewise, the values of a_X, a_Y, and a_Z obtained with Equation (2) were substituted into Equation (3) as a_x′, a_y′, and a_z′. Finally, the accelerations in the global coordinate system that reflect the rotation of all coordinates were calculated.
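The chained substitution described above can be sketched as sequential rotation-matrix products. The matrices below assume standard clockwise (passive) rotations; the exact sign conventions of Equations (1)–(3) are not reproduced in this excerpt, so treat this as an illustrative sketch rather than the authors' exact formulation.

```python
import numpy as np

def rot_x(phi):
    # Clockwise rotation about the x'-axis (roll), angle in radians
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,   c,   s],
                     [0.0,  -s,   c]])

def rot_y(theta):
    # Clockwise rotation about the y'-axis (pitch)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[  c, 0.0,  -s],
                     [0.0, 1.0, 0.0],
                     [  s, 0.0,   c]])

def rot_z(psi):
    # Clockwise rotation about the z'-axis (yaw)
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[  c,   s, 0.0],
                     [ -s,   c, 0.0],
                     [0.0, 0.0, 1.0]])

def to_global(a_phone, phi, theta, psi):
    """Transform smartphone-frame accelerations (a_x', a_y', a_z') to the
    global frame (a_X, a_Y, a_Z) by feeding the output of each rotation
    into the next, mirroring the substitution of Eq. (1) into Eq. (2)
    into Eq. (3)."""
    a = rot_x(phi) @ np.asarray(a_phone, dtype=float)  # Eq. (1)
    a = rot_y(theta) @ a                               # Eq. (2)
    a = rot_z(psi) @ a                                 # Eq. (3)
    return a
```

Because the rotations are orthonormal, the acceleration magnitude (e.g., gravity) is preserved regardless of the mounting angles, which is why the gravitational component can be recovered after the transformation.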
u = (x − c_x)/f_x    (4)

v = (y − c_y)/f_y    (5)

where x and y are the coordinates of point p in the pixel coordinate system (the images
used in this study had a resolution of 1920 × 1080, therefore x ranged from 0 to 1920
and y ranged from 0 to 1080); c_x and c_y are the coordinates of the principal point of the
camera in the pixel coordinate system (assumed to be equal to 960 and 540, respectively,
denoting the median values of x and y); and f_x and f_y denote the focal lengths of the smartphone
camera telephoto lens, equal to 1415.06 and 795.97 pixels, respectively. When the x and y
pixel values obtained from the image are substituted into Equations (4) and (5), the origin
becomes the principal point and the focal length between the image plane and the camera
origin is normalized to unity. The geometric relationship between the normalized pixel
coordinates (u, v) and ground coordinates (X, Y) in Figure 6a,b can be used to obtain X and
Y, which are respectively the longitudinal and lateral distances from the origin (where the
camera is installed) to point P, as shown in Equations (6) and (7):
X = h·tan(π/2 + θ − tan⁻¹ v)    (6)

Y = u·(l_Op + l_pP)/l_Op    (7)

where l_Op is the distance from the principal point to point p in the XZ plane, defined
to be equal to √(1 + v²), and l_pP is the distance from point p to point P in the XZ plane.
Thus, l_Op + l_pP is the distance from the principal point to point P, and is defined to be
equal to √(X² + h²). Considering the rotation angle (ψ) of the smartphone in the z′-axis
direction in Figure 6b, Equations (6) and (7) can respectively be expressed in the forms of
Equations (8) and (9):

X = h·tan(π/2 + θ − tan⁻¹ v)·cos ψ    (8)

Y = u·(l_Op + l_pP)/l_Op − X·sin ψ    (9)
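As a sketch, Equations (4), (5), (8), and (9) can be combined into a pixel-to-ground mapping and its inverse. The intrinsics (f_x, f_y, c_x, c_y) and mounting height are taken from the text; note that under the paper's clockwise rotation convention, the 20° downward pitch enters the standard math convention used below with a negative sign, so the sign of θ here is an assumption on my part.

```python
import math

FX, FY = 1415.06, 795.97   # focal lengths in pixels (from the text)
CX, CY = 960.0, 540.0      # principal point in pixels (from the text)

def pixel_to_ground(x, y, h=1.65, theta=math.radians(-20), psi=math.radians(5)):
    """Eqs. (4)-(5) then (8)-(9): pixel (x, y) -> ground (X, Y) in metres.
    theta is negative here because the camera pitches down toward the road."""
    u = (x - CX) / FX                                   # Eq. (4)
    v = (y - CY) / FY                                   # Eq. (5)
    X = h * math.tan(math.pi / 2 + theta - math.atan(v)) * math.cos(psi)  # Eq. (8)
    # (l_Op + l_pP) / l_Op = sqrt(X^2 + h^2) / sqrt(1 + v^2)
    ratio = math.sqrt(X * X + h * h) / math.sqrt(1.0 + v * v)
    Y = u * ratio - X * math.sin(psi)                   # Eq. (9)
    return X, Y

def ground_to_pixel(X, Y, h=1.65, theta=math.radians(-20), psi=math.radians(5)):
    """Invert Eqs. (8)-(9) to locate the pixel imaging ground point (X, Y)."""
    v = math.tan(math.pi / 2 + theta - math.atan(X / (h * math.cos(psi))))
    ratio = math.sqrt(X * X + h * h) / math.sqrt(1.0 + v * v)
    u = (Y + X * math.sin(psi)) / ratio
    return u * FX + CX, v * FY + CY
```

Pixels lower in the image (larger y) map to ground points closer to the vehicle, and the two functions round-trip, which is the property the wheel-path projection relies on.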
Figure 6. Geometric forms of the image plane (u, v) and ground plane (X, Y): (a) XZ plane;
(b) XY plane.
Because the front wheel tread of the SUV used in these tests was 1.685 m and the tires
were 0.215 m wide, the wheels were located from −0.95 to −0.735 m and from 0.735 to
0.95 m from the longitudinal axis of the vehicle, as shown in Figure 5. In addition, because
the camera was located 0.35 m from the longitudinal axis toward the passenger seat, the
Y-axis coordinates of the wheels were located from −1.3 to −1.085 m and from 0.385 to
0.6 m in the global coordinate system. In Equations (8) and (9), h, θ, and ψ, which are
related to the orientation of the smartphone camera, and Y, which describes the location
of the wheels, all have known values. Thus, the normalized pixel coordinates (u, v) can
be calculated when a value is entered for X by assuming that the vehicle is traveling in
a straight line. Moreover, the pixel coordinates (x, y) can be calculated using (u, v) and
Equations (4) and (5). Figure 7a shows a road-surface image in which the calculated pixel
coordinate values for the wheel paths on the road surface are indicated in red. The range
of ψ was then adjusted from 2.5° to 7.5° owing to the curved trajectory of the vehicle
during its motion and the slight rotation of the camera. Based on these values, the areas
corresponding to the wheel paths were determined as shown in Figure 7b.
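The quoted Y-axis spans of the wheel paths follow from simple geometry; as a sketch using the vehicle dimensions given above:

```python
TREAD = 1.685          # front wheel tread, centre to centre (m)
TIRE_WIDTH = 0.215     # tire width (m)
CAMERA_OFFSET = 0.35   # camera shifted toward the passenger seat (m)

def wheel_path_ranges():
    """Y-axis spans of the two front tires in the camera-centred global
    frame (camera at Y = 0, offset from the vehicle's longitudinal axis)."""
    half_tread = TREAD / 2.0       # 0.8425 m to each wheel centre
    half_tire = TIRE_WIDTH / 2.0   # 0.1075 m
    left = (-half_tread - half_tire - CAMERA_OFFSET,
            -half_tread + half_tire - CAMERA_OFFSET)
    right = (half_tread - half_tire - CAMERA_OFFSET,
             half_tread + half_tire - CAMERA_OFFSET)
    return left, right
```

Evaluating this reproduces the spans stated in the text: −1.3 to −1.085 m for the left tire and 0.385 to 0.6 m for the right tire.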
Figure 7. Areas associated with wheel paths on acquired images (red-highlighted areas): (a) ψ = 5°;
(b) ψ = 2.5–7.5°.
When road anomalies were detected using the FCN model, the pixel coordinates of
the detected areas were stored. Then, the cases in which the pixel coordinates of the road
anomalies detected by the FCN model overlapped the wheel paths, shown in the image in
Figure 7b, were identified. Among the 893 road-surface images containing road-surface
anomalies, 293 showed that the pixel coordinates of the road anomalies overlapped the
wheel paths.
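The overlap test between FCN-detected anomaly pixels and the wheel-path region reduces to a boolean-mask intersection; a minimal sketch (the mask contents below are illustrative, not the real wheel-path geometry):

```python
import numpy as np

def anomaly_hits_wheel_path(anomaly_mask, wheel_path_mask):
    """True if any pixel flagged as an anomaly by the segmentation model
    falls inside the wheel-path region. Both inputs are boolean arrays
    at the image resolution (1080 x 1920)."""
    return bool(np.any(anomaly_mask & wheel_path_mask))

# Toy example: a small anomaly blob that clips an illustrative wheel-path band
anomaly = np.zeros((1080, 1920), dtype=bool)
anomaly[500:520, 600:640] = True
wheel_path = np.zeros((1080, 1920), dtype=bool)
wheel_path[450:650, 630:700] = True
```

Applying such a test per image is how the 293 overlapping cases could be counted out of the 893 images containing anomalies.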
Figure 8. Histogram for the maximum variation of Z-axis acceleration during different data acquisi-
tion periods.
During the test, the vehicle was driven at an average speed of 54.9 km/h (=15.26 m/s).
Thus, distances equal to 45.78, 15.26, and 7.63 m were covered in the 0–3, 0–1, and 0–0.5 s
time ranges, respectively. The y values of the top and bottom of the ROI analyzed by the
FCN model in the pixel coordinate system were 450 and 650, respectively. The X values of
the top and bottom parts of the ROI calculated using Equations (5), (6) and (8) were equal
to 6.38 and 3.09 m, respectively. Therefore, the acceleration signals collected in the 0–3 or
0–1 s ranges included data generated after the vehicle passed the selected ROI. Although
there were no road anomalies in the ROI in 1003 road-surface images, variations of
acceleration caused by road anomalies located beyond the ROI were still included in these
longer windows. Therefore, in this study, only the acceleration data in the range of 0–0.5 s
were used for the analysis of the results for road-surface anomalies.
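The window-length reasoning above is a unit conversion; as a sketch:

```python
def distance_covered(speed_kmh, window_s):
    """Distance (m) travelled during an acceleration window at constant speed."""
    return speed_kmh / 3.6 * window_s

# At 54.9 km/h the 3, 1 and 0.5 s windows cover roughly 45.8, 15.3 and 7.6 m;
# only the 0-0.5 s window stays close to the 3.09-6.38 m span of the ROI.
```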
Figure 9. Histograms as a function of the maximum variation of Z-axis acceleration with and without detected cracks in
acquired images: (a) Results depending on the presence of road anomalies; (b) Classification based on wheel paths; (c)
Results obtained by excluding longitudinal cracks.
Figure 9b shows the results obtained by excluding the road-surface images in which
the pixel coordinates of the road anomalies did not overlap with the wheel paths. When
the wheel path was considered, the average maximum variation of Z-axis acceleration was
2.66 m/s² and the median was 2.10 m/s². Thus, normal conditions were more accurately
distinguished from conditions with road-surface anomalies when accounting for the wheel
paths than when using all images with road-surface anomalies.
Table 1 indicates that images with longitudinal anomalies represented 51% of all
images with road anomalies. Longitudinal anomalies, however, cause smaller variations
in acceleration than lateral or local anomalies, producing values similar to those observed
in cases without road-surface anomalies. Thus, in Figure 9c, acceleration data associated with longitudinal
anomalies were excluded from the 293 images in which the pixel coordinates of road
anomalies overlapped with the wheel paths. The average variation of Z-axis acceleration
was 3.28 m/s² and the median acceleration was 2.72 m/s². These results indicate notable
differences from the cases without road-surface anomalies. For the three pairs of data
shown in Figure 9, t-tests were performed at a significance level of 0.05. The t-values in
Figure 9a–c were calculated to be 14.46, 10.44, and 10.36, respectively, indicating statistically
significant differences in mean values, as the corresponding critical values of 1.961, 1.967,
and 1.976 were exceeded.
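The reported comparison corresponds to a two-sample t-test; a minimal pooled-variance sketch follows (the paper does not state whether a pooled or Welch form was used, so the pooled form is an assumption):

```python
import math

def two_sample_t(a, b):
    """Pooled-variance two-sample t statistic and its degrees of freedom."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variance of a
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)  # sample variance of b
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    t = (ma - mb) / math.sqrt(sp2 * (1.0 / na + 1.0 / nb))
    return t, na + nb - 2
```

The means differ significantly at the 0.05 level when |t| exceeds the critical value for the degrees of freedom at hand (about 1.96–1.98 for the sample sizes in Figure 9).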
Figure 10. Histograms for the ratio of the maximum variation of Y-axis acceleration to that of
Z-axis acceleration.
Figure 11. Relationship between the pixel area of the road anomalies detected by the FCN model
and the maximum variation of Z-axis acceleration.
Figure 12. Box plots of the maximum variation of Z-axis acceleration for different road-surface anomalies.
Figure 13. Box plots of the pixels of detected road-surface anomalies for different road-surface anomalies.
Figure 12 shows box plots for the maximum variation of Z-axis acceleration for
different types of road anomalies. When no anomalies were present on the road surface, most of
the maximum variations of the Z-axis acceleration were less than 2 m/s². However, when
anomalies were detected on the road surface, the variations of the Z-axis acceleration
were mostly greater than 2 m/s². The three box plots on the left of Figure 12 show the results for three
types of local anomalies. In the cases of potholes and repaired road surfaces, which have
irregular shapes, the variations in acceleration were more extensively distributed than in
the other cases. Meanwhile, for manholes, which have a constant geometry (e.g., circular),
the change in value was concentrated within a narrow range. Additionally, box plots
for the four different types of lateral anomalies can be observed on the right in Figure 12.
The variation of Z-axis acceleration for the speed bump was the smallest owing to the
deceleration just before passing, though the values for the remaining three types of lateral
anomalies yielded similar distributions.
Figure 13 shows box plots for the pixels of detected road-surface anomalies. Among
the local anomalies, the pothole and the repaired road surface exhibited the widest distribu-
tion of pixels and the manhole exhibited the narrowest distribution (similar to the variations
of Z-axis acceleration shown in Figure 12). However, although the maximum variations of
Z-axis acceleration of the lateral anomalies were similar to those of the repaired road surface
in Figure 12, their pixel distributions were smaller than that of the repaired road surface in Figure 13. The
detected pixel values in the case of the lateral anomalies were smaller than those in the
case of the local anomalies given that pixels can be distributed as a narrow continuous line
in the lateral direction, but may still cause a significant acceleration change. Therefore, it is
necessary to accurately estimate the geometry of these anomalies, as different instances
of road-surface damage with the same area may cause different changes in acceleration
depending on the depth of the damage. The FCN model used in this study can provide
approximate information describing the presence of road-surface anomalies and their
locations, but not their depths. Therefore, a more precise model is required to represent
severity based on the detected pixel area of the road-surface anomalies in Figure 11.
The acceleration data corresponding to the 140 images defined in Section 4.3 were
classified according to the range of the maximum variation of Z-axis acceleration using
quartiles of the data. The values of the first quartile (Q1 = 1.90 m/s²) and third quartile
(Q3 = 4.2 m/s²) of the Z-direction maximum acceleration data were obtained and simpli-
fied to near-integer values. When the maximum variation of Z-axis acceleration was less
than 2 m/s², as shown in Figure 14a, values similar to the acceleration data in normal
conditions were observed. Notably, cases in which small changes in acceleration appear
to be caused by road anomalies located at the edges of the wheel paths were included in
this case, as shown in the second image of Figure 14a. Figure 14b shows the road-surface
images for which the maximum variation of Z-axis acceleration was between 2 and 4 m/s².
Anomalies that may affect the wheel paths can be detected, including a manhole and
local anomalies, such as small potholes. Images in which the road-surface condition was
generally uneven were also included in this case. Figure 14c shows the images for which
the maximum variation of Z-axis acceleration was 4 m/s² or greater. Anomalies that cause
severe changes in acceleration and may require repairs can be frequently found in the
driving path. In addition, changes in acceleration of 4 m/s² or greater may occur when
a manhole with a large height difference is passed or when the repaired road surface
is uneven.
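The quartile-derived ranges can be wrapped in a simple classifier; the severity labels below are shorthand for the three cases in Figure 14, not terminology from the paper:

```python
def classify_severity(max_dz):
    """Bin the maximum variation of Z-axis acceleration (m/s^2) using the
    near-integer quartile thresholds (Q1 ~ 2, Q3 ~ 4 m/s^2)."""
    if max_dz < 2.0:
        return "minor"      # comparable to normal road surfaces (Fig. 14a)
    if max_dz < 4.0:
        return "moderate"   # manholes, small potholes, uneven patches (Fig. 14b)
    return "severe"         # likely to require repair (Fig. 14c)
```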
The severity of road-surface anomalies can be identified by comparing the images
with the maximum variation of Z-axis acceleration, as shown in Figure 14. Therefore, when
it is difficult to quantitatively identify the severity of a road-surface anomaly by estimating
its area and depth from the road-surface images using the FCN model, it is possible to
do so by converting the acquired three-axis accelerations into accelerations relative to the
gravity axis, and classifying them into ranges at the moment of wheel passage. If an FCN
model that can more accurately distinguish images is prepared and more driving data are
accumulated in various conditions in the future, it will be possible to provide more detailed
identification of road-surface anomalies by combining images and acceleration data.
Figure 14. Typical road-surface images and corresponding variations of Z-axis acceleration for different quartiles of
maximum variations of Z-axis accelerations: (a) Maximum variation of Z-axis acceleration < 2 m/s²; (b) Maximum variation
of Z-axis acceleration ≥ 2 m/s² and < 4 m/s²; (c) Maximum variation of Z-axis acceleration ≥ 4 m/s².
6. Conclusions
In this study, a system was developed to identify road-surface anomalies in collected
images using an FCN model while simultaneously processing three-axis accelerations
collected during the concurrent period of time. The developed system was installed on
a smartphone that was placed on a vehicle windshield and tested on public roads. The
proposed system, which combined an FCN-based road-surface anomaly detection method
with accelerometer-based data acquisition, allowed the severity of FCN-identified road-
surface anomalies to be determined by classifying the concurrent variations in gravitational-
axis accelerations into certain ranges. Existing systems that use special inspection vehicles
to identify road-surface anomalies are expensive to operate on a large scale; however,
the proposed system incurs significantly lower costs and can be extensively distributed
via widely owned and readily available smartphones. The system demonstrated in this
study is thus promising for the widespread application of automatic road-surface anomaly
detection. However, the present study was unable to determine the effect of vehicle type
and vehicle speed on the clarity of the obtained information, which should be a target of
future research to improve the accuracy of the proposed system.
Author Contributions: Writing—original draft preparation and methodology, T.L. and C.C.; super-
vision and project administration, S.-K.R. All authors have read and agreed to the published version
of the manuscript.
Funding: This research was supported by a grant from the Technology Business Innovation Program
(TBIP) funded by the Ministry of Land, Infrastructure and Transport of the Korean government
(No. 20TBIP-C144255-03) [Development of Road Damage Information Technology based on Artificial
Intelligence] and a grant (No. 19TLRP-B148886-02) [Commercial Vehicle based Road and Traffic
Information system] from the Korea Agency for Infrastructure Technology Advancement (KAIA).
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Haas, R.; Hudson, W.R.; Zaniewski, J. Modern Pavement Management; Krieger Publishing Company: Melbourne, FL, USA, 1994.
2. Zang, K.; Shen, J.; Huang, H.; Wan, M.; Shi, J. Assessing and mapping of road surface roughness based on GPS and Accelerometer
sensors on bicycle-mounted smartphones. Sensors 2018, 18, 914. [CrossRef] [PubMed]
3. Mubaraki, M. Third-order polynomial equations of municipal urban low-volume pavement for the most common distress types.
Int. J. Pavement Eng. 2014, 15, 303–308. [CrossRef]
4. Silva, L.A.; Sanchez San Blas, H.; Peral García, D.; Sales Mendes, A.; Villarubia González, G. An architectural multi-agent system
for a pavement monitoring system with pothole recognition in UAV images. Sensors 2020, 20, 6205. [CrossRef] [PubMed]
5. De Blasiis, M.R.; Di Benedetto, A.; Fiani, M.; Garozzo, M. Assessing of the road pavement roughness by means of LiDAR
technology. Coatings 2021, 11, 17. [CrossRef]
6. Kim, T.; Ryu, S.K. Pothole DB based on 2D images and video data. J. Emerg. Trends Comput. Inform. Sci. 2014, 5, 527–531.
7. Eriksson, J.; Girod, L.; Hull, B.; Newton, R.; Madden, S.; Balakrishnan, H. The pothole patrol: Using a mobile sensor network
for road surface monitoring. In Proceedings of the 6th International Conference on Mobile Systems, Applications, and Services,
Breckenridge, CO, USA, 17–20 June 2008; pp. 29–39.
8. Mednis, A.; Strazdins, G.; Zviedris, R.; Kanonirs, G.; Selavo, L. Real time pothole detection using Android smart phones with
accelerometers. In Proceedings of the International Conference on Distributed Computing in Sensor Systems and Workshops,
Barcelona, Spain, 27–29 June 2011; pp. 1–6.
9. Mohan, P.; Padmanabhan, V.N.; Ramjee, R. Nericell: Rich monitoring of road and traffic conditions using mobile smart-
phones. In Proceedings of the 6th ACM Conference on Embedded Network Sensor Systems, Raleigh, NC, USA, 5–7 November
2008; pp. 323–336.
10. Bhatt, U.; Mani, S.; Xi, E.; Kolter, J.Z. Intelligent pothole detection and road condition assessment. In Proceedings of the Bloomberg
Data for Good Exchange Conference, Chicago, IL, USA, 24 September 2017.
11. Nunes, D.E.; Mota, V.F. A participatory sensing framework to classify road surface quality. J. Internet Serv. Appl. 2019, 10, 13.
[CrossRef]
12. Allouch, A.; Koubâa, A.; Abbes, T.; Ammar, A. Roadsense: Smartphone application to estimate road conditions using accelerome-
ter and gyroscope. IEEE Sens. J. 2017, 17, 4231–4238. [CrossRef]
13. Chen, K.; Tan, G.; Lu, M.; Wu, J. CRSM: A practical crowdsourcing-based road surface monitoring system. Wirel. Netw. 2016,
22, 765–779. [CrossRef]
14. Jang, J.; Yang, Y.; Smyth, A.W.; Cavalcanti, D.; Kumar, R. Framework of data acquisition and integration for the detection of
pavement distress via multiple vehicles. J. Comput. Civ. Eng. 2017, 31, 04016052. [CrossRef]
15. Kyriakou, C.; Christodoulou, S.E.; Dimitriou, L. Smartphone-Based pothole detection utilizing artificial neural networks. J.
Infrastruct. Syst. 2019, 25, 04019019. [CrossRef]
16. Singh, G.; Bansal, D.; Sofat, S.; Aggarwal, N. Smart patrolling: An efficient road surface monitoring using smartphone sensors
and crowdsourcing. Pervasive Mob. Comput. 2017, 40, 71–88. [CrossRef]
17. Celaya-Padilla, J.M.; Galván-Tejada, C.E.; López-Monteagudo, F.E.; Alonso-González, O.; Moreno-Báez, A.; Martínez-Torteya, A.;
Galván-Tejada, J.I.; Arceo-Olague, J.G.; Luna-García, H.; Gamboa-Rosales, H. Speed bump detection using Accelerometric
features: A genetic algorithm approach. Sensors 2018, 18, 443. [CrossRef] [PubMed]
18. Li, S.; Yuan, C.; Liu, D.; Cai, H. Integrated processing of image and GPR data for automated pothole detection. J. Comput. Civ.
Eng. 2016, 30, 04016015. [CrossRef]
19. Chang, K.; Chang, J.; Liu, J. Detection of pavement distresses using 3D laser scanning technology. In Proceedings of the
International Conference on Computing in Civil Engineering, Cancun, Mexico, 12–15 July 2005.
20. Li, Q.; Yao, M.; Yao, X.; Xu, B. A real-time 3D scanning system for pavement distortion inspection. Meas. Sci. Technol. 2009,
21, 015702. [CrossRef]
21. Bitelli, G.; Simone, A.; Girardi, F.; Lantieri, C. Laser scanning on road pavements: A new approach for characterizing surface
texture. Sensors 2012, 12, 9110–9128. [CrossRef]
22. Gui, R.; Xu, X.; Zhang, D.; Lin, H.; Pu, F.; He, L.; Cao, M. A component decomposition model for 3D laser scanning pavement
data based on high-pass filtering and sparse analysis. Sensors 2018, 18, 2294. [CrossRef]
23. Koch, C.; Brilakis, I. Pothole detection in asphalt pavement images. Adv. Eng. Inform. 2011, 25, 507–515. [CrossRef]
24. Jo, Y.; Ryu, S. Pothole detection system using a black-box camera. Sensors 2015, 15, 29316–29331. [CrossRef]
25. Jog, G.M.; Koch, C.; Golparvar-Fard, M.; Brilakis, I. Pothole properties measurement through visual 2D recognition and 3D
reconstruction. In Proceedings of the ASCE International Conference on Computing in Civil Engineering, Clearwater Beach, FL,
USA, 17–20 June 2012; pp. 553–560.
26. Koch, C.; Georgieva, K.; Kasireddy, V.; Akinci, B.; Fieguth, P. A review on computer vision based defect detection and condition
assessment of concrete and asphalt civil infrastructure. Adv. Eng. Inform. 2015, 29, 196–210. [CrossRef]
27. Mahmoudzadeh, A.; Golroo, A.; Jahanshahi, M.R.; Firoozi Yeganeh, S. Estimating pavement roughness by fusing color and depth
data obtained from an inexpensive RGB-D sensor. Sensors 2019, 19, 1655. [CrossRef]
28. Chun, C.; Ryu, S.K. Road surface damage detection using fully convolutional neural networks and semi-supervised learning.
Sensors 2019, 19, 5501. [CrossRef] [PubMed]
29. Han, W.; Wu, C.; Zhang, X.; Sun, M.; Min, G. Speech enhancement based on improved deep neural networks with MMSE
pretreatment features. In Proceedings of the 13th IEEE International Conference on Signal Processing (ICSP), Chengdu, China,
6–10 November 2016. [CrossRef]
30. Kingma, D.P.; Ba, J.L. ADAM: A method for stochastic optimization. In Proceedings of the 3rd International Conference on
Learning Representations (ICLR), San Diego, CA, USA, 7–9 May 2015; pp. 1–15.