Abstract
Deep learning has made tremendous advances in image segmentation and object classification. However, real-time lane line detection and departure estimation under complex traffic conditions remain difficult problems in autonomous driving research. Traditional lane line detection methods require manual parameter tuning and are still susceptible to interference from occluding objects, lighting changes, and pavement deterioration, so developing accurate lane line detection and departure estimation algorithms remains a challenge. This article investigates a convolutional neural network (CNN) for lane line detection and departure estimation in complicated road environments. CNNs share weights across spatial locations, which reduces the number of training parameters, and they are widely used to learn and extract features for image segmentation, object detection, classification, and other applications. Based on the characteristics of lane line detection and departure estimation, the symmetric kernel convolutions of a classical CNN are replaced with asymmetric kernel convolutions (AK-CNN), which reduces the network's computational load and increases the speed of lane line detection and departure estimation. Experiments were carried out on the CULane dataset: the detector achieves 80.3% accuracy in complex environments at a detection speed of 84.5 fps, enabling real-time lane line detection.
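The computational saving behind asymmetric kernels can be illustrated outside the paper's network: a separable k × k kernel factors into a k × 1 column filter followed by a 1 × k row filter, so each output pixel costs roughly 2k multiplies instead of k². A minimal NumPy sketch of this idea (illustrative only; the kernel, image size, and naive convolution loop here are assumptions, not the authors' AK-CNN implementation):

```python
import numpy as np

def conv2d(img, kernel):
    """Naive 'valid'-mode 2-D cross-correlation."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# A separable k x k kernel is the outer product of a column and a row
# factor; a 2-D box blur is the classic example.
k = 5
col = np.ones((k, 1)) / k          # k x 1 vertical factor
row = np.ones((1, k)) / k          # 1 x k horizontal factor
full = col @ row                   # equivalent k x k kernel

img = np.random.rand(20, 30)
one_pass = conv2d(img, full)                 # ~k*k multiplies per pixel
two_pass = conv2d(conv2d(img, row), col)     # ~2*k multiplies per pixel
assert np.allclose(one_pass, two_pass)
```

For strictly separable kernels the two results match exactly; in a learned network the 1 × k and k × 1 layers are trained directly, trading a small loss of expressiveness per layer for fewer parameters and faster inference.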
Acknowledgements
This work was funded by the Key Program for Sichuan Science and Technology, China (grant numbers 2019YFH0097 and 2020YFG0353).
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Haris, M., Hou, J. & Wang, X. Lane line detection and departure estimation in a complex environment by using an asymmetric kernel convolution algorithm. Vis Comput 39, 519–538 (2023). https://doi.org/10.1007/s00371-021-02353-6