
Light field depth estimation using occlusion-aware consistency analysis

  • Original article
  • Published in The Visual Computer

Abstract

Occlusion modeling is critical for light field depth estimation, since occlusion breaks the photo-consistency assumption on which most depth estimation methods rely. Previous works typically detect occlusion points using the Canny edge detector, which can miss some occlusion points. Occlusion handling, especially with multiple occluders, remains challenging. In this paper, we propose a novel occlusion-aware depth estimation method that better resolves the occlusion problem. We design two novel consistency costs based on photo-consistency for depth estimation. Using these consistency costs, we analyze the influence of occlusion and propose an occlusion detection technique based on depth consistency, which detects occlusion points more accurately. For each occlusion point, we adopt a new data cost to select the un-occluded views, which are then used to determine the depth. Experimental results demonstrate that the proposed method outperforms the compared algorithms, especially under multi-occluder occlusion.
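To illustrate the photo-consistency assumption the abstract refers to, and why occluded views must be excluded before measuring it, here is a minimal NumPy sketch. This is not the authors' implementation: the cost functions and the 50% view-selection heuristic are illustrative assumptions, standing in for the paper's consistency costs and un-occluded view selection.

```python
import numpy as np

def photo_consistency_cost(angular_samples, center_value):
    # Photo-consistency: an un-occluded scene point should look the same in
    # every angular view, so low variance around the center view's intensity
    # indicates a good depth hypothesis.
    return float(np.mean((angular_samples - center_value) ** 2))

def occlusion_aware_cost(angular_samples, center_value, keep_ratio=0.5):
    # Keep only the views whose intensity is closest to the center view;
    # occluded views (which see the occluder, not the scene point) are
    # dropped before the cost is computed.
    k = max(1, int(len(angular_samples) * keep_ratio))
    order = np.argsort(np.abs(angular_samples - center_value))
    kept = angular_samples[order[:k]]
    return float(np.mean((kept - center_value) ** 2))

# Toy example: a point seen at intensity ~0.8 in half the angular views,
# while an occluder (intensity ~0.2) blocks the other half.
samples = np.array([0.79, 0.81, 0.80, 0.20, 0.22, 0.18])
naive = photo_consistency_cost(samples, 0.80)   # inflated by occluded views
robust = occlusion_aware_cost(samples, 0.80)    # near zero at the true depth
```

At the correct depth hypothesis the occlusion-aware cost stays near zero while the naive cost is dominated by the occluded views, which is why naive photo-consistency mislabels the depth at occlusion boundaries.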




Data availability

The datasets analyzed during the current study are available in the repository at https://github.com/chshin10/resources.


Acknowledgements

This work was supported by the National Key Research and Development Project (Grant No. 2018AAA0100802) and by the Opening Foundation of the National Engineering Laboratory for Intelligent Video Analysis and Application.

Author information


Corresponding author

Correspondence to Fuqing Duan.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Wang, X., Chao, W., Wang, L. et al. Light field depth estimation using occlusion-aware consistency analysis. Vis Comput 39, 3441–3454 (2023). https://doi.org/10.1007/s00371-023-03027-1

