
Adaptive enhanced infrared and visible image fusion using hybrid decomposition and coupled dictionary

  • Original Article
  • Published:
Neural Computing and Applications

Abstract

Infrared and visible image fusion aims to highlight prominent infrared targets while retaining as much valuable texture detail as possible. However, visible images are susceptible to the environment, especially low-illumination conditions, which can seriously degrade the quality of the fused image. To solve this problem, an adaptive enhanced infrared and visible image fusion algorithm based on a hybrid \(\ell_{1}\)-\(\ell_{0}\) layer decomposition model and a coupled dictionary is proposed (termed AEFusion). First, the visible image is adaptively enhanced according to its actual illumination conditions. Then a novel fusion scheme based on the coupled dictionary and the \(\ell_{1}\)-\(\ell_{0}\) pyramid is proposed to obtain a pre-fusion image. To further highlight significant information, the pre-fusion image is set as the benchmark for computing the weight map used to fuse the final detail layer. Qualitative and quantitative experimental results demonstrate that the proposed method is superior to 11 state-of-the-art image fusion methods, as AEFusion preserves more valuable texture information and more prominent infrared targets, which is beneficial to target detection and tracking tasks. Our code is publicly available at: https://github.com/VCMHE/IRfusion.
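The two-scale pipeline described in the abstract (layer decomposition, separate fusion rules for base and detail layers, and a weight map favouring salient detail) can be sketched as follows. This is a minimal illustrative sketch, not the authors' AEFusion: a separable box filter stands in for the hybrid \(\ell_{1}\)-\(\ell_{0}\) layer decomposition, and a max-absolute rule stands in for the coupled-dictionary weight map; the function names `box_blur` and `fuse` are ours, not from the paper.

```python
import numpy as np

def box_blur(img, k=7):
    """Separable box filter, used here only as a stand-in for the
    paper's hybrid l1-l0 layer decomposition (base = blur, detail = residual)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    kernel = np.ones(k) / k
    # horizontal pass over rows, then vertical pass over columns
    tmp = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, tmp)

def fuse(ir, vis):
    """Two-scale fusion: average the base layers, pick the detail layer
    with the larger local magnitude (a crude saliency weight map)."""
    base_ir, base_vis = box_blur(ir), box_blur(vis)
    det_ir, det_vis = ir - base_ir, vis - base_vis
    fused_base = 0.5 * (base_ir + base_vis)
    # weight map favours the source with stronger detail response
    fused_det = np.where(np.abs(det_ir) >= np.abs(det_vis), det_ir, det_vis)
    return np.clip(fused_base + fused_det, 0.0, 1.0)

# A bright 4x4 "hot target" in the IR image survives fusion with a smooth
# visible gradient, since the max-abs rule keeps the IR detail response.
ir = np.zeros((16, 16)); ir[4:8, 4:8] = 1.0
vis = np.linspace(0.0, 1.0, 256).reshape(16, 16)
fused = fuse(ir, vis)
```

In the actual method, the adaptive enhancement step precedes this decomposition and the weight map is derived from the pre-fusion image rather than a simple magnitude comparison.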


Data availability

The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.


Funding

This work was supported in part by the National Natural Science Foundation of China under Grants 62162068 and 61761049, in part by the Yunnan Province Ten Thousand Talents Program and Yunling Scholars Special Project under Grant YNWR-YLXZ-2018-022, in part by the Yunnan Provincial Science and Technology Department-Yunnan University "Double First Class" Construction Joint Fund Project under Grant 2019FY003012, in part by the Science Research Fund Project of Yunnan Provincial Department of Education under Grant 2021Y027, and in part by the Graduate Research and Innovation Foundation of Yunnan University under Grant 2021Y176.

Author information


Corresponding author

Correspondence to Kangjian He.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Yin, W., He, K., Xu, D. et al. Adaptive enhanced infrared and visible image fusion using hybrid decomposition and coupled dictionary. Neural Comput & Applic 34, 20831–20849 (2022). https://doi.org/10.1007/s00521-022-07559-w
