Abstract
Infrared and visible image fusion aims to highlight prominent infrared targets while retaining as much valuable texture detail as possible. However, visible images are susceptible to environmental conditions, especially low illumination, which seriously degrades the quality of the fused image. To solve this problem, an adaptive enhanced infrared and visible image fusion algorithm based on a hybrid \(\ell_{1}-\ell_{0}\) layer decomposition model and a coupled dictionary is proposed (termed AEFusion). First, the visible image is adaptively enhanced according to its illumination conditions. Then, a novel fusion scheme based on the coupled dictionary and the \(\ell_{1}-\ell_{0}\) pyramid is proposed to obtain a pre-fusion image. To further highlight salient information, the pre-fusion image serves as the benchmark for computing the weight map used to fuse the final detail layer. Qualitative and quantitative experimental results demonstrate that the proposed method outperforms 11 state-of-the-art image fusion methods: AEFusion preserves more valuable texture information and more prominent infrared targets, which benefits target detection and tracking tasks. Our code is publicly available at: https://github.com/VCMHE/IRfusion.
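To make the pipeline described above concrete, the following is a minimal sketch of its overall structure only, under simplified assumptions: gamma correction stands in for the adaptive enhancement, Gaussian smoothing stands in for the hybrid \(\ell_{1}-\ell_{0}\) layer decomposition, and a max-absolute rule stands in for the coupled-dictionary sparse fusion. None of these stand-ins reproduce the paper's actual optimization; see the authors' repository for the real implementation.

```python
# Conceptual sketch of the AEFusion pipeline structure (not the authors' method).
import numpy as np
from scipy.ndimage import gaussian_filter


def enhance_visible(vis, gamma=0.7):
    """Brighten a low-illumination visible image (stand-in for adaptive enhancement)."""
    return np.clip(vis, 0.0, 1.0) ** gamma


def decompose(img, sigma=3.0):
    """Split an image into a smooth base layer and a detail layer
    (stand-in for the hybrid l1-l0 layer decomposition)."""
    base = gaussian_filter(img, sigma)
    return base, img - base


def fuse(ir, vis):
    ir = ir.astype(np.float64)
    vis = enhance_visible(vis.astype(np.float64))

    ir_base, ir_detail = decompose(ir)
    vis_base, vis_detail = decompose(vis)

    # Pre-fusion image: averaged base layers plus max-absolute detail selection
    # (stand-in for the coupled-dictionary / l1-l0 pyramid pre-fusion step).
    pre_detail = np.where(np.abs(ir_detail) > np.abs(vis_detail), ir_detail, vis_detail)
    pre_fused = 0.5 * (ir_base + vis_base) + pre_detail

    # Weight map derived from the pre-fusion image, used to fuse the final detail layer.
    _, pre_detail_ref = decompose(pre_fused)
    w = np.abs(pre_detail_ref) / (np.abs(pre_detail_ref).max() + 1e-12)
    fused_detail = w * ir_detail + (1.0 - w) * vis_detail

    # Keep bright infrared targets in the base layer.
    fused_base = np.maximum(ir_base, vis_base)
    return np.clip(fused_base + fused_detail, 0.0, 1.0)


if __name__ == "__main__":
    ir = np.random.rand(128, 128)   # placeholder infrared image in [0, 1]
    vis = np.random.rand(128, 128)  # placeholder visible image in [0, 1]
    print(fuse(ir, vis).shape)
```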
Data availability
The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
Funding
This work was supported in part by the National Natural Science Foundation of China under Grants 62162068 and 61761049, in part by the Yunnan Province Ten Thousand Talents Program and Yunling Scholars Special Project under Grant YNWR-YLXZ-2018-022, in part by the Yunnan Provincial Science and Technology Department-Yunnan University "Double First Class" Construction Joint Fund Project under Grant 2019FY003012, in part by the Science Research Fund Project of Yunnan Provincial Department of Education under Grant 2021Y027, and in part by the Graduate Research and Innovation Foundation of Yunnan University under Grant No. 2021Y176.
Author information
Authors and Affiliations
Corresponding author
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Yin, W., He, K., Xu, D. et al. Adaptive enhanced infrared and visible image fusion using hybrid decomposition and coupled dictionary. Neural Comput & Applic 34, 20831–20849 (2022). https://doi.org/10.1007/s00521-022-07559-w