This paper proposes a novel medical image fusion algorithm that enhances image
quality and preserves input details. Using Gaussian and rolling guidance filters,
images are decomposed into five components: two high-detail (HDCs), two low-
detail (LDCs), and a base component (BC). A modified VGG19 (MVGG-19)
network, adapted via transfer learning, is employed for classifying input modalities
(MRI, CT, PET, SPECT) and establishing fusion rules for HDCs and LDCs. For
BCs, a coupled neural P system (CNPS) defines fusion rules. Experimental results
demonstrate superior image quality and effective information retention compared
to seven advanced algorithms.
1. Introduction
2. Background
- Gradient features (first-order directional derivatives) accurately reflect the
perceptually important textures, edges, and geometric structures of organs, tissues,
and fibers.
- The structure tensor is an effective tool for analyzing local gradient features.
- Both are used for image segmentation, denoising, and fusion in medical imaging.
- Advantages of CNPS: handles large datasets, is flexible to adapt and extend,
copes with high-resolution medical images, and its integrated neural processing
yields better detection and segmentation outcomes.
- Combines the Gaussian filter (GF) and rolling guidance filter (RGF) to decompose
an image into five components: one base component, two high-detail components, and
two low-detail components.
- Preserves edges and fine details while reducing noise, making it suitable for
tasks such as medical image fusion, object recognition, and denoising, and offers
the computational efficiency and adaptability needed for real-time applications.
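The GF-RGF decomposition above can be sketched as follows. This is a minimal illustration, not the paper's exact scheme: the rolling guidance filter is approximated by iterative Gaussian smoothing (the paper uses joint filtering with the evolving guidance), and the particular assignment of scales to the five components is an assumption chosen so that the components sum back to the input.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def rolling_guidance(img, sigma=2.0, iters=4):
    # Simplified rolling guidance filter: iterative Gaussian smoothing that
    # removes small structures (a cheap stand-in for the paper's RGF).
    guide = gaussian_filter(img, sigma)
    for _ in range(iters):
        guide = gaussian_filter(0.5 * (img + guide), sigma)
    return guide

def gf_rgf_decompose(img, sigma=1.0):
    # One plausible five-component split; the exact GF-RGF scheme is defined
    # in the paper. By construction the five components sum back to the input.
    g  = gaussian_filter(img, sigma)        # GF pass: removes fine detail
    r  = rolling_guidance(img, 2 * sigma)   # RGF pass: edge-aware coarse layer
    b1 = gaussian_filter(r, 2 * sigma)
    bc = gaussian_filter(b1, 4 * sigma)     # base component (BC)
    hdc1, hdc2 = img - g, g - r             # two high-detail components
    ldc1, ldc2 = r - b1, b1 - bc            # two low-detail components
    return hdc1, hdc2, ldc1, ldc2, bc
```

Because each layer is a difference of successive smoothings, adding the five components exactly reconstructs the original image, which is what lets the fused components be recombined later.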
3.2 A feature enrichment method based on the LE function and STS operator
- MVGG-19 Network Features: The features extracted from the MVGG-19 network (pool1 to
pool5) are of lower quality and appear indistinct or blurred, making them challenging to analyze
effectively using the STS operator.
- STS Operator Limitation: The STS operator is ineffective at detecting faint or small features
in the MVGG-19 extracted features.
- Proposed Solution: A new feature enrichment method is introduced by combining the local
energy function with the STS operator to improve feature detection.
- Local Energy Function: The local energy (LE) is calculated using a sliding window over the
high-frequency components of the image, summing the squares of coefficient values within the
window.
- Mathematical Integration: The integration of local energy with the STS operator is expressed
as the Hadamard product, denoted as LE_STS = S(I) ∘ LE(I), where S is the STS operator.
- Improved Detection: The LE_STS method significantly enhances the detection of weak and
minute features, as seen in the comparison of STS(Wi) and LE_STS(Wi) feature images.
This method helps enhance medical image synthesis by improving the representation of high-
frequency features.
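The enrichment step can be sketched as below. The window size, the trace-based saliency, and the Sobel gradients are assumptions for illustration; the paper's exact STS operator may weight the structure-tensor eigenvalues differently.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel, uniform_filter

def local_energy(img, win=3):
    # LE: sliding-window sum of squared coefficients (a mean filter scaled
    # by the window area equals the windowed sum of squares).
    return uniform_filter(img ** 2, size=win) * (win * win)

def sts_saliency(img, sigma=1.0):
    # A common structure-tensor saliency: smooth the gradient outer product
    # and take the tensor trace Jxx + Jyy.
    ix, iy = sobel(img, axis=1), sobel(img, axis=0)
    jxx = gaussian_filter(ix * ix, sigma)
    jyy = gaussian_filter(iy * iy, sigma)
    return jxx + jyy

def le_sts(img, win=3, sigma=1.0):
    # LE_STS = S(I) ∘ LE(I): element-wise (Hadamard) product, boosting
    # weak or minute features that the STS operator alone misses.
    return sts_saliency(img, sigma) * local_energy(img, win)
```

Multiplying the two maps means a pixel is salient only where both the gradient structure and the local high-frequency energy agree, which is why faint features survive better than under STS alone.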
The proposed method, FR_CNPS, uses CNPSs to fuse base components while
maintaining brightness and contrast.
CNPS1 and CNPS2 are two CNPSs with local topology that take the base components
of input images (I1, I2) as external inputs.
Excitation Matrices (C1 and C2):
o C1 and C2 represent the number of times neurons fire in CNPS1 and CNPS2.
Fusion Rule for Base Components: --- in paper ---
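The notes defer the exact FR_CNPS rule to the paper, so the sketch below is a hypothetical stand-in based on a firing-count rule common in pulse-coupled/neural P system fusion: each base coefficient is taken from the image whose neuron fired more times, with ties averaged. The function name and tie-handling are assumptions.

```python
import numpy as np

def fr_cnps_base(b1, b2, c1, c2):
    # Hypothetical base-component fusion: pick each coefficient from the
    # input whose CNPS neuron fired more times (C1 vs C2); average on ties.
    fused = np.where(c1 > c2, b1, b2).astype(float)
    tie = c1 == c2
    fused[tie] = 0.5 * (b1[tie] + b2[tie])
    return fused
```

Deciding per pixel from the excitation matrices keeps the fused base close to whichever input carried the stronger response, which is consistent with the stated goal of preserving brightness and contrast.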
3.5 The proposed algorithms
Input Transformation: The input color image (IP) is converted into the YUV color model.
Decomposition (GF-RGF Algorithm): The image (IM) and the Y-channel of IP are each
decomposed into five components (two HDCs, two LDCs, and a BC).
The HDCs and LDCs are fused using the FR_MVGG_19 method.
The BCs are fused using the FR_CNPS method.
Combination: The fused components (HDCs, LDCs, BCs) are combined to generate the
fused image, FGray.
Final Transformation: The FGray image is transformed back into the RGB color space
using the FGray, U, and V channels.
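The color round-trip in the steps above can be sketched as follows. The notes only say "YUV color model", so the BT.601 matrix is an assumption, and `fuse_gray` is a placeholder for the full grayscale pipeline (GF-RGF decomposition, FR_MVGG_19 for HDCs/LDCs, FR_CNPS for BCs).

```python
import numpy as np

# BT.601 RGB->YUV matrix (assumed variant; the notes just say "YUV").
RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                    [-0.147, -0.289,  0.436],
                    [ 0.615, -0.515, -0.100]])
YUV2RGB = np.linalg.inv(RGB2YUV)

def fuse_color(ip_rgb, im_gray, fuse_gray):
    # fuse_gray stands in for the full pipeline: it maps two grayscale
    # images (the Y channel of IP and the image IM) to the fused FGray.
    yuv = ip_rgb @ RGB2YUV.T                     # IP -> Y, U, V
    f_gray = fuse_gray(yuv[..., 0], im_gray)     # fuse Y channel with IM
    fyuv = np.stack([f_gray, yuv[..., 1], yuv[..., 2]], axis=-1)
    return fyuv @ YUV2RGB.T                      # FGray + U, V -> RGB
```

Fusing only the luminance channel and reusing IP's U and V is what lets the method keep the color information of the functional image intact, consistent with the color-preservation results reported later.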
4.2.1 Datasets
- 64 pairs of MRI and SPECT images from Groups 1, 2, and 3 in Table 1 were used
to evaluate the proposed image synthesis method.
- Group 4 contains 1424 images from four different types of imaging techniques
(MRI, CT, PET, and SPECT), which were utilized to construct the MVGG-19
model.
Visual Evaluation:
The proposed method produces superior image quality in terms of brightness, contrast,
and sharpness compared to other fusion algorithms.
Small frame extractions show that the proposed method generates sharper images,
preserving both soft tissue (functional) and structural (skull) information.
Color preservation is superior in the proposed method, whereas other fusion methods
result in color degradation or distortion.
Quantitative Evaluation:
Conclusion:
The proposed method excels in both image quality and information preservation,
outperforming other image fusion methods across all evaluation metrics.
Computation Cost Overview: The computation cost is used to evaluate algorithm efficiency.
Algorithm A1:
Algorithm A3:
Proposed Method:
Execution times: 2.0538s (Group 1), 2.1575s (Group 2), 2.1221s (Group 3).
Ranked 4th in computational cost compared to seven other algorithms.
5. Conclusion
Study Focus:
Proposed Algorithms:
Algorithm 1 (Image Decomposition): Divides the input image into five components:
two high-detail components (HDCs), two low-detail components (LDCs), and a base
component (BC). This decomposition enhances synthesis efficiency.
Algorithm 2 (Fusion Method for HDCs and LDCs): Based on the FR_MVGG_19
method, this algorithm preserves fine details. A feature enrichment technique, combining
local energy function and STS operator, is introduced to improve this method’s
efficiency.
Algorithm 3 (Fusion Rule for BC): Based on CNPS, this fusion rule prevents image
quality degradation during synthesis.
Experimental Evaluation:
Execution Time:
The proposed method had a mean execution time (MET) of about 2.05–2.16 seconds
across the datasets, remaining competitive with the other methods.