*Result*: Multimodal Medical Image Fusion based on Variation Model Decomposition and Convolutional Neural Networks.
*Further Information*
*Medical images can be categorized into anatomical and functional modalities on the basis of the type of information they convey. Anatomical modalities delineate morphological structures precisely and with high spatial resolution, but they cannot capture functional or metabolic information. Functional images typically have low resolution, which makes accurate identification of anatomical structures and precise localization difficult; however, they reflect the functional and metabolic state of organs and therefore cannot be replaced by anatomical images. Several algorithms have been developed to address the problem of detail loss in multimodal medical image fusion, yet many current methods still lose fine details during image decomposition, transformation, and reconstruction. This study presents a multimodal medical image fusion approach based on a variation model and convolutional neural networks to overcome this shortcoming. The proposed method first decomposes the multimodal images into their cartoon and texture components using a variation model. Next, perceptual images, which record fine details, are extracted using convolutional neural networks. Finally, the cartoon, texture, and perceptual images are fused using three separate methods. According to the experimental results, the proposed method outperforms the reference techniques, especially in terms of color fidelity and fusion quality. [ABSTRACT FROM AUTHOR]
Copyright of International Journal of Pattern Recognition & Artificial Intelligence is the property of World Scientific Publishing Company and its content may not be copied or emailed to multiple sites without the copyright holder's express written permission. Additionally, content may not be used with any artificial intelligence tools or machine learning technologies. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)*
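The cartoon–texture decomposition described in the abstract can be illustrated with a minimal sketch. The code below is an assumption-laden stand-in, not the authors' method: it approximates the variation-model decomposition with plain gradient-descent total-variation smoothing, omits the CNN perceptual-image step entirely, and uses simple placeholder fusion rules (cartoon averaging, max-absolute texture selection) in place of the paper's three fusion methods. All function names (`tv_cartoon`, `decompose`, `fuse`) and parameters are hypothetical.

```python
import numpy as np

def tv_cartoon(img, n_iter=50, step=0.05, weight=0.2):
    # Approximate the cartoon component by gradient descent on
    # TV(u) + (weight/2) * ||u - img||^2  (stand-in for the paper's
    # variation model; parameters are illustrative assumptions).
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Forward differences (periodic boundary via np.roll).
        gx = np.roll(u, -1, axis=1) - u
        gy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(gx**2 + gy**2) + 1e-8
        # Divergence of the normalized gradient (curvature term).
        px, py = gx / mag, gy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u += step * (div - weight * (u - img))
    return u

def decompose(img):
    # Split an image into a smooth cartoon and an oscillatory texture
    # residual, so that cartoon + texture reconstructs the input exactly.
    cartoon = tv_cartoon(img)
    texture = img - cartoon
    return cartoon, texture

def fuse(img_a, img_b):
    # Placeholder fusion rules: average the cartoons, keep the
    # larger-magnitude texture coefficient at each pixel.
    ca, ta = decompose(img_a)
    cb, tb = decompose(img_b)
    fused_cartoon = 0.5 * (ca + cb)
    fused_texture = np.where(np.abs(ta) >= np.abs(tb), ta, tb)
    return fused_cartoon + fused_texture
```

In practice the two inputs would be co-registered anatomical and functional images of the same scene; this sketch only shows where each pipeline stage would slot in.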