Multifocus Image Fusion Using Convolutional Neural Networks in the Discrete Wavelet Transform Domain

Abstract

In this paper, a novel multifocus image fusion algorithm based on a convolutional neural network (CNN) in the discrete wavelet transform (DWT) domain is proposed. The algorithm combines the advantages of spatial-domain and transform-domain methods. The CNN is used to amplify features and generate separate decision maps for the individual frequency subbands, rather than for image blocks or entire source images. In addition, the CNN, which can be viewed as an adaptive fusion rule, replaces traditional fusion rules. The proposed algorithm proceeds as follows: first, each source image is decomposed into one low-frequency subband and several high-frequency subbands using the DWT; second, these subbands are fed into the CNN to generate weight maps; third, each weight map is refined into a more accurate decision map by a series of postprocessing operations, including the sum-modified-Laplacian (SML) and the guided filter (GF); fourth, the frequency subbands are fused according to their decision maps; finally, the fused image is obtained by applying the inverse DWT. Experimental results show that the proposed algorithm outperforms several existing fusion algorithms.
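
To illustrate how the steps in the abstract fit together, the following Python sketch performs a single-level DWT fusion of two co-registered grayscale images. It is a minimal sketch, not the authors' implementation: cnn_weight_map is a hypothetical placeholder for the paper's CNN (the abstract gives no architecture), the guided-filter refinement is omitted, and the wavelet choice, window sizes, and ambiguity threshold are assumptions.

```python
import numpy as np
import pywt
from scipy.ndimage import convolve, uniform_filter


def cnn_weight_map(subband_a, subband_b):
    """Hypothetical stand-in for the paper's CNN.

    The abstract does not specify the network, so this placeholder returns a
    local-energy ratio purely so the sketch runs end to end.
    """
    energy_a = uniform_filter(subband_a ** 2, size=7)
    energy_b = uniform_filter(subband_b ** 2, size=7)
    return energy_a / (energy_a + energy_b + 1e-12)


def sml(x, step=1, window=3):
    """Sum-modified-Laplacian focus measure of a subband."""
    kernel_h = np.zeros((1, 2 * step + 1))
    kernel_h[0, 0] = kernel_h[0, -1] = -1.0
    kernel_h[0, step] = 2.0
    modified_laplacian = np.abs(convolve(x, kernel_h)) + np.abs(convolve(x, kernel_h.T))
    return uniform_filter(modified_laplacian, size=window)


def fuse_subband(sa, sb):
    """Fuse one pair of subbands via a CNN weight map refined by SML.

    The guided-filter step of the paper is omitted here; the SML comparison
    alone settles pixels where the CNN weight map is ambiguous.
    """
    weight = cnn_weight_map(sa, sb)
    decision = np.where(np.abs(weight - 0.5) < 0.1, sml(sa) >= sml(sb), weight > 0.5)
    return np.where(decision, sa, sb)


def fuse_images(img_a, img_b, wavelet="db2"):
    """Single-level DWT fusion: decompose, fuse each subband, invert the DWT."""
    ca, highs_a = pywt.dwt2(img_a.astype(np.float64), wavelet)
    cb, highs_b = pywt.dwt2(img_b.astype(np.float64), wavelet)
    fused_low = fuse_subband(ca, cb)
    fused_highs = tuple(fuse_subband(ha, hb) for ha, hb in zip(highs_a, highs_b))
    return pywt.idwt2((fused_low, fused_highs), wavelet)
```

In this sketch the decision map is binary, so each fused coefficient is copied from whichever source subband is judged more in focus; a weighted-average rule for the low-frequency subband would be an equally plausible reading of the abstract.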

Publication
Multimedia Tools and Applications