The purpose of infrared and visible image fusion is to create a single image containing the texture details and significant object information of the source images, particularly in challenging environments. However, existing image fusion algorithms are generally suited to normal scenes. In hazy scenes, much of the texture information in the visible image is hidden, and the results of existing methods are dominated by infrared information, leading to a lack of texture detail and poor visual quality. To address these difficulties, we propose a haze-free infrared and visible fusion method, termed HaIVFusion, which can eliminate the influence of haze and recover richer texture information in the fused image. Specifically, we first design a scene information restoration network (SIRNet) to mine the texture information masked in visible images. Then, a denoising fusion network (DFNet) is designed to integrate the features extracted from infrared and visible images and to remove the influence of residual noise as far as possible. In addition, we use a color consistency loss to reduce the color distortion caused by haze. Furthermore, we publish a dataset of hazy scenes for infrared and visible image fusion to promote research in extreme scenes. Extensive experiments show that HaIVFusion produces fused images with richer texture details and higher contrast in hazy scenes, and achieves better quantitative results than state-of-the-art image fusion methods, even when those are combined with state-of-the-art dehazing methods.
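The abstract does not give the form of its color consistency loss; a common choice for suppressing haze-induced color casts is an angular (cosine) penalty between per-pixel RGB vectors, sketched below with PyTorch. The function name and the use of a haze-free reference image are illustrative assumptions, not the paper's formulation.

```python
# A minimal sketch of an angular color-consistency loss of the kind the
# abstract describes; the cosine form between per-pixel RGB vectors is an
# assumption, not the paper's exact definition.
import torch
import torch.nn.functional as F

def color_consistency_loss(fused: torch.Tensor, reference: torch.Tensor) -> torch.Tensor:
    """fused, reference: (B, 3, H, W) RGB tensors in [0, 1].

    Penalizes the angle between corresponding RGB vectors, which is
    insensitive to brightness but sensitive to hue shifts such as the
    gray cast introduced by haze.
    """
    # Cosine similarity along the channel axis, one value per pixel.
    cos = F.cosine_similarity(fused + 1e-6, reference + 1e-6, dim=1)
    return (1.0 - cos).mean()

if __name__ == "__main__":
    f = torch.rand(2, 3, 64, 64)
    r = torch.rand(2, 3, 64, 64)
    print(color_consistency_loss(f, r))  # scalar loss value
```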
Images with complementary spectral information can be recorded by image sensors that capture both the visible and near-infrared spectra. Visible and near-infrared (NIR) fusion aims to enhance the quality of images acquired by video monitoring systems for the ease of user observation and data processing. Unfortunately, current fusion algorithms produce artefacts and colour distortion because they cannot exploit spectral properties and lack information complementarity. Therefore, an information complementarity fusion (ICF) model is designed based on physical signals. To separate high-frequency noise from important information in distinct frequency layers, the authors first extract texture-scale and edge-scale layers using a two-scale filter. Second, the difference map between the visible and near-infrared images is filtered with an extended-DoG filter to produce the initial visible-NIR complementary weight map. Then, the near-infrared image with night adjustment is processed to generate a guide map. The final complementarity weight map is subsequently derived from the guide map and the initial weight map via an arctan function mapping. Finally, fusion images are generated with the complementarity weight maps. The experimental results demonstrate that the proposed approach outperforms the state-of-the-art both in avoiding artificial colours and in effectively utilising information complementarity.
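As a rough illustration of the weight-map construction described above, the sketch below uses Gaussian filters as stand-ins for the paper's two-scale and extended-DoG filters; all parameter values are assumptions rather than the paper's settings.

```python
# A sketch of the two-scale split and the DoG-filtered difference map that
# seeds the visible-NIR complementary weight; filters and sigmas here are
# illustrative stand-ins for the paper's two-scale and extended-DoG filters.
import numpy as np
from scipy.ndimage import gaussian_filter

def two_scale_layers(img, sigma_texture=2.0, sigma_edge=8.0):
    """Split an image into texture-scale, edge-scale and base layers."""
    base = gaussian_filter(img, sigma_edge)
    mid = gaussian_filter(img, sigma_texture)
    return img - mid, mid - base, base  # texture, edge, base

def initial_weight_map(vis, nir, s1=1.0, s2=4.0):
    """DoG-filtered VIS-NIR difference as an initial complementary weight."""
    diff = vis.astype(np.float64) - nir.astype(np.float64)
    dog = gaussian_filter(diff, s1) - gaussian_filter(diff, s2)
    w = np.abs(dog)
    return w / (w.max() + 1e-8)  # normalize to [0, 1]

if __name__ == "__main__":
    vis = np.random.rand(128, 128)
    nir = np.random.rand(128, 128)
    tex, edge, base = two_scale_layers(vis)
    w0 = initial_weight_map(vis, nir)
    fused = w0 * nir + (1 - w0) * vis  # simple weighted fusion for illustration
    print(fused.shape, float(w0.min()), float(w0.max()))
```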
Infrared and visible light image fusion integrates feature information from two different modalities into a fused image to obtain more comprehensive information. However, in low-light scenarios, the illumination degradation of visible light images makes it difficult for existing fusion methods to extract texture detail from the scene, and relying solely on the target saliency information provided by infrared images is far from sufficient. To address this challenge, this paper proposes a lightweight infrared and visible light image fusion method based on low-light enhancement, named LLE-Fuse. The method builds on the MobileOne Block, using an Edge-MobileOne Block embedded with the Sobel operator to perform feature extraction and downsampling on the source images. The intermediate features obtained at different scales are then fused by a cross-modal attention fusion module. In addition, the Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm is used to enhance both the infrared and visible light images, guiding the network to learn low-light enhancement capabilities through an enhancement loss. Upon completion of training, the Edge-MobileOne Block is optimized into a direct-connection structure similar to MobileNetV1 through structural reparameterization, effectively reducing computational resource consumption. Finally, in extensive experimental comparisons, our method achieved improvements of 4.6%, 40.5%, 156.9%, 9.2%, and 98.6% in the evaluation metrics Standard Deviation (SD), Visual Information Fidelity (VIF), Entropy (EN), and Spatial Frequency (SF), respectively, over the best results of the compared algorithms, while being only 1.5 ms/it slower than the fastest method.
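The CLAHE step is a standard OpenCV operation; a minimal sketch follows, with the clip limit and tile grid chosen arbitrarily rather than taken from the paper.

```python
# A small sketch of the CLAHE enhancement step described above, using
# OpenCV's implementation; clip limit and tile size are illustrative.
import cv2
import numpy as np

def clahe_enhance(gray: np.ndarray, clip=2.0, tiles=(8, 8)) -> np.ndarray:
    """Apply Contrast Limited Adaptive Histogram Equalization to an
    8-bit single-channel image (infrared, or the luminance of visible)."""
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=tiles)
    return clahe.apply(gray)

if __name__ == "__main__":
    ir = (np.random.rand(256, 256) * 255).astype(np.uint8)
    vis = (np.random.rand(256, 256) * 255).astype(np.uint8)
    ir_e, vis_e = clahe_enhance(ir), clahe_enhance(vis)
    # The enhanced pair would supervise the network via the enhancement loss.
    print(ir_e.dtype, vis_e.shape)
```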
High-intensity underground mining has caused severe ground fissures, resulting in environmental degradation. Prompt detection is therefore crucial to mitigating their environmental impact. However, accurate segmentation of fissures in the complex and variable scenes of visible imagery is a challenging problem. Our method, DeepFissureNets-Infrared-Visible (DFN-IV), highlights the potential of complementing visible images with infrared information for improved ground fissure segmentation. DFN-IV adopts a two-step process. First, a fusion network trained with a dual adversarial learning strategy fuses the infrared and visible images, providing an integrated representation of fissure targets that combines structural information with textural details. Second, the fused images are processed by a fine-tuned segmentation network, which leverages knowledge injection to learn the distinctive characteristics of fissure targets effectively. Furthermore, an infrared-visible ground fissure dataset (IVGF) is built from an aerial investigation of the Daliuta Coal Mine. Extensive experiments reveal that our approach provides superior accuracy over the single-modality image strategies employed in five segmentation models. Notably, DeeplabV3+ tested with DFN-IV improves pixel accuracy and Intersection over Union (IoU) by 9.7% and 11.13%, respectively, compared to using visible images alone. Moreover, our method surpasses six state-of-the-art image fusion methods, achieving a 5.28% improvement in pixel accuracy and a 1.57% increase in IoU compared to the second-best method. In addition, ablation studies further validate the significance of the dual adversarial learning module and the integrated knowledge injection strategy. By leveraging DFN-IV, we aim to quantify the impacts of mining-induced ground fissures, facilitating the implementation of intelligent safety measures.
Medical image fusion technology is crucial for improving the detection accuracy and treatment efficiency of diseases, but existing fusion methods suffer from blurred texture details, low contrast, and an inability to fully extract the information in the fused image. Therefore, a multimodal medical image fusion method based on mask optimization and a parallel attention mechanism is proposed to address these issues. Firstly, the whole image is converted into a binary mask, a contour feature map is constructed to maximize the contour information of the image, and a triple-path network is built for extracting and optimizing image texture detail features. Secondly, a contrast enhancement module and a detail preservation module are proposed to enhance the overall brightness and texture details of the image. Afterwards, a parallel attention mechanism is constructed using channel features and spatial feature changes to fuse the images and enhance the salient information of the fused image. Finally, a decoupling network composed of residual networks is set up to optimize the information between the fused image and the source images so as to reduce information loss in the fused image. Compared with nine advanced methods proposed in recent years, the seven objective evaluation indicators of our method improve by 6%−31%, indicating that this method obtains fusion results with clearer texture details, higher contrast, and smaller pixel differences between the fused image and the source images. It is superior to the comparison algorithms in both subjective and objective terms.
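A minimal sketch of the binary-mask and contour-feature-map construction is given below; the paper does not state its binarization rule, so Otsu thresholding is assumed here.

```python
# A sketch of the mask and contour-map construction described above; Otsu's
# method is an assumed stand-in for the paper's unspecified thresholding rule.
import cv2
import numpy as np

def contour_feature_map(gray):
    """Binarize the image and draw its contours as a feature map."""
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    contour_map = np.zeros_like(gray)
    cv2.drawContours(contour_map, contours, -1, color=255, thickness=1)
    return mask, contour_map

if __name__ == "__main__":
    img = (np.random.rand(128, 128) * 255).astype(np.uint8)
    mask, cmap = contour_feature_map(img)
    print(mask.shape, int(cmap.max()))
```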
Multimodal image fusion plays an important role in image analysis and applications. Multimodal medical image fusion helps to combine contrast features from two or more input imaging modalities to represent fused information in a single image. One of the critical clinical applications of medical image fusion is to fuse anatomical and functional modalities for rapid diagnosis of malignant tissues. This paper proposes a multimodal medical image fusion network (MMIF-Net) based on multiscale hybrid attention. The method first decomposes the original image to obtain the low-rank and significant parts. Then, to utilize features at different scales, we add a multiscale mechanism that uses three filters of different sizes to extract features in the encoder network. A hybrid attention module is also introduced to obtain more image details. Finally, the fused images are reconstructed by the decoder network. We conducted experiments with clinical brain computed tomography/magnetic resonance images. The experimental results show that the proposed multiscale hybrid attention network works better than other advanced fusion methods.
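The three-filter multiscale mechanism can be pictured as parallel convolution branches; the sketch below assumes kernel sizes 3/5/7 and channel concatenation, which the abstract does not specify.

```python
# A sketch of the three-filter multiscale mechanism; kernel sizes and the
# concatenation strategy are assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    """Extract features with three kernel sizes and concatenate them."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        branch = out_ch // 3
        self.b3 = nn.Conv2d(in_ch, branch, 3, padding=1)
        self.b5 = nn.Conv2d(in_ch, branch, 5, padding=2)
        self.b7 = nn.Conv2d(in_ch, out_ch - 2 * branch, 7, padding=3)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(torch.cat([self.b3(x), self.b5(x), self.b7(x)], dim=1))

if __name__ == "__main__":
    x = torch.randn(1, 1, 64, 64)            # e.g. one decomposed CT/MR part
    print(MultiScaleBlock(1, 48)(x).shape)   # -> (1, 48, 64, 64)
```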
Visible-infrared object detection leverages the day-night stable object perception capability of infrared images to enhance detection robustness in low-light environments by fusing the complementary information of visible and infrared images. However, the inherent differences in the imaging mechanisms of the visible and infrared modalities make effective cross-modal fusion challenging. Furthermore, constrained by the physical characteristics of sensors and thermal diffusion effects, infrared images generally suffer from blurred object contours and missing details, making it difficult to extract object features effectively. To address these issues, we propose an infrared-visible image fusion network that realizes multimodal information fusion of infrared and visible images through a carefully designed multiscale fusion strategy. First, we design an adaptive gray-radiance enhancement (AGRE) module to strengthen the detail representation in infrared images, improving their usability in complex lighting scenarios. Next, we introduce a channel-spatial feature interaction (CSFI) module, which achieves efficient complementarity between the RGB and infrared (IR) modalities via dynamic channel switching and a spatial attention mechanism. Finally, we propose a multi-scale enhanced cross-attention fusion (MSECA) module, which optimizes the fusion of multi-level features through dynamic convolution and gating mechanisms and captures long-range complementary relationships of cross-modal features on a global scale, thereby enhancing the expressiveness of the fused features. Experiments on the KAIST, M3FD, and FLIR datasets demonstrate that our method delivers outstanding performance in both daytime and nighttime scenarios. On the KAIST dataset, the miss rate drops to 5.99%, and further to 4.26% in night scenes. On the FLIR and M3FD datasets, it achieves AP50 scores of 79.4% and 88.9%, respectively.
The goal of infrared and visible image fusion (IVIF) is to integrate the unique advantages of both modalities to achieve a more comprehensive understanding of a scene. However, existing methods struggle to handle modal disparities effectively, resulting in visual degradation of the details and prominent targets in the fused images. To address these challenges, we introduce PromptFusion, a prompt-based approach that harmoniously combines multi-modality images under the guidance of semantic prompts. Firstly, to better characterize the features of different modalities, a contourlet autoencoder is designed to separate and extract the high-/low-frequency components of the different modalities, thereby improving the extraction of fine details and textures. We also introduce a prompt learning mechanism using positive and negative prompts, leveraging Vision-Language Models to improve the fusion model's understanding and identification of targets in multi-modality images, leading to improved performance in downstream tasks. Furthermore, we employ bi-level asymptotic convergence optimization. This approach simplifies the intricate non-singleton, non-convex bi-level problem into a series of convergent and differentiable single optimization problems that can be effectively resolved through gradient descent. Our approach advances the state of the art, delivering superior fusion quality and boosting the performance of related downstream tasks. Project page: https://github.com/hey-it-s-me/PromptFusion.
Infrared and visible image fusion technology integrates the thermal radiation information of infrared images with the texture details of visible images to generate more informative fused images. However, existing methods often fail to distinguish salient objects from background regions, leading to detail suppression in salient regions under global fusion strategies. This study presents a mask-guided latent low-rank representation fusion method to address this issue. First, the GrabCut algorithm is employed to extract a saliency mask that distinguishes salient regions from background regions. Then, latent low-rank representation (LatLRR) is applied to extract deep image features, enhancing key information extraction. In the fusion stage, a weighted fusion strategy strengthens infrared thermal information and visible texture details in salient regions, while an average fusion strategy improves background smoothness and stability. Experimental results on the TNO dataset demonstrate that the proposed method achieves superior performance in the SPI, MI, Qabf, PSNR, and EN metrics, effectively preserving salient target details while maintaining balanced background information. Compared to state-of-the-art fusion methods, our approach achieves more stable and visually consistent fusion results. The fusion code is available on GitHub at https://github.com/joyzhen1/Image (accessed on 15 January 2025).
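The mask-guided strategy can be illustrated compactly: GrabCut produces the saliency mask, salient pixels take a weighted rule, and background pixels an average rule. The rectangle initialization and the 0.6/0.4 weights below are assumptions, and the LatLRR feature extraction step is omitted for brevity.

```python
# A sketch of the mask-guided fusion: GrabCut yields a saliency mask,
# salient regions use a weighted rule, background regions an average rule.
import cv2
import numpy as np

def grabcut_mask(bgr, rect):
    mask = np.zeros(bgr.shape[:2], np.uint8)
    bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
    cv2.grabCut(bgr, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    return ((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)).astype(np.float64)

def mask_guided_fuse(ir, vis, sal):
    ir, vis = ir.astype(np.float64), vis.astype(np.float64)
    weighted = 0.6 * ir + 0.4 * vis   # salient: favor thermal targets (assumed weights)
    average = 0.5 * (ir + vis)        # background: smooth and stable
    return (sal * weighted + (1 - sal) * average).astype(np.uint8)

if __name__ == "__main__":
    vis_bgr = (np.random.rand(120, 160, 3) * 255).astype(np.uint8)
    ir = (np.random.rand(120, 160) * 255).astype(np.uint8)
    sal = grabcut_mask(vis_bgr, (20, 20, 100, 80))  # rough target box
    fused = mask_guided_fuse(ir, cv2.cvtColor(vis_bgr, cv2.COLOR_BGR2GRAY), sal)
    print(fused.shape)
```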
The current mainstream fusion approach for infrared polarization images, the multiscale geometry analysis method, focuses on only one characteristic of the image representation, while the spatial-domain Principal Component Analysis (PCA) method has the shortcoming of losing small targets. To address these problems, this paper presents a new fusion method for infrared polarization images that combines the Nonsubsampled Shearlet Transform (NSST) with an improved PCA. The method makes full use of NSST's effectiveness at expressing image details and PCA's ability to highlight the main features of images; combining the two integrates their complementary strengths to fully retain target features and image details. Firstly, the intensity and polarization images are decomposed by NSST into low-frequency components and high-frequency components with different directions. Secondly, the low-frequency components are fused with the improved PCA, while the high-frequency components are fused by a joint decision rule based on local energy and local variance. Finally, the fused image of infrared polarization is reconstructed with the inverse NSST. The experimental results show that the proposed method outperforms other methods in terms of detail preservation and visual effect.
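A sketch of the joint high-frequency decision rule follows: a coefficient wins where its neighborhood has both higher local energy and higher local variance, with local energy breaking ties. The 3x3 window is an assumed setting.

```python
# A sketch of the joint local-energy/local-variance rule for fusing
# high-frequency sub-bands; the window size is illustrative.
import numpy as np
from scipy.ndimage import uniform_filter

def local_energy(c, size=3):
    return uniform_filter(c * c, size)

def local_variance(c, size=3):
    mean = uniform_filter(c, size)
    return uniform_filter(c * c, size) - mean * mean

def fuse_highfreq(a, b, size=3):
    ea, eb = local_energy(a, size), local_energy(b, size)
    va, vb = local_variance(a, size), local_variance(b, size)
    pick_a = (ea >= eb) & (va >= vb)   # a wins on both criteria
    pick_b = (ea < eb) & (va < vb)     # b wins on both criteria
    # Disagreements fall back to the local-energy comparison.
    return np.where(pick_a, a, np.where(pick_b, b, np.where(ea >= eb, a, b)))

if __name__ == "__main__":
    a = np.random.randn(64, 64)  # stands in for one NSST high-freq sub-band
    b = np.random.randn(64, 64)
    print(fuse_highfreq(a, b).shape)
```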
Aim: To fuse the fluorescence image and transmission image of a cell into a single image containing more information than either individual image. Methods: Image fusion technology was applied to biological cell image processing. It can match the images and improve their confidence and spatial resolution. Using two algorithms, a double-thresholds algorithm and a wavelet-transform-based denoising algorithm, the fluorescence image and transmission image of a cell were merged into a composite image. Results and Conclusion: The position of fluorescence and the structure of the cell can be displayed in the composite image. The signal-to-noise ratio of the resultant image is improved to a large extent. The algorithms are useful not only for investigating fluorescence and transmission images, but also for observing two or more fluorescent label probes in a single cell.
This study proposes a novel general image fusion framework based on cross-domain long-range learning and the Swin Transformer, termed SwinFusion. On the one hand, an attention-guided cross-domain module is devised to achieve sufficient integration of complementary information and global interaction. More specifically, the proposed method involves an intra-domain fusion unit based on self-attention and an inter-domain fusion unit based on cross-attention, which mine and integrate long-range dependencies within the same domain and across domains. Through long-range dependency modeling, the network is able to fully implement domain-specific information extraction and cross-domain complementary information integration, as well as maintaining the appropriate apparent intensity from a global perspective. In particular, we introduce the shifted-windows mechanism into the self-attention and cross-attention, which allows our model to receive images of arbitrary sizes. On the other hand, the multi-scene image fusion problems are generalized to a unified framework with structure maintenance, detail preservation, and proper intensity control. Moreover, an elaborate loss function, consisting of SSIM loss, texture loss, and intensity loss, drives the network to preserve abundant texture details and structural information, as well as presenting the optimal apparent intensity. Extensive experiments on both multi-modal image fusion and digital photography image fusion demonstrate the superiority of our SwinFusion compared to state-of-the-art unified image fusion algorithms and task-specific alternatives. Implementation code and pre-trained weights can be accessed at https://github.com/Linfeng-Tang/SwinFusion.
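The texture and intensity terms of such a loss are commonly built from Sobel gradients and a per-pixel maximum of the sources; the sketch below follows that convention and leaves the SSIM term to a standard implementation. The weights `w_tex`/`w_int` are placeholders, not the paper's values.

```python
# A sketch of the composite loss the abstract describes; the Sobel-gradient
# texture term and max-intensity target are assumed common choices.
import torch
import torch.nn.functional as F

SOBEL_X = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)

def grad_mag(x):
    """Gradient magnitude of a (B, 1, H, W) image via Sobel filters."""
    gx = F.conv2d(x, SOBEL_X, padding=1)
    gy = F.conv2d(x, SOBEL_X.transpose(2, 3), padding=1)
    return torch.sqrt(gx * gx + gy * gy + 1e-12)

def fusion_loss(fused, ir, vis, w_tex=1.0, w_int=1.0):
    # Texture: fused gradients should follow the stronger source gradient.
    tex = F.l1_loss(grad_mag(fused), torch.maximum(grad_mag(ir), grad_mag(vis)))
    # Intensity: fused brightness should track the brighter source pixel.
    inten = F.l1_loss(fused, torch.maximum(ir, vis))
    # An SSIM term (e.g. from the pytorch_msssim package) would be added here.
    return w_tex * tex + w_int * inten

if __name__ == "__main__":
    f, i, v = (torch.rand(1, 1, 64, 64) for _ in range(3))
    print(fusion_loss(f, i, v))
```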
In order to enhance the contrast of the fused image and reduce the loss of fine details in the process of image fusion, a novel fusion algorithm for infrared and visible images is proposed. First of all, regions of interest (RoIs) are detected in the two original images using a saliency map. Then, the nonsubsampled contourlet transform (NSCT) is performed on both the infrared image and the visible image to get a low-frequency sub-band and a certain number of high-frequency sub-bands. Subsequently, the coefficients of all sub-bands are classified into four categories based on the result of RoI detection: the region of interest in the low-frequency sub-band (LSRoI), the region of interest in the high-frequency sub-band (HSRoI), the region of non-interest in the low-frequency sub-band (LSNRoI), and the region of non-interest in the high-frequency sub-band (HSNRoI). Fusion rules are customized for each kind of coefficient, and the fused image is achieved by performing the inverse NSCT on the fused coefficients. Experimental results show that the fusion scheme proposed in this paper achieves a better effect than the other fusion algorithms in both visual effect and quantitative metrics.
A new method for image fusion based on the Contourlet transform and cycle spinning is proposed. The Contourlet transform is a flexible multiresolution, local, and directional image expansion that also provides a sparse representation for two-dimensional piecewise-smooth signals resembling images. Due to the lack of the translation-invariance property in the Contourlet transform, conventional image fusion algorithms based on it introduce many artifacts. According to the theory of cycle spinning as applied to image denoising, an invariant transform can efficiently reduce these artifacts through a series of processing steps. The technique of cycle spinning is therefore introduced to develop a translation-invariant Contourlet fusion algorithm. This method can effectively eliminate the Gibbs-like phenomenon, extract the characteristics of the original images, and preserve more important information. Experimental results show the simplicity and effectiveness of the method and its advantages over conventional approaches.
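Cycle spinning itself is easy to state in code: fuse shifted copies, undo the shifts, and average. Since the Contourlet transform has no standard Python package, the sketch below substitutes a PyWavelets DWT to make the idea runnable; the fusion rules are illustrative.

```python
# A sketch of cycle spinning for fusion: fuse shifted copies, invert the
# shift, and average to suppress shift-variance artifacts. The DWT here is
# a stand-in for the Contourlet transform.
import numpy as np
import pywt

def fuse_once(a, b, wavelet="db2", level=2):
    ca = pywt.wavedec2(a, wavelet, level=level)
    cb = pywt.wavedec2(b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2]  # average the approximation band
    for da, db in zip(ca[1:], cb[1:]):  # max-abs rule on detail bands
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in zip(da, db)))
    return pywt.waverec2(fused, wavelet)

def cycle_spin_fuse(a, b, shifts=4):
    acc = np.zeros_like(a, dtype=np.float64)
    for dx in range(shifts):
        for dy in range(shifts):
            fa = fuse_once(np.roll(a, (dx, dy), (0, 1)),
                           np.roll(b, (dx, dy), (0, 1)))
            acc += np.roll(fa, (-dx, -dy), (0, 1))  # undo the shift
    return acc / (shifts * shifts)

if __name__ == "__main__":
    a, b = np.random.rand(64, 64), np.random.rand(64, 64)
    print(cycle_spin_fuse(a, b).shape)
```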
In the process of in situ leaching of uranium, the microstructure controls and influences the flow distribution, percolation characteristics, and reaction mechanism of lixivium in the pores of reservoir rocks and directly affects the leaching of useful components. In this study, the pore throat, pore size distribution, and mineral composition of low-permeability uranium-bearing sandstone were quantitatively analyzed by high-pressure mercury injection, nuclear magnetic resonance, X-ray diffraction, and wavelength-dispersive X-ray fluorescence. The distribution characteristics of pores and minerals in the samples were qualitatively analyzed using energy-dispersive scanning electron microscopy and multi-resolution CT images. Image registration with the landmarks algorithm provided by FEI Avizo was used to accurately match the CT images of different resolutions. The multi-scale, multi-mineral digital core model of low-permeability uranium-bearing sandstone is reconstructed through pore segmentation and mineral segmentation of the fused core scanning images. The results show that the pore structure of low-permeability uranium-bearing sandstone is complex, with multi-scale and multi-crossing characteristics. The intergranular pores determine the main seepage channel in the pore space, and the secondary pores have poor connectivity with other pores. Pyrite and coffinite are isolated from the connected pores and surrounded by a large number of clay minerals and ankerite cements, which increases the difficulty of uranium leaching. Clays and a large amount of ankerite cement fill the primary and secondary pores and pore throats of the low-permeability uranium-bearing sandstone, which significantly reduces the porosity of the movable fluid and results in low overall permeability of the cores. The multi-scale, multi-mineral digital core proposed in this study provides a basis for characterizing the macroscopic and microscopic pore-throat structures and mineral distributions of low-permeability uranium-bearing sandstone and supports a better understanding of its seepage characteristics.
Image fusion can be performed at different levels: signal, pixel, feature, and symbol. Almost all image fusion algorithms developed to date operate at the pixel level. This paper provides an overview of the most widely used pixel-level image fusion algorithms, with comments on their relative strengths and weaknesses. Particular emphasis is placed on multiscale-based methods. Some performance measures practicable for pixel-level image fusion are also discussed. Finally, prospects for pixel-level image fusion are outlined.
Fusion methods based on multi-scale transforms have become the mainstream of pixel-level image fusion. However, most of these methods cannot fully exploit the spatial-domain information of the source images, which leads to image degradation. This paper presents a fusion framework based on block-matching and 3D (BM3D) multi-scale transform. The algorithm first divides the image into blocks and groups these 2D image blocks into 3D arrays by their similarity. It then applies a 3D transform, consisting of a 2D multi-scale transform and a 1D transform, to map the arrays into transform coefficients; the resulting low- and high-frequency coefficients are fused by different fusion rules. The final fused image is obtained from the series of fused 3D image block groups, after the inverse transform, through an aggregation process. In the experimental part, we comparatively analyze several existing algorithms and the use of different transforms, e.g., the non-subsampled Contourlet transform (NSCT) and the non-subsampled Shearlet transform (NSST), in the 3D transform step. Experimental results show that the proposed fusion framework not only improves the subjective visual effect but also obtains better objective evaluation criteria than state-of-the-art methods.
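The grouping step can be sketched directly: for a reference patch, collect the most similar patches in a search window and stack them into a 3D array for the subsequent 2D + 1D transform. Patch, search, and group sizes below are illustrative.

```python
# A sketch of the block-matching/grouping step: similar 2D patches are
# stacked into a 3D group for the 2D multi-scale plus 1D transform.
import numpy as np

def group_similar_blocks(img, ref_yx, bsize=8, search=16, k=8):
    """Return the k patches most similar (in L2 distance) to the reference."""
    y0, x0 = ref_yx
    ref = img[y0:y0 + bsize, x0:x0 + bsize]
    candidates = []
    for y in range(max(0, y0 - search), min(img.shape[0] - bsize, y0 + search) + 1):
        for x in range(max(0, x0 - search), min(img.shape[1] - bsize, x0 + search) + 1):
            patch = img[y:y + bsize, x:x + bsize]
            candidates.append((np.sum((patch - ref) ** 2), patch))
    candidates.sort(key=lambda t: t[0])           # most similar first
    return np.stack([p for _, p in candidates[:k]])  # shape (k, bsize, bsize)

if __name__ == "__main__":
    img = np.random.rand(64, 64)
    group = group_similar_blocks(img, (20, 20))
    print(group.shape)  # a 3D array ready for the 2D + 1D transform
```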
Objective: To study the application of CT image fusion in the evaluation of radiation treatment planning for non-small cell lung cancer (NSCLC). Methods: Eleven patients with NSCLC treated with three-dimensional conformal radiation therapy were studied. Each patient underwent two sequential planning CT scans: one at pre-treatment and one at mid-treatment for field-reduction planning. Three treatment plans were established for each patient: plan A was based on the pre-treatment planning CT scans for the first course of treatment, plan B on the mid-treatment planning CT scans for the second course, and plan F on the fused images for the whole treatment. The irradiation doses received by organs at risk over the whole treatment under plans A and B were estimated by summing the corresponding parameters of the two plans, assuming the parameters involve different tissues (i.e., V20 = AV20 + BV20) or the same tissues within an organ (i.e., Dmax = ADmax + BDmax). The assessment parameters of plan F were calculated from the DVH of the whole treatment. The two sets of assessment results were then compared. Results: There were marked differences between the assessment results derived by summing the parameters of plans A and B and those derived from plan F. Conclusion: When a treatment plan is altered during the course of radiation treatment, image fusion should be used in establishing the new plan; estimating the whole-treatment assessment parameters of plans A and B by simple addition is inaccurate.
High-resolution image fusion is a significant focus in the field of image processing. A new image fusion model is presented based on the characteristic level of empirical mode decomposition (EMD). The intensity-hue-saturation (IHS) transform of the multi-spectral image first gives the intensity image. Thereafter, a 2D EMD, built as a row-column extension of the 1D EMD model, is used to decompose detailed-scale and coarse-scale images from the high-resolution band image and the intensity image. Finally, a fused intensity image is obtained by reconstruction with the high frequency of the high-resolution image and the low frequency of the intensity image, and the inverse IHS transform yields the fused image. After presenting the EMD principle, a multi-scale decomposition and reconstruction algorithm for 2D EMD is defined and a fusion scheme based on EMD is advanced. The panchromatic band and multi-spectral bands 3, 2, 1 of Quickbird are used to assess the quality of the fusion algorithm. After selecting the appropriate intrinsic mode functions (IMFs) for the merger, on the basis of EMD analysis of specific row (column) pixel gray-value series, the fusion scheme gives a fused image, which is compared with commonly used fusion algorithms (wavelet, IHS, Brovey). The objectives of image fusion include enhancing the visibility of the image and improving the spatial resolution and spectral information of the original images. To assess the quality of an image after fusion, information entropy and standard deviation are applied to assess the spatial details of the fused images, while the correlation coefficient, bias index, and warping degree measure the spectral distortion between the original image and the fused image. For the proposed algorithm, better results are obtained when the EMD algorithm is used to perform the fusion.
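The merging scheme can be illustrated with a simplified stand-in: HSV's value channel plays the role of the IHS intensity component, and a Gaussian low-/high-pass split stands in for the EMD coarse/detail decomposition, which the sketch does not attempt to reproduce.

```python
# A simplified illustration of the IHS-based merge: the multispectral
# intensity keeps its coarse layer and takes the panchromatic detail layer.
# The Gaussian split is a stand-in for 2D EMD; HSV's V approximates the
# IHS intensity component.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.color import rgb2hsv, hsv2rgb

def ihs_style_fuse(ms_rgb, pan, sigma=4.0):
    hsv = rgb2hsv(ms_rgb)                            # H, S, V (intensity)
    intensity = hsv[..., 2]
    coarse_ms = gaussian_filter(intensity, sigma)    # coarse scale of intensity
    detail_pan = pan - gaussian_filter(pan, sigma)   # detail scale of pan
    hsv[..., 2] = np.clip(coarse_ms + detail_pan, 0.0, 1.0)
    return hsv2rgb(hsv)

if __name__ == "__main__":
    ms = np.random.rand(128, 128, 3)   # upsampled multispectral bands 3, 2, 1
    pan = np.random.rand(128, 128)     # high-resolution panchromatic band
    print(ihs_style_fuse(ms, pan).shape)
```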
In order to improve the detail preservation and target information integrity of fused images from different sensors, an image fusion method based on the non-subsampled contourlet transform (NSCT) and the GoogLeNet neural network model is proposed. First, the images from different sensors, i.e., infrared and visible images, are each transformed by NSCT to obtain a low-frequency sub-band and a series of high-frequency sub-bands. Then, the high-frequency sub-bands are fused with a max-regional-energy selection strategy, while the low-frequency sub-bands are input into the GoogLeNet model to extract feature maps, from which fusion weight matrices are adaptively calculated. Next, the fused low-frequency sub-band is obtained by weighted summation. Finally, the fused image is obtained by the inverse NSCT. The experimental results demonstrate that the proposed method improves the visual effect and achieves better performance in both edge retention and mutual information.
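The adaptive low-frequency weighting can be sketched as follows: a GoogLeNet stem produces feature maps, their L1-norm gives a per-pixel activity measure, and a softmax over the two activities yields the fusion weights. The untrained weights and the specific stem layers are choices made only to keep the sketch self-contained; the method would use the pretrained network.

```python
# A sketch of CNN-derived adaptive weights for fusing NSCT low-frequency
# sub-bands; untrained GoogLeNet weights keep the example self-contained.
import torch
import torch.nn.functional as F
from torchvision.models import googlenet

net = googlenet(weights=None, init_weights=True).eval()
stem = torch.nn.Sequential(net.conv1, net.maxpool1, net.conv2, net.conv3)

def activity(low: torch.Tensor) -> torch.Tensor:
    """L1-norm of feature maps, upsampled back to the sub-band size."""
    with torch.no_grad():
        feat = stem(low.repeat(1, 3, 1, 1))   # grayscale -> 3 channels
    act = feat.abs().sum(dim=1, keepdim=True)
    return F.interpolate(act, size=low.shape[-2:],
                         mode="bilinear", align_corners=False)

def fuse_lowfreq(low_ir, low_vis):
    w = torch.softmax(torch.cat([activity(low_ir), activity(low_vis)], dim=1), dim=1)
    return w[:, :1] * low_ir + w[:, 1:] * low_vis

if __name__ == "__main__":
    ir = torch.rand(1, 1, 128, 128)    # NSCT low-frequency sub-bands
    vis = torch.rand(1, 1, 128, 128)
    print(fuse_lowfreq(ir, vis).shape)
```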
基金supported by the Natural Science Foundation of Shandong Province,China(ZR2022MF237)the National Natural Science Foundation of China Youth Fund(62406155)the Major Innovation Project(2023JBZ02)of Qilu University of Technology(Shandong Academy of Sciences).
文摘The purpose of infrared and visible image fusion is to create a single image containing the texture details and significant object information of the source images,particularly in challenging environments.However,existing image fusion algorithms are generally suitable for normal scenes.In the hazy scene,a lot of texture information in the visible image is hidden,the results of existing methods are filled with infrared information,resulting in the lack of texture details and poor visual effect.To address the aforementioned difficulties,we propose a haze-free infrared and visible fusion method,termed HaIVFusion,which can eliminate the influence of haze and obtain richer texture information in the fused image.Specifically,we first design a scene information restoration network(SIRNet)to mine the masked texture information in visible images.Then,a denoising fusion network(DFNet)is designed to integrate the features extracted from infrared and visible images and remove the influence of residual noise as much as possible.In addition,we use color consistency loss to reduce the color distortion resulting from haze.Furthermore,we publish a dataset of hazy scenes for infrared and visible image fusion to promote research in extreme scenes.Extensive experiments show that HaIVFusion produces fused images with increased texture details and higher contrast in hazy scenes,and achieves better quantitative results,when compared to state-ofthe-art image fusion methods,even combined with state-of-the-art dehazing methods.
基金supports in part by the Natural Science Foundation of China(NSFC)under contract No.62171253the Young Elite Scientists Sponsorship Program by CAST under program No.2022QNRC001,as well as the Fundamental Research Funds for the Central Universities.
文摘Images with complementary spectral information can be recorded using image sensors that can identify visible and near-infrared spectrum.The fusion of visible and nearinfrared(NIR)aims to enhance the quality of images acquired by video monitoring systems for the ease of user observation and data processing.Unfortunately,current fusion algorithms produce artefacts and colour distortion since they cannot make use of spectrum properties and are lacking in information complementarity.Therefore,an information complementarity fusion(ICF)model is designed based on physical signals.In order to separate high-frequency noise from important information in distinct frequency layers,the authors first extracted texture-scale and edge-scale layers using a two-scale filter.Second,the difference map between visible and near-infrared was filtered using the extended-DoG filter to produce the initial visible-NIR complementary weight map.Then,to generate a guide map,the near-infrared image with night adjustment was processed as well.The final complementarity weight map was subsequently derived via an arctanI function mapping using the guide map and the initial weight maps.Finally,fusion images were generated with the complementarity weight maps.The experimental results demonstrate that the proposed approach outperforms the state-of-the-art in both avoiding artificial colours as well as effectively utilising information complementarity.
基金This researchwas Sponsored by Xinjiang Uygur Autonomous Region Tianshan Talent Programme Project(2023TCLJ02)Natural Science Foundation of Xinjiang Uygur Autonomous Region(2022D01C349).
文摘Infrared and visible light image fusion technology integrates feature information from two different modalities into a fused image to obtain more comprehensive information.However,in low-light scenarios,the illumination degradation of visible light images makes it difficult for existing fusion methods to extract texture detail information from the scene.At this time,relying solely on the target saliency information provided by infrared images is far from sufficient.To address this challenge,this paper proposes a lightweight infrared and visible light image fusion method based on low-light enhancement,named LLE-Fuse.The method is based on the improvement of the MobileOne Block,using the Edge-MobileOne Block embedded with the Sobel operator to perform feature extraction and downsampling on the source images.The intermediate features at different scales obtained are then fused by a cross-modal attention fusion module.In addition,the Contrast Limited Adaptive Histogram Equalization(CLAHE)algorithm is used for image enhancement of both infrared and visible light images,guiding the network model to learn low-light enhancement capabilities through enhancement loss.Upon completion of network training,the Edge-MobileOne Block is optimized into a direct connection structure similar to MobileNetV1 through structural reparameterization,effectively reducing computational resource consumption.Finally,after extensive experimental comparisons,our method achieved improvements of 4.6%,40.5%,156.9%,9.2%,and 98.6%in the evaluation metrics Standard Deviation(SD),Visual Information Fidelity(VIF),Entropy(EN),and Spatial Frequency(SF),respectively,compared to the best results of the compared algorithms,while only being 1.5 ms/it slower in computation speed than the fastest method.
基金supported by the National Science Fund of China(Grant No.52225402)Fund of Inner Mongolia Research Institute,China University of Mining and Technology(Beijing)(Grant No.IMRI23003).
文摘High-intensive underground mining has caused severe ground fissures,resulting in environmental degradation.Consequently,prompt detection is crucial to mitigate their environmental impact.However,the accurate segmentation of fissuresin complex and variable scenes of visible imagery is a challenging issue.Our method,DeepFissureNets-Infrared-Visible(DFN-IV),highlights the potential of incorporating visible images with infrared information for improved ground fissuresegmentation.DFNIV adopts a two-step process.First,a fusion network is trained with the dual adversarial learning strategy fuses infrared and visible imaging,providing an integrated representation of fissuretargets that combines the structural information with the textual details.Second,the fused images are processed by a fine-tunedsegmentation network,which lever-ages knowledge injection to learn the distinctive characteristics of fissuretargets effectively.Furthermore,an infrared-visible ground fissuredataset(IVGF)is built from an aerial investigation of the Daliuta Coal Mine.Extensive experiments reveal that our approach provides superior accuracy over single-modality image strategies employed in fivesegmentation models.Notably,DeeplabV3+tested with DFN-IV improves by 9.7%and 11.13%in pixel accuracy and Intersection over Union(IoU),respectively,compared to solely visible images.Moreover,our method surpasses six state-of-the-art image fusion methods,achieving a 5.28%improvement in pixel accuracy and a 1.57%increase in IoU,respectively,compared to the second-best effective method.In addition,ablation studies further validate the significanceof the dual adversarial learning module and the integrated knowledge injection strategy.By leveraging DFN-IV,we aim to quantify the impacts of mining-induced ground fissures,facilitating the implementation of intelligent safety measures.
基金supported by Gansu Natural Science Foundation Programme(No.24JRRA231)National Natural Science Foundation of China(No.62061023)Gansu Provincial Education,Science and Technology Innovation and Industry(No.2021CYZC-04)。
文摘Medical image fusion technology is crucial for improving the detection accuracy and treatment efficiency of diseases,but existing fusion methods have problems such as blurred texture details,low contrast,and inability to fully extract fused image information.Therefore,a multimodal medical image fusion method based on mask optimization and parallel attention mechanism was proposed to address the aforementioned issues.Firstly,it converted the entire image into a binary mask,and constructed a contour feature map to maximize the contour feature information of the image and a triple path network for image texture detail feature extraction and optimization.Secondly,a contrast enhancement module and a detail preservation module were proposed to enhance the overall brightness and texture details of the image.Afterwards,a parallel attention mechanism was constructed using channel features and spatial feature changes to fuse images and enhance the salient information of the fused images.Finally,a decoupling network composed of residual networks was set up to optimize the information between the fused image and the source image so as to reduce information loss in the fused image.Compared with nine high-level methods proposed in recent years,the seven objective evaluation indicators of our method have improved by 6%−31%,indicating that this method can obtain fusion results with clearer texture details,higher contrast,and smaller pixel differences between the fused image and the source image.It is superior to other comparison algorithms in both subjective and objective indicators.
基金supported by Qingdao Huanghai University School-Level ScientificResearch Project(2023KJ14)Undergraduate Teaching Reform Research Project of Shandong Provincial Department of Education(M2022328)+1 种基金National Natural Science Foundation of China under Grant(42472324)Qingdao Postdoctoral Foundation under Grant(QDBSH202402049).
文摘Multimodal image fusion plays an important role in image analysis and applications.Multimodal medical image fusion helps to combine contrast features from two or more input imaging modalities to represent fused information in a single image.One of the critical clinical applications of medical image fusion is to fuse anatomical and functional modalities for rapid diagnosis of malignant tissues.This paper proposes a multimodal medical image fusion network(MMIF-Net)based on multiscale hybrid attention.The method first decomposes the original image to obtain the low-rank and significant parts.Then,to utilize the features at different scales,we add amultiscalemechanism that uses three filters of different sizes to extract the features in the encoded network.Also,a hybrid attention module is introduced to obtain more image details.Finally,the fused images are reconstructed by decoding the network.We conducted experiments with clinical images from brain computed tomography/magnetic resonance.The experimental results show that the multimodal medical image fusion network method based on multiscale hybrid attention works better than other advanced fusion methods.
基金supported by the National Natural Science Foundation of China(Grant No.62302086)the Natural Science Foundation of Liaoning Province(Grant No.2023-MSBA-070)the Fundamental Research Funds for the Central Universities(Grant No.N2317005).
文摘Visible-infrared object detection leverages the day-night stable object perception capability of infrared images to enhance detection robustness in low-light environments by fusing the complementary information of visible and infrared images.However,the inherent differences in the imaging mechanisms of visible and infrared modalities make effective cross-modal fusion challenging.Furthermore,constrained by the physical characteristics of sensors and thermal diffusion effects,infrared images generally suffer from blurred object contours and missing details,making it difficult to extract object features effectively.To address these issues,we propose an infrared-visible image fusion network that realizesmultimodal information fusion of infrared and visible images through a carefully designedmultiscale fusion strategy.First,we design an adaptive gray-radiance enhancement(AGRE)module to strengthen the detail representation in infrared images,improving their usability in complex lighting scenarios.Next,we introduce a channelspatial feature interaction(CSFI)module,which achieves efficient complementarity between the RGB and infrared(IR)modalities via dynamic channel switching and a spatial attention mechanism.Finally,we propose a multi-scale enhanced cross-attention fusion(MSECA)module,which optimizes the fusion ofmulti-level features through dynamic convolution and gating mechanisms and captures long-range complementary relationships of cross-modal features on a global scale,thereby enhancing the expressiveness of the fused features.Experiments on the KAIST,M3FD,and FLIR datasets demonstrate that our method delivers outstanding performance in daytime and nighttime scenarios.On the KAIST dataset,the miss rate drops to 5.99%,and further to 4.26% in night scenes.On the FLIR and M3FD datasets,it achieves AP50 scores of 79.4% and 88.9%,respectively.
基金partially supported by China Postdoctoral Science Foundation(2023M730741)the National Natural Science Foundation of China(U22B2052,52102432,52202452,62372080,62302078)
文摘The goal of infrared and visible image fusion(IVIF)is to integrate the unique advantages of both modalities to achieve a more comprehensive understanding of a scene.However,existing methods struggle to effectively handle modal disparities,resulting in visual degradation of the details and prominent targets of the fused images.To address these challenges,we introduce Prompt Fusion,a prompt-based approach that harmoniously combines multi-modality images under the guidance of semantic prompts.Firstly,to better characterize the features of different modalities,a contourlet autoencoder is designed to separate and extract the high-/low-frequency components of different modalities,thereby improving the extraction of fine details and textures.We also introduce a prompt learning mechanism using positive and negative prompts,leveraging Vision-Language Models to improve the fusion model's understanding and identification of targets in multi-modality images,leading to improved performance in downstream tasks.Furthermore,we employ bi-level asymptotic convergence optimization.This approach simplifies the intricate non-singleton non-convex bi-level problem into a series of convergent and differentiable single optimization problems that can be effectively resolved through gradient descent.Our approach advances the state-of-the-art,delivering superior fusion quality and boosting the performance of related downstream tasks.Project page:https://github.com/hey-it-s-me/PromptFusion.
基金supported by Universiti Teknologi MARA through UiTM MyRA Research Grant,600-RMC 5/3/GPM(053/2022).
文摘Infrared and visible image fusion technology integrates the thermal radiation information of infrared images with the texture details of visible images to generate more informative fused images.However,existing methods often fail to distinguish salient objects from background regions,leading to detail suppression in salient regions due to global fusion strategies.This study presents a mask-guided latent low-rank representation fusion method to address this issue.First,the GrabCut algorithm is employed to extract a saliency mask,distinguishing salient regions from background regions.Then,latent low-rank representation(LatLRR)is applied to extract deep image features,enhancing key information extraction.In the fusion stage,a weighted fusion strategy strengthens infrared thermal information and visible texture details in salient regions,while an average fusion strategy improves background smoothness and stability.Experimental results on the TNO dataset demonstrate that the proposed method achieves superior performance in SPI,MI,Qabf,PSNR,and EN metrics,effectively preserving salient target details while maintaining balanced background information.Compared to state-of-the-art fusion methods,our approach achieves more stable and visually consistent fusion results.The fusion code is available on GitHub at:https://github.com/joyzhen1/Image(accessed on 15 January 2025).
基金Open Fund Project of Key Laboratory of Instrumentation Science&Dynamic Measurement(No.2DSYSJ2015005)Specialized Research Fund for the Doctoral Program of Ministry of Education Colleges(No.20121420110004)
文摘In view of the problem that current mainstream fusion method of infrared polarization image—Multiscale Geometry Analysis method only focuses on a certain characteristic to image representation.And spatial domain fusion method,Principal Component Analysis(PCA)method has the shortcoming of losing small target,this paper presents a new fusion method of infrared polarization images based on combination of Nonsubsampled Shearlet Transformation(NSST)and improved PCA.This method can make full use of the effectiveness to image details expressed by NSST and the characteristics that PCA can highlight the main features of images.The combination of the two methods can integrate the complementary features of themselves to retain features of targets and image details fully.Firstly,intensity and polarization images are decomposed into low frequency and high frequency components with different directions by NSST.Secondly,the low frequency components are fused with improved PCA,while the high frequency components are fused by joint decision making rule with local energy and local variance.Finally,the fused image is reconstructed with the inverse NSST to obtain the final fused image of infrared polarization.The experiment results show that the method proposed has higher advantages than other methods in terms of detail preservation and visual effect.
文摘Aim To fuse the fluorescence image and transmission image of a cell into a single image containing more information than any of the individual image. Methods Image fusion technology was applied to biological cell imaging processing. It could match the images and improve the confidence and spatial resolution of the images. Using two algorithms, double thresholds algorithm and denoising algorithm based on wavelet transform,the fluorescence image and transmission image of a Cell were merged into a composite image. Results and Conclusion The position of fluorescence and the structure of cell can be displyed in the composite image. The signal-to-noise ratio of the exultant image is improved to a large extent. The algorithms are not only useful to investigate the fluorescence and transmission images, but also suitable to observing two or more fluoascent label proes in a single cell.
基金This work was supported by the National Natural Science Foundation of China(62075169,62003247,62061160370)the Key Research and Development Program of Hubei Province(2020BAB113).
文摘This study proposes a novel general image fusion framework based on cross-domain long-range learning and Swin Transformer,termed as SwinFusion.On the one hand,an attention-guided cross-domain module is devised to achieve sufficient integration of complementary information and global interaction.More specifically,the proposed method involves an intra-domain fusion unit based on self-attention and an interdomain fusion unit based on cross-attention,which mine and integrate long dependencies within the same domain and across domains.Through long-range dependency modeling,the network is able to fully implement domain-specific information extraction and cross-domain complementary information integration as well as maintaining the appropriate apparent intensity from a global perspective.In particular,we introduce the shifted windows mechanism into the self-attention and cross-attention,which allows our model to receive images with arbitrary sizes.On the other hand,the multi-scene image fusion problems are generalized to a unified framework with structure maintenance,detail preservation,and proper intensity control.Moreover,an elaborate loss function,consisting of SSIM loss,texture loss,and intensity loss,drives the network to preserve abundant texture details and structural information,as well as presenting optimal apparent intensity.Extensive experiments on both multi-modal image fusion and digital photography image fusion demonstrate the superiority of our SwinFusion compared to the state-of-theart unified image fusion algorithms and task-specific alternatives.Implementation code and pre-trained weights can be accessed at https://github.com/Linfeng-Tang/SwinFusion.
基金the National Natural Science Foundation of China(No.61105022)the Research Fund for the Doctoral Program of Higher Education of China(No.20110073120028)the Jiangsu Provincial Natural Science Foundation(No.BK2012296)
文摘In order to enhance the contrast of the fused image and reduce the loss of fine details in the process of image fusion,a novel fusion algorithm of infrared and visible images is proposed.First of all,regions of interest(RoIs)are detected in two original images by using saliency map.Then,nonsubsampled contourlet transform(NSCT)on both the infrared image and the visible image is performed to get a low-frequency sub-band and a certain amount of high-frequency sub-bands.Subsequently,the coefcients of all sub-bands are classified into four categories based on the result of RoI detection:the region of interest in the low-frequency sub-band(LSRoI),the region of interest in the high-frequency sub-band(HSRoI),the region of non-interest in the low-frequency sub-band(LSNRoI)and the region of non-interest in the high-frequency sub-band(HSNRoI).Fusion rules are customized for each kind of coefcients and fused image is achieved by performing the inverse NSCT to the fused coefcients.Experimental results show that the fusion scheme proposed in this paper achieves better efect than the other fusion algorithms both in visual efect and quantitative metrics.
基金supported by the National Natural Science Foundation of China (60802084)
文摘A new method for image fusion based on Contourlet transform and cycle spinning is proposed. Contourlet transform is a flexible multiresolution, local and directional image expansion, also provids a sparse representation for two-dimensional piece-wise smooth signals resembling images. Due to lack of translation invariance property in Contourlet transform, the conventional image fusion algorithm based on Contourlet transform introduces many artifacts. According to the theory of cycle spinning applied to image denoising, an invariance transform can reduce the artifacts through a series of processing efficiently. So the technology of cycle spinning is introduced to develop the translation invariant Contourlet fusion algorithm. This method can effectively eliminate the Gibbs-like phenomenon, extract the characteristics of original images, and preserve more important information. Experimental results show the simplicity and effectiveness of the method and its advantages over the conventional approaches.
基金This work was supported by the National Natural Science Foundation of China(No.11775107)the Key Projects of Education Department of Hunan Province of China(No.16A184).
文摘In the process of in situ leaching of uranium,the microstructure controls and influences the flow distribution,percolation characteristics,and reaction mechanism of lixivium in the pores of reservoir rocks and directly affects the leaching of useful components.In this study,the pore throat,pore size distribution,and mineral composition of low-permeability uranium-bearing sandstone were quantitatively analyzed by high pressure mercury injection,nuclear magnetic resonance,X-ray diffraction,and wavelength-dispersive X-ray fluorescence.The distribution characteristics of pores and minerals in the samples were qualitatively analyzed using energy-dispersive scanning electron microscopy and multi-resolution CT images.Image registration with the landmarks algorithm provided by FEI Avizo was used to accurately match the CT images with different resolutions.The multi-scale and multi-mineral digital core model of low-permeability uranium-bearing sandstone is reconstructed through pore segmentation and mineral segmentation of fusion core scanning images.The results show that the pore structure of low-permeability uranium-bearing sandstone is complex and has multi-scale and multi-crossing characteristics.The intergranular pores determine the main seepage channel in the pore space,and the secondary pores have poor connectivity with other pores.Pyrite and coffinite are isolated from the connected pores and surrounded by a large number of clay minerals and ankerite cements,which increases the difficulty of uranium leaching.Clays and a large amount of ankerite cement are filled in the primary and secondary pores and pore throats of the low-permeability uraniumbearing sandstone,which significantly reduces the porosity of the movable fluid and results in low overall permeability of the cores.The multi-scale and multi-mineral digital core proposed in this study provides a basis for characterizing macroscopic and microscopic pore-throat structures and mineral distributions of low-permeability uranium-bearing sandstone and can better understand the seepage characteristics.
Funding: the National Natural Science Foundation of China (Nos. 60775022 and 60705006)
Abstract: Image fusion can be performed at different levels: signal, pixel, feature, and symbol. Almost all image fusion algorithms developed to date fall into the pixel level. This paper provides an overview of the most widely used pixel-level image fusion algorithms, with comments on their relative strengths and weaknesses. Particular emphasis is placed on multiscale-based methods. Some performance measures applicable to pixel-level image fusion are also discussed. Finally, prospects for pixel-level image fusion are outlined.
Funding: supported by the National Natural Science Foundation of China (61572063, 61401308), the Fundamental Research Funds for the Central Universities (2016YJS039), the Natural Science Foundation of Hebei Province (F2016201142, F2016201187), the Natural Social Science Foundation of Hebei Province (HB15TQ015), the Science Research Project of Hebei Province (QN2016085, ZC2016040), and the Natural Science Foundation of Hebei University (2014-303)
Abstract: Fusion methods based on multi-scale transforms have become the mainstream of pixel-level image fusion. However, most of these methods cannot fully exploit the spatial-domain information of the source images, which leads to image degradation. This paper presents a fusion framework based on block-matching and 3D (BM3D) multi-scale transforms. The algorithm first divides each image into blocks and groups similar 2D image blocks into 3D arrays. It then applies a 3D transform, consisting of a 2D multi-scale transform and a 1D transform, to convert the arrays into transform coefficients, and the resulting low- and high-frequency coefficients are fused by different fusion rules. The final fused image is obtained from the series of fused 3D block groups after the inverse transform, using an aggregation process. In the experimental part, we comparatively analyze some existing algorithms and the use of different transforms, e.g., the non-subsampled contourlet transform (NSCT) and the non-subsampled shearlet transform (NSST), in the 3D transform step. Experimental results show that the proposed fusion framework not only improves the subjective visual effect but also obtains better objective evaluation scores than state-of-the-art methods.
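The block-matching step that builds the 3D arrays can be sketched as follows; the block size, search window, and group size are illustrative defaults, not values from the paper:

import numpy as np

def group_similar_blocks(img, x0, y0, block=8, search=16, n_keep=8):
    # Stack the n_keep blocks most similar (in L2 distance) to the
    # reference block at (x0, y0) into a 3D array for the 3D transform.
    ref = img[x0:x0 + block, y0:y0 + block]
    candidates = []
    for x in range(max(0, x0 - search), min(img.shape[0] - block, x0 + search) + 1):
        for y in range(max(0, y0 - search), min(img.shape[1] - block, y0 + search) + 1):
            patch = img[x:x + block, y:y + block]
            candidates.append((np.sum((patch - ref) ** 2), x, y))
    candidates.sort(key=lambda t: t[0])
    return np.stack([img[x:x + block, y:y + block] for _, x, y in candidates[:n_keep]])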
Funding: a grant from the Key Program of the Science and Technology Foundation of Hubei Province (No. 2007A301B33).
Abstract: Objective: We studied the application of CT image fusion in the evaluation of radiation treatment planning for non-small cell lung cancer (NSCLC). Methods: Eleven patients with NSCLC, treated with three-dimensional conformal radiation therapy, were studied. Each patient underwent two sequential planning CT scans, i.e., at pre-treatment and at mid-treatment for field-reduction planning. Three treatment plans were established for each patient: plan A was based on the pre-treatment planning CT scans for the first course of treatment, plan B on the mid-treatment planning CT scans for the second course, and plan F on the fused images for the whole treatment. The irradiation doses received by organs at risk over the whole treatment under plans A and B were estimated by adding the parameters of the two plans, assuming that the parameters involve different tissues (e.g., V20 = AV20 + BV20) or the same tissues within an organ (e.g., Dmax = ADmax + BDmax). The assessment parameters of plan F were calculated from the DVH of the whole treatment. The assessment results were then compared. Results: There were marked differences between the assessment results derived from adding the parameters of plans A and B and those derived from plan F. Conclusion: When a treatment plan is altered during the course of radiation treatment, the image fusion technique should be used in establishing the new plan. Estimating the whole-treatment assessment parameters by simply adding those of plans A and B is inaccurate.
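A toy per-voxel example shows why simple addition of per-plan parameters can mislead: a voxel that exceeds 20 Gy in both courses is counted twice in AV20 + BV20, while the cumulative DVH counts it once. The dose values below are hypothetical, not patient data:

import numpy as np

dose_a = np.array([22.0, 5.0, 15.0, 2.0])  # course 1 dose per voxel (Gy)
dose_b = np.array([21.0, 3.0, 4.0, 1.0])   # course 2 dose per voxel (Gy)

v20_sum = np.mean(dose_a >= 20) + np.mean(dose_b >= 20)  # 0.25 + 0.25 = 0.50
v20_dvh = np.mean(dose_a + dose_b >= 20)                 # cumulative dose: 0.25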
Abstract: High-resolution image fusion is a significant focus in the field of image processing. A new image fusion model is presented based on the characteristic level of empirical mode decomposition (EMD). The intensity-hue-saturation (IHS) transform of the multi-spectral image first gives the intensity image. Thereafter, a 2D EMD, built as a row-column extension of the 1D EMD model, is used to decompose detailed-scale and coarse-scale images from the high-resolution band image and the intensity image. Finally, a fused intensity image is obtained by reconstruction with the high frequencies of the high-resolution image and the low frequencies of the intensity image, and the inverse IHS transform yields the fused image. After presenting the EMD principle, a multi-scale decomposition and reconstruction algorithm of 2D EMD is defined and a fusion scheme based on EMD is advanced. The panchromatic band and multi-spectral bands 3, 2, 1 of Quickbird are used to assess the quality of the fusion algorithm. After selecting the appropriate intrinsic mode functions (IMFs) for merging on the basis of EMD analysis of specific row (column) pixel gray-value series, the fusion scheme gives a fused image, which is compared with commonly used fusion algorithms (wavelet, IHS, Brovey). The objectives of image fusion include enhancing the visibility of the image and improving the spatial resolution and the spectral information of the original images. To assess the quality of an image after fusion, information entropy and standard deviation are applied to assess the spatial details of the fused images, while the correlation coefficient, bias index, and warping degree measure the distortion between the original image and the fused image in terms of spectral information. The proposed algorithm obtains better results when EMD is used to perform the fusion.
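The reconstruction step can be read as: keep the fine-scale IMFs of the panchromatic band and the residue (coarse scale) of the intensity image. A minimal sketch, where emd2d is an assumed helper returning a list of IMF arrays with the residue last (e.g., via row-column application of a 1D EMD):

def emd_intensity_fuse(pan, intensity, emd2d):
    # Fine detail from the high-resolution panchromatic band ...
    pan_modes = emd2d(pan)        # [imf_1, ..., imf_k, residue]
    detail = sum(pan_modes[:-1])
    # ... plus the coarse scale of the multi-spectral intensity image.
    coarse = emd2d(intensity)[-1]
    return detail + coarse        # feed into the inverse IHS transform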
Funding: supported by the National Natural Science Foundation of China (No. 61301211) and the China Scholarship Council (No. 201906835017)
Abstract: In order to improve the detail preservation and target-information integrity of fused images from different sensors, an image fusion method based on the non-subsampled contourlet transform (NSCT) and the GoogLeNet neural network model is proposed. First, the images from the different sensors, i.e., infrared and visible images, are each transformed by NSCT to obtain a low-frequency sub-band and a series of high-frequency sub-bands. Then, the high-frequency sub-bands are fused with a max-regional-energy selection strategy, while the low-frequency sub-bands are input into the GoogLeNet model to extract feature maps, from which the fusion weight matrices are adaptively calculated. Next, the fused low-frequency sub-band is obtained by weighted summation. Finally, the fused image is obtained by the inverse NSCT. The experimental results demonstrate that the proposed method improves the visual effect of the image and achieves better performance in both edge retention and mutual information.
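One plausible reading of the two fusion rules in Python, with scipy's uniform filter standing in for the regional-energy window; the window size and the weight normalization are assumptions, not values from the paper:

import numpy as np
from scipy.ndimage import uniform_filter

def fuse_high_freq(ir_h, vis_h, win=3):
    # Max regional energy: keep the coefficient whose local energy,
    # averaged over a win x win window, is larger.
    e_ir = uniform_filter(ir_h ** 2, size=win)
    e_vis = uniform_filter(vis_h ** 2, size=win)
    return np.where(e_ir >= e_vis, ir_h, vis_h)

def fuse_low_freq(ir_l, vis_l, w_ir, w_vis):
    # Weighted summation with weight matrices derived from the
    # GoogLeNet feature maps (assumed normalized so w_ir + w_vis = 1).
    return w_ir * ir_l + w_vis * vis_l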