Funding: funded by the Anhui Province University Key Science and Technology Project (2024AH053415), the Anhui Province University Major Science and Technology Project (2024AH040229), the Talent Research Initiation Fund Project of Tongling University (2024tlxyrc019), the Tongling University School-Level Scientific Research Project (2024tlxyptZD07), the University Synergy Innovation Program of Anhui Province (GXXT-2023-050), and the Tongling City Science and Technology Major Special Project (Unveiling and Commanding Model) (200401JB004).
Abstract: In the image fusion field, fusing infrared images (IRIs) and visible images (VIs) is a key area. The differences between IRIs and VIs make it challenging to fuse both types into a single high-quality image. Accordingly, efficiently combining the advantages of both images while overcoming their shortcomings is necessary. To handle this challenge, we developed an end-to-end IRI and VI fusion method based on frequency decomposition and enhancement. By applying concepts from frequency-domain analysis, we used a layering mechanism to better capture the salient thermal targets from the IRIs and the rich textural information from the VIs, respectively, significantly boosting fusion quality and effectiveness. In addition, the backbone network combines Restormer Blocks and Dense Blocks: Restormer Blocks use global attention to extract shallow features, while Dense Blocks integrate shallow and deep features, thereby avoiding the loss of shallow attributes. Extensive experiments on the TNO and MSRS datasets demonstrated that the proposed method achieves state-of-the-art (SOTA) performance on various metrics: entropy (EN), mutual information (MI), standard deviation (SD), the structural similarity index measure (SSIM), fusion quality (Qabf), pixel feature mutual information (FMI_pixel), and modified visual information fidelity (VIF_m).
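The frequency-decomposition idea underlying the method above can be illustrated with a classic two-layer split. Note this is only an illustrative sketch: the paper's layering and fusion are learned end-to-end, while the Gaussian low-pass filter, the max-selection rule for base layers, and the additive rule for detail layers below are all assumptions chosen for clarity, not the authors' actual operators.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(img, sigma=2.0):
    """Split an image into a low-frequency base layer and a
    high-frequency detail layer, so that base + detail == img."""
    base = gaussian_filter(img, sigma=sigma)
    detail = img - base
    return base, detail

def fuse(ir, vis, sigma=2.0):
    """Fuse each frequency layer separately: max-selection on the base
    layers keeps salient thermal targets from the IR image, while
    summing the detail layers keeps fine texture from the visible image.
    Inputs are float arrays in [0, 1] of the same shape."""
    ir_base, ir_detail = decompose(ir, sigma)
    vis_base, vis_detail = decompose(vis, sigma)
    fused_base = np.maximum(ir_base, vis_base)
    fused_detail = ir_detail + vis_detail
    return np.clip(fused_base + fused_detail, 0.0, 1.0)
```

Separating the two frequency bands is what lets different fusion rules target different image properties: low frequencies carry large-scale intensity (thermal saliency), high frequencies carry edges and texture.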
Funding: supported by the Anhui Province University Key Science and Technology Project (2024AH053415) and the Anhui Province University Major Science and Technology Project (2024AH040229).
Abstract: This research addresses the critical challenge of enhancing satellite images captured under low-light conditions, which suffer from severely degraded quality, including a lack of detail, poor contrast, and low usability. Overcoming this limitation is essential for maximizing the value of satellite imagery in downstream computer vision tasks (e.g., spacecraft on-orbit connection, spacecraft surface repair, and space debris capture) that rely on clear visual information. Our key novelty lies in an unsupervised generative adversarial network featuring two main contributions: (1) an improved U-Net (IU-Net) generator with multi-scale feature fusion in the contracting path for richer semantic feature extraction, and (2) a Global Illumination Attention Module (GIA) at the end of the contracting path to couple local and global information, significantly improving detail recovery and illumination adjustment. The proposed algorithm operates in an unsupervised manner. It is trained and evaluated on our self-constructed, unpaired Spacecraft Dataset for Detection, Enforcement, and Parts Recognition (SDDEP), designed specifically for low-light enhancement tasks. Extensive experiments demonstrate that our method outperforms the baseline EnlightenGAN, achieving improvements of 2.7% in structural similarity (SSIM), 4.7% in peak signal-to-noise ratio (PSNR), 6.3% in learned perceptual image patch similarity (LPIPS), and 53.2% in DeltaE 2000. Qualitatively, the enhanced images exhibit higher overall and local brightness, improved contrast, and more natural visual effects.
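The "couple local and global information" role of the GIA module can be sketched in the style of squeeze-and-excitation channel attention: a global pooling step summarizes overall illumination, and the resulting gate re-weights the local feature map. This is a hypothetical numpy illustration of the general pattern; the actual GIA module's internals (and any learned layers inside it) are not specified by the abstract.

```python
import numpy as np

def global_illumination_attention(feat):
    """SE-style gating sketch: pool a per-channel global statistic,
    squash it to (0, 1), and scale the local features channel-wise.
    feat: (C, H, W) feature tensor."""
    # Global context: one illumination statistic per channel.
    g = feat.mean(axis=(1, 2))              # shape (C,)
    # Sigmoid gate; a learned MLP would sit here in a real module.
    gate = 1.0 / (1.0 + np.exp(-g))         # shape (C,)
    # Broadcast the gate over the spatial dimensions.
    return feat * gate[:, None, None]
```

Because the gate is computed from a global pool, every spatial location is modulated by image-wide illumination statistics, which is one common way to inject global context into a convolutional contracting path.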