Imaging through scattering media faces a critical challenge: deep-learning-based methods inherently suppress high-frequency speckle information, limiting the recovery of fine textures and edges. To overcome this spectral bias, we introduce the concept of the relative speckle frequency domain (RsFD), which redefines high-frequency features as learnable, adaptive components via frequency-domain decomposition. We demonstrate that independently processing generalized high-frequency speckle components enables neural networks to capture latent target details previously obscured in conventional approaches. Leveraging this principle, we design FDUnet, a dual-branch network comprising a low-frequency sub-network (Lnet) for global structure reconstruction and a relative high-frequency sub-network (RHnet) dedicated to enhancing textures and edges. Experiments confirm FDUnet's superiority: it outperforms state-of-the-art methods in both visual fidelity and quantitative metrics, by +5.9% to +8.7% in SSIM and +5.4 to +7.9 dB in PSNR across diverse datasets (MNIST, Fashion-MNIST, FERET). These gains translate into notably better preservation of textures and edges, and exceptional robustness to multimode fiber perturbations. This work bridges the gap between physical priors and neural network learning, unlocking new potential for high-fidelity applications, such as biomedical endoscopic imaging, in dynamic scattering environments.
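The paper's RsFD decomposition is defined in the full text; as a minimal sketch of the general idea behind feeding separate low- and high-frequency speckle components to the two sub-networks, the split can be illustrated with a Fourier-domain circular mask. The `cutoff_ratio` and the hard circular mask here are illustrative assumptions, not the paper's actual RsFD definition:

```python
import numpy as np

def frequency_decompose(img, cutoff_ratio=0.1):
    """Split an image into low- and high-frequency components using a
    circular mask in the centered Fourier spectrum. The cutoff (a
    fraction of the half-diagonal) is an assumed, illustrative choice."""
    h, w = img.shape
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    radius = cutoff_ratio * np.sqrt((h / 2) ** 2 + (w / 2) ** 2)
    low_mask = dist <= radius
    # Inverse-transform each masked spectrum back to the image domain.
    low = np.fft.ifft2(np.fft.ifftshift(spectrum * low_mask)).real
    high = np.fft.ifft2(np.fft.ifftshift(spectrum * ~low_mask)).real
    return low, high

# The split is exactly complementary: low + high reconstructs the input,
# so each branch can specialize without losing information overall.
speckle = np.random.default_rng(0).random((64, 64))
low, high = frequency_decompose(speckle)
print(np.allclose(low + high, speckle))  # True
```

In a dual-branch setup like FDUnet's, `low` would go to the global-structure branch (Lnet) and `high` to the texture/edge branch (RHnet); how the paper adapts the cutoff to make the high-frequency part "relative" and learnable is beyond this sketch.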
Funding: National Natural Science Foundation of China (62362037); Fundamental Research Funds for the Central Universities (30919011401, 30920010001); Natural Science Foundation of Jiangxi Province (20224ACB202011); Jiangsu Province Key Research and Development Project (BE2023817); Hong Kong Research Grants Council (15217721, 15125724, C7074-21GF); Hong Kong Polytechnic University (P0045680, P0043485, P0045762, P0049101).