Funding: Supported by the Sichuan Science and Technology Program [2023YFSY0026, 2023YFH0004].
Abstract: Recent Super-Resolution (SR) algorithms often suffer from excessive model complexity, high computational costs, and limited flexibility across varying image scales. To address these challenges, we propose DDNet, a dynamic and lightweight SR framework designed for arbitrary scaling factors. DDNet integrates a residual learning structure with an Adaptive Fusion Feature Block (AFB) and a scale-aware upsampling module, effectively reducing parameter overhead while preserving reconstruction quality. Additionally, we introduce DDNetGAN, an enhanced variant that leverages a relativistic Generative Adversarial Network (GAN) to further improve texture realism. To validate the proposed models, we conduct extensive training on the DIV2K and Flickr2K datasets and evaluate performance on standard benchmarks including Set5, Set14, Urban100, Manga109, and BSD100. Our experiments cover both symmetric and asymmetric upscaling factors and include ablation studies to assess key components. Results show that DDNet and DDNetGAN achieve competitive performance compared with mainstream SR algorithms, demonstrating a strong balance among accuracy, efficiency, and flexibility. These findings highlight the potential of our approach for practical real-world super-resolution applications.
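The abstract describes a scale-aware upsampling module that handles arbitrary (including asymmetric) scaling factors, but does not detail its design. As a point of reference only, the minimal PyTorch sketch below shows one generic way to upsample to an arbitrary target size by resampling a feature map and projecting it back to RGB; the module name, channel width, and bilinear interpolation choice are illustrative assumptions, not the DDNet module itself.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleAwareUpsample(nn.Module):
    """Illustrative arbitrary-scale upsampler: resamples features to the
    exact target size, then projects back to a 3-channel image.
    A generic sketch, not the AFB/upsampling design from DDNet."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.proj = nn.Conv2d(channels, 3, kernel_size=3, padding=1)

    def forward(self, feats: torch.Tensor, scale_h: float, scale_w: float) -> torch.Tensor:
        _, _, h, w = feats.shape
        out_h = int(round(h * scale_h))  # asymmetric factors are allowed
        out_w = int(round(w * scale_w))
        up = F.interpolate(feats, size=(out_h, out_w),
                           mode='bilinear', align_corners=False)
        return self.proj(up)

# e.g. 2.0x vertically and 3.5x horizontally from a 64-channel feature map
sr = ScaleAwareUpsample(64)(torch.randn(1, 64, 48, 48), 2.0, 3.5)
print(sr.shape)  # torch.Size([1, 3, 96, 168])
```

DDNetGAN is stated to use a relativistic GAN to improve texture realism. The standard relativistic average formulation scores each sample relative to the mean score of the opposite class; the sketch below implements that loss pair, assuming raw discriminator logits as inputs. The actual critic architecture and loss weighting used in DDNetGAN are not specified here.

```python
import torch
import torch.nn.functional as F

def relativistic_average_d_loss(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
    """Discriminator objective: reals should score above the average fake,
    fakes below the average real. Inputs are raw logits."""
    loss_real = F.binary_cross_entropy_with_logits(
        d_real - d_fake.mean(), torch.ones_like(d_real))
    loss_fake = F.binary_cross_entropy_with_logits(
        d_fake - d_real.mean(), torch.zeros_like(d_fake))
    return (loss_real + loss_fake) / 2

def relativistic_average_g_loss(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
    """Generator objective: the symmetric counterpart that pushes fakes
    above the average real score and reals below the average fake score."""
    loss_real = F.binary_cross_entropy_with_logits(
        d_real - d_fake.mean(), torch.zeros_like(d_real))
    loss_fake = F.binary_cross_entropy_with_logits(
        d_fake - d_real.mean(), torch.ones_like(d_fake))
    return (loss_real + loss_fake) / 2
```

In GAN-based SR training this adversarial term is typically combined with pixel-wise and perceptual losses; the specific combination used for DDNetGAN would follow the paper's training details rather than this sketch.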