Journal Articles (1 article found)
BAID: A Lightweight Super-Resolution Network with Binary Attention-Guided Frequency-Aware Information Distillation
Authors: Jiajia Liu, Junyi Lin, Wenxiang Dong, Xuan Zhao, Jianhua Liu, Huiru Li. Computers, Materials & Continua, 2026, Issue 2, pp. 1190-1208 (19 pages)
Abstract: Single Image Super-Resolution (SISR) seeks to reconstruct high-resolution (HR) images from low-resolution (LR) inputs, thereby enhancing visual fidelity and the perception of fine details. While Transformer-based models such as SwinIR, Restormer, and HAT have recently achieved impressive results in super-resolution tasks by capturing global contextual information, these methods often suffer from substantial computational and memory overhead, which limits their deployment on resource-constrained edge devices. To address these challenges, we propose a novel lightweight super-resolution network, termed Binary Attention-Guided Information Distillation (BAID), which integrates frequency-aware modeling with a binary attention mechanism to significantly reduce computational complexity and parameter count while maintaining strong reconstruction performance. The network combines a high-low frequency decoupling strategy with a local-global attention sharing mechanism, enabling efficient compression of redundant computations through binary attention guidance. At the core of the architecture lies the Attention-Guided Distillation Block (AGDB), which retains the strengths of the information distillation framework while introducing a sparse binary attention module to enhance both inference efficiency and feature representation. Extensive ×4 super-resolution experiments on four standard benchmarks (Set5, Set14, BSD100, and Urban100) demonstrate that BAID achieves Peak Signal-to-Noise Ratio (PSNR) values of 32.13, 28.51, 27.47, and 26.15 dB, respectively, with only 1.22 million parameters and 26.1G Floating-Point Operations (FLOPs), outperforming other state-of-the-art lightweight methods such as the Information Multi-Distillation Network (IMDN) and the Residual Feature Distillation Network (RFDN). These results highlight the proposed model's ability to deliver high-quality image reconstruction while offering strong deployment efficiency, making it well suited for image restoration tasks in resource-limited environments.
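The authors' implementation is not part of this record. As a rough illustration only, the sketch below shows one common way to build a binary ({0, 1}) spatial attention gate with a straight-through estimator, in the spirit of the sparse binary attention the abstract describes; the module name, layer choices, and 0.5 threshold are assumptions, not the paper's actual design.

```python
# Hypothetical sketch (not the authors' code): a binary spatial attention gate.
# A 1x1 conv scores each pixel, the soft score is thresholded to a hard {0, 1}
# mask, and a straight-through estimator lets gradients flow through the
# non-differentiable step. Multiplying by the mask zeroes out unattended
# positions, which is the kind of computation-skipping sparsity the abstract
# attributes to binary attention guidance.
import torch
import torch.nn as nn

class BinaryAttentionGate(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # 1x1 conv producing one attention logit per spatial position
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        soft = torch.sigmoid(self.score(x))   # soft attention map in (0, 1)
        hard = (soft > 0.5).float()           # binarized mask in {0, 1}
        # Straight-through estimator: forward pass uses the hard mask,
        # backward pass uses the gradient of the soft map.
        mask = hard + soft - soft.detach()
        return x * mask                       # keep only attended features

if __name__ == "__main__":
    feats = torch.randn(1, 64, 32, 32)        # toy feature map
    gate = BinaryAttentionGate(64)
    out = gate(feats)
    print(out.shape)                          # torch.Size([1, 64, 32, 32])
```

In a distillation-style block such as the AGDB described above, a gate like this would typically sit on the retained-feature branch, so downstream convolutions operate on a sparse feature map; how BAID actually combines it with the frequency decoupling and attention sharing is specified only in the full paper.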
Keywords: single image super-resolution; lightweight network; binary attention; information distillation