A Semantic-Guided State-Space Learning Framework for Low-Light Image Enhancement
Authors: Xi Cai, Xiaoqiang Wang, Huiying Zhao, Guang Han. Computers, Materials & Continua, 2026, No. 5, pp. 1137-1157 (21 pages)
Low-light image enhancement (LLIE) remains challenging due to underexposure, color distortion, and amplified noise introduced during illumination correction. Existing deep learning-based methods typically apply uniform enhancement across the entire image, which overlooks scene semantics and often leads to texture degradation or unnatural color reproduction. To overcome these limitations, we propose a Semantic-Guided Visual Mamba Network (SGVMNet) that unifies semantic reasoning, state-space modeling, and mixture-of-experts routing for adaptive illumination correction. SGVMNet comprises three key components: (1) a semantic modulation module (SMM) that extracts scene-aware semantic priors from pretrained multimodal models, namely Large Language and Vision Assistant (LLaVA) and Contrastive Language-Image Pretraining (CLIP), and injects them hierarchically into the feature stream; (2) a Mixture-of-Experts State-Space Feature Enhancement Module (MoE-SSMFEM) that dynamically selects informative channels and activates specialized state-space experts for efficient global-local illumination modeling; and (3) a Text-Guided Mixture Mamba Block (TGMB) that fuses semantic priors and visual features through bidirectional state propagation. Experimental results demonstrate that on the low-light (LOL) dataset, SGVMNet outperforms other state-of-the-art methods in both quantitative and qualitative evaluations, while maintaining low computational complexity and fast inference. On LOLv2-Syn, SGVMNet achieves 26.512 dB PSNR and 0.935 SSIM, outperforming RetinexFormer by 0.61 dB. On LOLv1, SGVMNet attains 26.50 dB PSNR and 0.863 SSIM. Furthermore, experiments on multiple unpaired real-world datasets validate the superiority of SGVMNet, showing that the model not only exhibits strong cross-scene generalization but also effectively preserves semantic consistency and visual naturalness.
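To illustrate the expert-routing idea behind a module like MoE-SSMFEM, here is a minimal toy sketch, not the paper's code: a learned-style gate scores each expert for an input feature vector and routes the features to the top-scoring one. The expert functions (`brighten`, `smooth`), the gate weights, and all numbers below are hypothetical placeholders standing in for the paper's state-space experts.

```python
import math

def softmax(xs):
    # Numerically stable softmax over gate scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

# Two placeholder "experts": one lifts overall intensity (global
# illumination), one averages neighbors (local detail). These stand in
# for the specialized state-space experts described in the abstract.
def brighten(feat):
    return [f + 0.5 for f in feat]

def smooth(feat):
    return [(feat[max(i - 1, 0)] + f + feat[min(i + 1, len(feat) - 1)]) / 3
            for i, f in enumerate(feat)]

EXPERTS = [brighten, smooth]

def route(feat, gate_weights):
    # Gate score per expert = dot product of features with that
    # expert's gate weight vector; top-1 expert processes the input.
    scores = [sum(f * w for f, w in zip(feat, ws)) for ws in gate_weights]
    probs = softmax(scores)
    k = max(range(len(probs)), key=probs.__getitem__)
    return EXPERTS[k](feat), k

feat = [0.1, 0.2, 0.9]
gate = [[1.0, 0.0, 0.0],   # expert 0 attends to the first channel
        [0.0, 0.0, 1.0]]   # expert 1 attends to the last channel
out, chosen = route(feat, gate)
```

With this gate, the last channel dominates, so the input is routed to the second expert; in the full model the gate is learned and conditioned on the semantic priors rather than fixed.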
Keywords: noise interference; attention mechanism; Vision Mamba; semantic modulation; low-light image enhancement