Abstract
In low-visibility scenes, degraded image quality not only reduces the distinguishability between targets and backgrounds, but also leads to inaccurate localization and category confusion. To address these problems, an improved YOLOv11 object detection algorithm is proposed. First, a multi-scale attention fusion module, MulSimAM, is constructed to enhance the model's ability to extract features from key regions. Then, a dynamic feature fusion module, DyPSA, is designed to replace the original C2PSA structure; by jointly applying dynamic channel mixing and a spatial attention mechanism, it improves the model's adaptive perception of features in different regions. Finally, a localization-accuracy optimization loss, EnSIoU, is proposed, which combines scale-adaptive distance adjustment, an aspect-ratio consistency constraint, and a weighted fusion mechanism to improve object localization accuracy in low-visibility scenes. Experimental results show that the improved algorithm achieves better mAP, precision, and recall on a self-built low-visibility object detection dataset. Ablation experiments further show that both module improvements and the loss-function optimization yield stronger detection performance on low-visibility images.
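The abstract does not detail MulSimAM's internals, but its name indicates it builds on SimAM, a well-known parameter-free attention mechanism that weights each activation by an inverse energy computed from per-channel spatial statistics. As context for readers unfamiliar with that base, a minimal numpy sketch of plain SimAM attention (not the paper's multi-scale extension; `lam` is SimAM's usual regularization coefficient) is:

```python
import numpy as np

def simam(x, lam=1e-4):
    """Parameter-free SimAM attention on a feature map x of shape (C, H, W).

    Each activation is gated by sigmoid(e_inv), where e_inv grows with the
    activation's squared distance from its channel's spatial mean, so
    distinctive pixels receive larger weights.
    """
    n = x.shape[1] * x.shape[2] - 1          # number of "other" pixels per channel
    mu = x.mean(axis=(1, 2), keepdims=True)  # per-channel spatial mean
    d = (x - mu) ** 2                        # squared deviation per pixel
    v = d.sum(axis=(1, 2), keepdims=True) / n  # per-channel spatial variance
    e_inv = d / (4.0 * (v + lam)) + 0.5      # inverse energy (importance score)
    return x * (1.0 / (1.0 + np.exp(-e_inv)))  # sigmoid-gated features
```

The multi-scale fusion described in the paper would presumably apply such gating across feature maps of several resolutions before fusing them; that part is the authors' contribution and is not reproduced here.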
Authors
WANG Shuai; DING Qichuan; REN Shuai (Faculty of Robot Science and Engineering, Northeastern University, Shenyang 110000, China)
Source
Electronics Optics & Control (《电光与控制》), a Peking University Core Journal, 2026, No. 1, pp. 44-50 (7 pages)
Funding
National Natural Science Foundation of China (62373086).