Abstract
Nighttime assisted driving suffers from image-quality degradation due to insufficient light, which causes vehicle detection to face problems such as low visibility, reduced contrast, and increased noise. To address these issues, an improved model, YOLOv8n-STH (you only look once version 8 nano-SPDConv-triplet attention-HS-FPN), is proposed. The model is based on YOLOv8n and targets problems such as difficult feature extraction and small targets in nighttime images. A space-to-depth (SPD) module is added in front of part of the C2f (faster implementation of CSP bottleneck with 2 convolutions) structures in the backbone network, and some of the convolution and pooling layers are replaced with SPDConv. Meanwhile, a lightweight triplet attention mechanism is introduced into the C2f structure to distinguish targets from the background more accurately. Finally, a multi-scale selective feature fusion module is employed so that the model can efficiently filter out more effective feature information. Experimental results on two datasets show that, compared with YOLOv8n, YOLOv8n-STH improves precision by 2.1% and reduces the model size by 24.2%, making it suitable for deployment in resource-limited environments.
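The SPD module mentioned in the abstract rearranges spatial blocks into the channel dimension, so downsampling loses no pixel information (unlike strided convolution or pooling). The following is a minimal, framework-free sketch of that space-to-depth rearrangement on nested lists; it is a generic illustration of the operation, not the paper's implementation, and the function name and list-based layout are illustrative assumptions.

```python
def space_to_depth(x, scale=2):
    """Rearrange an H x W x C feature map (nested lists) into
    (H//scale) x (W//scale) x (C*scale*scale) without discarding
    any values -- the core idea behind SPD/SPDConv downsampling.
    Assumes H and W are divisible by scale."""
    H, W = len(x), len(x[0])
    out = []
    for i in range(0, H, scale):
        row = []
        for j in range(0, W, scale):
            # Concatenate the channels of each scale x scale
            # spatial block into one deeper "pixel".
            cell = []
            for di in range(scale):
                for dj in range(scale):
                    cell.extend(x[i + di][j + dj])
            row.append(cell)
        out.append(row)
    return out


# A 4x4 single-channel map becomes 2x2 with 4 channels:
# every input value survives, only its position changes.
feature_map = [[[i * 4 + j] for j in range(4)] for i in range(4)]
downsampled = space_to_depth(feature_map, scale=2)
```

In SPDConv, this rearrangement is followed by a non-strided convolution that mixes the enlarged channel dimension, which is why fine detail in small, dim nighttime targets is better preserved than with a stride-2 convolution.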
Authors
WU Xiang-Ning, PAN Zhi-Peng, WANG Meng-Xue, TU Yu (School of Computer Science, China University of Geosciences, Wuhan 430078, China)
Source
Computer Systems & Applications, 2025, No. 12, pp. 249-259 (11 pages)
Funding
National Natural Science Foundation of China (U21A2013);
Natural Science Foundation of Hubei Province (2021CFB506).