Accurate vehicle detection is essential for autonomous driving, traffic monitoring, and intelligent transportation systems. This paper presents an enhanced YOLOv8n model that incorporates the Ghost Module, the Convolutional Block Attention Module (CBAM), and Deformable Convolutional Networks v2 (DCNv2). The Ghost Module streamlines feature generation to reduce redundancy, CBAM applies channel and spatial attention to improve feature focus, and DCNv2 enables adaptability to geometric variations in vehicle shapes. These components work together to improve both accuracy and computational efficiency. Evaluated on the KITTI dataset, the proposed model achieves 95.4% mAP@0.5, an 8.97% gain over standard YOLOv8n, along with 96.2% precision, 93.7% recall, and a 94.93% F1-score. Comparative analysis with seven state-of-the-art detectors demonstrates consistent superiority in key performance metrics. An ablation study is also conducted to quantify the individual and combined contributions of the Ghost Module, CBAM, and DCNv2, highlighting their effectiveness in improving detection performance. By addressing feature redundancy, attention refinement, and spatial adaptability, the proposed model offers a robust and scalable solution for vehicle detection across diverse traffic scenarios.
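The abstract does not spell out why the Ghost Module reduces redundancy. A rough back-of-the-envelope sketch, following the standard GhostNet formulation (the ratio s and cheap-operation kernel size d below are illustrative assumptions, not values from this paper), shows the parameter saving of replacing a standard convolution with a Ghost module:

```python
def conv_params(c_in, c_out, k):
    """Weight count of a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def ghost_params(c_in, c_out, k, s=2, d=3):
    """Ghost module: a primary conv produces c_out // s intrinsic maps,
    then cheap d x d depthwise ops generate the remaining 'ghost' maps."""
    intrinsic = c_out // s
    primary = c_in * intrinsic * k * k        # ordinary convolution
    cheap = (s - 1) * intrinsic * d * d       # depthwise: one filter per map
    return primary + cheap

std = conv_params(256, 256, 3)      # 589_824 weights
ghost = ghost_params(256, 256, 3)   # 296_064 weights
print(f"standard: {std}, ghost: {ghost}, ratio: {std / ghost:.2f}")
```

With s = 2 the weight count roughly halves (ratio ≈ 1.99 here), which is the compression-accuracy trade-off the abstract alludes to.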
To reduce the parameter count and floating-point computation of clothing object detection models, an improved lightweight clothing detection model, G-YOLOv5s, is proposed. First, Ghost convolution is used to reconstruct the YOLOv5s backbone; the model is then trained and validated on a subset of the DeepFashion2 dataset; finally, the trained model is applied to object detection in clothing images. Experimental results show that G-YOLOv5s reaches 71.7% mAP with a model size of 9.09 MB and a computational cost of 9.8 GFLOPs. Compared with the original YOLOv5s network, the model size is compressed by 34.8% and the computation is reduced by 41.3%, at a cost of only 1.3% in accuracy, making the model convenient to deploy on resource-constrained devices.
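As a quick consistency check on the reported reductions, the implied size and cost of the unmodified baseline can be back-computed from the G-YOLOv5s figures (the resulting values are close to the commonly published YOLOv5s figures of roughly 14 MB and 16.5 GFLOPs, which supports the claimed percentages):

```python
# Reported G-YOLOv5s figures and claimed reductions from the abstract
g_size_mb = 9.09
g_flops_g = 9.8
size_cut = 0.348    # 34.8% size compression
flops_cut = 0.413   # 41.3% FLOP reduction

# Implied original YOLOv5s size and cost before the Ghost rebuild
base_size = g_size_mb / (1 - size_cut)    # about 13.9 MB
base_flops = g_flops_g / (1 - flops_cut)  # about 16.7 GFLOPs
print(f"implied YOLOv5s baseline: {base_size:.1f} MB, {base_flops:.1f} GFLOPs")
```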