Journal Articles
Found 2 articles
1. Multi-object vehicle detection in dim scenes based on the Dim env-YOLO algorithm (cited 17 times)
Authors: 郭克友, 王苏东, 李雪, 张沫. 《计算机工程》 (Computer Engineering), indexed in CAS, CSCD, and the Peking University Core Journal list, 2023, No. 3, pp. 312-320 (9 pages)
Night-time road conditions under low illumination are complex; existing research on night-time vehicle recognition is limited and suffers from poor real-time performance and excessive hardware resource consumption. To address the many interference factors and poor detection performance of vehicle recognition in night-time scenes, a Dim env-YOLO vehicle detection algorithm based on YOLOv4 is proposed. The MobileNetV3 network replaces the original YOLOv4 backbone to reduce the number of model parameters. A low-light image enhancement method is applied on top of the improved YOLOv4 model to make vehicle targets more recognizable in dim environments. On this basis, an attention mechanism is introduced to strengthen feature-information selection, while depthwise separable convolution reduces the network's computational cost. A dataset of night-time scene images from selected Beijing roads was self-built for experimental validation. The results show that under Gaussian noise, blur, and rainy or foggy night conditions, the Dim env-YOLO algorithm remains stable; for traffic under dim conditions with illumination below 30 lx, its detection mAP reaches 90.49%, and for the most common category, cars, the mAP exceeds 96%, outperforming Faster-RCNN, YOLOv3, YOLOv4, and similar network models under dim lighting conditions.
Keywords: dim scenes; vehicle detection; depthwise separable convolution; Dim env-YOLO algorithm; MobileNetV3 network
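Both abstracts above motivate backbone replacement with the parameter savings of depthwise separable convolution. A minimal sketch of that arithmetic, assuming the textbook factorization (a k x k depthwise step followed by a 1x1 pointwise step) rather than the papers' exact layer configurations:

```python
# Illustrative sketch (not taken from either paper): parameter counts for a
# standard convolution versus a depthwise separable convolution.
# A standard k x k convolution with C_in input and C_out output channels
# needs k*k*C_in*C_out weights; the depthwise separable factorization needs
# only k*k*C_in (depthwise) plus C_in*C_out (1x1 pointwise) weights.

def standard_conv_params(k: int, c_in: int, c_out: int) -> int:
    """Weights in a standard k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k: int, c_in: int, c_out: int) -> int:
    """Weights in the depthwise + pointwise factorization (biases ignored)."""
    depthwise = k * k * c_in   # one k x k filter per input channel
    pointwise = c_in * c_out   # 1x1 convolution mixing channels
    return depthwise + pointwise

if __name__ == "__main__":
    k, c_in, c_out = 3, 128, 256   # hypothetical layer sizes for illustration
    std = standard_conv_params(k, c_in, c_out)
    sep = depthwise_separable_params(k, c_in, c_out)
    print(f"standard: {std}, separable: {sep}, ratio: {sep / std:.3f}")
```

For a 3x3 layer the factorization keeps roughly 1/9 of the weights plus the pointwise term, which is the kind of reduction that lets a MobileNetV3-style backbone shrink model size.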
2. Recognition of tea buds based on an improved YOLOv7 model
Authors: Mengxue Song, Ce Liu, Liqing Chen, Lichao Liu, Jingming Ning, Chuanyang Yu. International Journal of Agricultural and Biological Engineering, 2024, No. 6, pp. 238-244 (7 pages)
The traditional recognition algorithm is prone to missing detection targets in the complex tea garden environment, and it is difficult to satisfy the requirements for tea bud recognition accuracy and efficiency. In this study, the YOLOv7 model was improved to raise tea bud recognition accuracy in some extreme tea garden scenarios. In the improved model, a lightweight MobileNetV3 network replaces the original backbone network, which reduces the size of the model and improves detection efficiency. The convolutional block attention module is introduced to enhance attention to the features of small and occluded tea buds, suppressing the interference of the complex tea garden environment on tea bud recognition and strengthening the feature extraction capability of the recognition model. Moreover, to further improve recognition accuracy in dense and occluded scenarios, the soft non-maximum suppression strategy is integrated into the recognition model. Experimental results show that the improved YOLOv7 model achieves precision, recall, and mean average precision (mAP) values of 88.3%, 87.4%, and 88.5%, respectively. Compared with the Faster R-CNN, SSD, and original YOLOv7 algorithms, the mAP of the improved YOLOv7 model is higher by 7.4, 7.9, and 3.9 percentage points, respectively, and its recognition speed is faster by 94.9%, 46.2%, and 16.9%. The proposed model can rapidly and accurately identify tea buds in multiple complex tea garden scenarios, such as dense distribution, closeness to the background color, and mutual occlusion, with high generalization and robustness, which can provide theoretical and technical support for tea-picking robots.
Keywords: tea bud recognition; YOLOv7; lightweight MobileNetV3 network; CBAM
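The second abstract integrates soft non-maximum suppression to handle dense, mutually occluded tea buds. A hedged sketch of the linear-decay variant of soft-NMS; the box format, IoU threshold, and score cutoff below are illustrative assumptions, not the paper's settings:

```python
# Sketch of linear soft-NMS (not the paper's exact implementation): instead of
# discarding boxes that overlap the current best detection, their confidence
# scores are decayed in proportion to the overlap, so heavily occluded but
# genuine targets can survive with a reduced score.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    if inter <= 0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def soft_nms(boxes, scores, iou_thresh=0.3, score_thresh=0.1):
    """Linear soft-NMS: greedily keep the highest-scoring box, then decay the
    scores of overlapping boxes by (1 - IoU) rather than removing them."""
    boxes, scores = list(boxes), list(scores)
    kept = []
    while boxes:
        i = max(range(len(scores)), key=scores.__getitem__)
        best_box, best_score = boxes.pop(i), scores.pop(i)
        if best_score < score_thresh:
            break   # everything remaining has been decayed below the cutoff
        kept.append((best_box, best_score))
        for j, b in enumerate(boxes):
            ov = iou(best_box, b)
            if ov > iou_thresh:
                scores[j] *= (1.0 - ov)   # decay instead of hard suppression
    return kept
```

With a hard NMS at the same IoU threshold, a strongly overlapping second bud would simply be deleted; here it is retained with a lowered score, which is the behavior the abstract relies on for dense and occluded scenes.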