

Obstacle Detection Method for Complex Cotton Field Environments Based on Improved YOLO 11n Model
Abstract: To address the difficulty of accurately detecting occluded obstacles in complex cotton field environments and the limited computing power of edge devices, a field obstacle detection method based on an improved YOLO 11n model was proposed. Firstly, the lightweight StarNet network was adopted as the primary feature extraction network, and a dynamic position bias attention block (DBA) was introduced to reconstruct the convolutional block with parallel spatial attention (C2PSA), enhancing interaction among multi-scale features. Secondly, Kolmogorov-Arnold generalized network convolution (KAGNConv) was used to replace the bottleneck structure in the cross stage partial with kernel size 2 (C3k2) module of the baseline model, enabling fine-grained feature extraction while giving the model greater flexibility and interpretability. Finally, the separated and enhancement attention module (SEAM) was integrated into the detection head to strengthen detection capability in occlusion scenarios. Experimental results showed that, compared with the baseline model, the improved YOLO 11n-SKS increased precision, recall, mAP50, and mAP50-95 by 2.3, 2.1, 1.3, and 1.4 percentage points, reaching 91.7%, 88.3%, 91.9%, and 62.3%, respectively. The model's floating-point operations were only 4.4×10^9 FLOPs, and the number of parameters was reduced by 17.1%. The model achieves a favorable balance between performance and computational complexity, meets the real-time detection requirements of cotton harvesting operations, lowers the computing-power requirements for deployment on edge devices, and provides technical support for the autonomous and safe operation of cotton pickers.
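
The abstract describes integrating a separated and enhancement attention module (SEAM) into the detection head to improve detection of occluded obstacles. The following is a minimal, illustrative PyTorch sketch of that kind of detection-head attention: a depthwise-separable ("separated") stage followed by channel re-weighting ("enhancement"). It is written from the abstract's description only; the class name SEAMLikeAttention, the exact layer layout, and the reduction parameter are assumptions, not the authors' implementation or the published SEAM code.

# Illustrative sketch only (assumed structure, not the paper's code).
import torch
import torch.nn as nn


class SEAMLikeAttention(nn.Module):
    """Depthwise-separable enhancement followed by channel attention."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # "Separated" part: a depthwise conv processes each channel independently,
        # then a pointwise conv mixes channels back together.
        self.separated = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, groups=channels),
            nn.BatchNorm2d(channels),
            nn.GELU(),
            nn.Conv2d(channels, channels, 1),
            nn.BatchNorm2d(channels),
            nn.GELU(),
        )
        # "Enhancement" part: global pooling plus a small bottleneck MLP produces
        # per-channel weights that emphasise informative channels (e.g. visible
        # parts of a partially occluded obstacle).
        self.enhance = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.separated(x) + x      # residual connection keeps original features
        return y * self.enhance(y)     # channel-wise re-weighting of the feature map


if __name__ == "__main__":
    feat = torch.randn(1, 64, 80, 80)             # a typical P3-level feature map
    print(SEAMLikeAttention(64)(feat).shape)      # torch.Size([1, 64, 80, 80])

In a YOLO-style head, such a block would be inserted before the classification and regression convolutions at each scale, leaving the feature-map shape unchanged so the rest of the head needs no modification.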
Authors: HAN Keli; WANG Zhenkun; YU Yongfeng; LIU Shuping; HAN Shujie; HAO Fuping (Chinese Academy of Agricultural Mechanization Sciences Group Co., Ltd., Beijing 100083, China; Modern Agricultural Equipment Co., Ltd., Beijing 100083, China; National Key Laboratory of Agricultural Equipment Technology, Beijing 100083, China)
Source: Transactions of the Chinese Society for Agricultural Machinery (《农业机械学报》, Peking University Core Journal), 2025, No. 5, pp. 111-120 (10 pages)
Funding: National Key Research and Development Program of China (2022YFD2002402); Major Science and Technology Project of China National Machinery Industry Corporation Co., Ltd. (ZDZX2022-1).
Keywords: cotton picker; obstacle detection; depth camera; YOLO 11n model; object recognition