Abstract

To meet the accuracy, real-time, and computing-power requirements of autonomous navigation for meat-pigeon farming robots, this study offloads the robot's on-board computing demand to a cloud server and proposes a cloud-robot visual navigation method based on an improved YOLOv5s. First, Ghost-Shuffle convolution (GSConv) replaces the standard convolution layers in the backbone and neck of YOLOv5s, and redundant layers in the backbone are pruned. Second, an efficient channel attention (ECA) mechanism is introduced into the Spatial Pyramid Pooling-Fast (SPPF) module, and the C3 modules in the neck are replaced by a fusion of Ghost Bottleneck and ECA, which reduces the parameter count and computation, makes the network lightweight, and improves small-target detection. Training results show that, compared with the original YOLOv5s, the improved model reduces the total parameter count by 75.57%, with a model size of only 3.7 MB; precision P, mean average precision mAP, and recall R increase by 2.60, 2.59, and 2.62 percentage points, respectively; and the detection speed reaches 51 frames/s, a reduction of 7.2 ms per frame. Deploying this model on the cloud server, together with compressing image resolution and reducing model parameters, effectively raises image transmission speed and lowers the robot's own computing-power demand. Visual navigation tests in a pigeon farm under different illumination and travel-speed conditions show that the improved model's navigation algorithm achieves a mean maximum lateral deviation of 5.281 cm, a mean absolute lateral deviation of no more than 1.474 cm, a mean maximum heading deviation of 5.455°, and a mean absolute heading deviation of no more than 1.897°. The proposed improved model therefore offers high accuracy and fast speed for visual navigation of pigeon-farming robots, and can serve as a technical reference for intelligent production in farms.
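The abstract does not give implementation details of the ECA module it introduces into SPPF. As a rough illustration only (an assumption based on the standard ECA-Net formulation, not the authors' code), ECA applies a 1-D convolution across channel descriptors, with a kernel size chosen adaptively from the channel count C:

```python
import math

def eca_kernel_size(channels: int, gamma: int = 2, b: int = 1) -> int:
    """Adaptive kernel size for ECA's 1-D channel convolution.

    Following the common ECA-Net heuristic: k is the odd number
    nearest to (log2(C) + b) / gamma, so wider layers attend over
    more neighboring channels. gamma and b are the usual defaults.
    """
    t = int(abs((math.log2(channels) + b) / gamma))
    # Force an odd kernel so the 1-D convolution stays centered.
    return t if t % 2 else t + 1
```

For example, a 256-channel layer would use a kernel of size 5, while a 64-channel layer would use size 3; the actual kernel sizes in the improved YOLOv5s are not stated in this abstract.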
Authors
CHEN Jiazheng, FU Genping, HUANG Weifeng, HU Hongnan, ZHANG Shiang, ZHU Lixue
(College of Mechanical and Electrical Engineering, Zhongkai University of Agriculture and Engineering, Guangzhou 510225, China; School of Automation, Zhongkai University of Agriculture and Engineering, Guangzhou 510225, China; College of Innovation and Entrepreneurship Education, Zhongkai University of Agriculture and Engineering, Guangzhou 510225, China)
Source
Journal of Intelligent Agricultural Mechanization (《智能化农业装备学报(中英文)》)
2025, No. 3, pp. 98-110 (13 pages)
Funding
Guangzhou Science and Technology Plan Project (2023B03J0862)
Research Project of the Guangdong Laboratory for Lingnan Modern Agriculture Science and Technology (NZ2021038)
National Natural Science Foundation of China (32472015)