Abstract
The classification and recognition of tea buds is a crucial step in the production of premium tea. To address the shortcomings of current tea bud recognition algorithms, namely large model size, heavy computational load, and the inability to distinguish picking morphology, this study takes YOLOv5s as the baseline and proposes an improved fresh tea leaf recognition model, YOLOv5s-SPCS. First, images of fresh tea leaves were collected in both laboratory and natural environments, through offline and online acquisition across multiple scenarios, to build a fresh tea leaf dataset divided into a training set and a test set. Second, a Shuffle Block module built on the ShuffleNetV2 design replaces the convolution modules in the YOLOv5s backbone, reducing the number of parameters and the computational cost while accelerating feature extraction. Third, the partial convolution structure PConv and the parameter-free attention mechanism SimAM are introduced into the neck network to construct a C3-PCS module that replaces the original C3 structure, further reducing computational redundancy and memory access and improving recognition accuracy with only a minimal increase in parameters. Finally, SIoU is adopted as the bounding-box loss function, which accelerates the convergence of bounding-box regression and yields more precisely positioned prediction boxes. Experimental results show that the parameter count, computational cost, and weight file size of YOLOv5s-SPCS are 14%, 14%, and 16% of those of YOLOv5s, respectively. On fresh tea leaf recognition the model achieves a precision of 81.8% and a mean average precision (mAP) of 82.4%, improving precision by 2.7 percentage points over the original model while keeping mAP unchanged. Furthermore, the overall performance of YOLOv5s-SPCS is superior to that of commonly used object detection models, including Faster R-CNN, SSD, YOLOv3, and YOLOv4. This study provides effective technical support for fresh tea leaf recognition and classification and for subsequent deployment on mobile devices.
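The Shuffle Block mentioned above relies on ShuffleNetV2's channel shuffle, which interleaves channels across groups so that information flows between grouped convolutions. A minimal NumPy sketch of that operation (illustrative only, not the authors' implementation; the function name and group count are assumptions):

```python
import numpy as np

def channel_shuffle(x, groups):
    """Interleave channels across groups, as in ShuffleNetV2.

    x: feature map of shape (N, C, H, W), with C divisible by `groups`.
    """
    n, c, h, w = x.shape
    assert c % groups == 0, "channel count must be divisible by groups"
    # Split channels into groups, swap the group and per-group axes,
    # then flatten back: channel order (0, 1, 2, 3) with 2 groups
    # becomes (0, 2, 1, 3).
    x = x.reshape(n, groups, c // groups, h, w)
    x = x.transpose(0, 2, 1, 3, 4)
    return x.reshape(n, c, h, w)
```

Because the shuffle is a pure reshape/transpose, it adds no parameters and negligible computation, which is consistent with the lightweight design goal described in the abstract.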
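SimAM, the parameter-free attention used in the C3-PCS module, weights each activation by an energy-based importance score computed from the feature map itself, so it adds no learnable parameters. A minimal sketch assuming the standard SimAM formulation (the stability constant `e_lambda` is the commonly used default, not a value taken from this paper):

```python
import numpy as np

def simam(x, e_lambda=1e-4):
    """Parameter-free SimAM attention over a (N, C, H, W) feature map."""
    n = x.shape[2] * x.shape[3] - 1
    # Squared deviation of each activation from its per-channel mean
    d = (x - x.mean(axis=(2, 3), keepdims=True)) ** 2
    # Per-channel variance estimate over the spatial dimensions
    v = d.sum(axis=(2, 3), keepdims=True) / n
    # Inverse energy: activations that stand out from their channel
    # receive larger importance scores
    e_inv = d / (4.0 * (v + e_lambda)) + 0.5
    # Sigmoid gating scales each activation by its importance
    return x * (1.0 / (1.0 + np.exp(-e_inv)))
```

Since the gate is a sigmoid of a positive score, each activation is scaled by a factor in (0, 1), preserving sign while emphasizing distinctive neurons.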
Authors
吴擎
韦润轩
周乐
杨浩
刘婉茹
徐红梅
WU Qing; WEI Runxuan; ZHOU Le; YANG Hao; LIU Wanru; XU Hongmei (College of Engineering, Huazhong Agricultural University, Wuhan 430070, China; Key Laboratory of Agricultural Equipment in Mid-Lower Yangtze River, Ministry of Agriculture and Rural Affairs, Wuhan 430070, China)
Source
《智能化农业装备学报(中英文)》
2025, Issue 1, pp. 1-14 (14 pages)
Journal of Intelligent Agricultural Mechanization
Funding
Program for Outstanding Young and Middle-aged Scientific and Technological Innovation Teams of Higher Education Institutions in Hubei Province (T201934).