Abstract: Automatic measurement of the impurity rate of chopped (billet-type) machine-harvested sugarcane objectively evaluates cane quality before it enters the sugar mill for crushing. To address the low efficiency and strong subjectivity of the existing sampling-and-weighing approach to impurity estimation, as well as the technical difficulties posed by the complex field environment (motion-induced blur of the target cane billets, varying illumination, and occlusion by cane leaves), a method for detecting the impurity rate of machine-harvested cane is proposed, based on an improved YOLOv5 and mounted on a chopper-type sugarcane harvester. First, because the cane billets captured by the industrial camera are small targets, a small-target detection layer is added to strengthen the network's focus on them. Second, the C3 module is replaced with the C2f module to improve detection speed and accuracy on small, low-contrast targets. Finally, the WIoU (Weighted Intersection over Union) loss function is introduced to improve bounding-box regression accuracy and enhance training on the dataset. Experimental results show that the improved YOLOv5 impurity-rate detection model achieves a billet recognition accuracy of 95.2% and an mAP (mean Average Precision) of 62.5%, improvements of 15.3 and 13.5 percentage points over the original YOLOv5, outperforming YOLOv7, YOLOv8, and other models. In bench tests, the mean relative error of the impurity rate detected by the improved model is 19.58%, 38.12 percentage points lower than before the improvement; the mean detected impurity rate is 7.31%, 0.05 percentage points higher than the manually measured ground truth. The method therefore offers a real-time, efficient, and accurate way to measure the impurity rate of machine-harvested cane over the full throughput, providing technical support for assessing in-field sugarcane harvesting quality.
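The WIoU loss adopted above can be sketched in plain Python as a distance-based focusing factor multiplying the ordinary IoU loss. This is a minimal sketch of WIoU v1 on axis-aligned `(x1, y1, x2, y2)` boxes; the function names and box layout are illustrative assumptions, not the paper's implementation (which would operate on tensors with the enclosing-box term detached from the gradient):

```python
import math

def iou(a, b):
    """Plain IoU of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def wiou_loss(pred, gt):
    """WIoU v1 sketch: R_WIoU * (1 - IoU)."""
    l_iou = 1.0 - iou(pred, gt)
    # centres of prediction and ground truth
    pcx, pcy = (pred[0] + pred[2]) / 2, (pred[1] + pred[3]) / 2
    gcx, gcy = (gt[0] + gt[2]) / 2, (gt[1] + gt[3]) / 2
    # smallest enclosing box; its size normalises the centre distance
    wg = max(pred[2], gt[2]) - min(pred[0], gt[0])
    hg = max(pred[3], gt[3]) - min(pred[1], gt[1])
    r_wiou = math.exp(((pcx - gcx) ** 2 + (pcy - gcy) ** 2)
                      / (wg ** 2 + hg ** 2 + 1e-9))
    return r_wiou * l_iou
```

A perfectly aligned prediction yields a loss near zero, while the exponential factor penalises centre offsets more strongly than the bare IoU loss would, which is the intuition behind the improved regression accuracy reported above.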
Funding: supported by the National Natural Science Foundation of China (61741303) and the Foundation Project of Guangxi Key Laboratory of Spatial Information and Mapping (No. 21-238-21-16).
Abstract: Detecting individuals wearing safety helmets in complex environments faces several challenges, including limited detection accuracy and frequent missed or false detections. Additionally, existing algorithms often have excessive parameter counts, complex network structures, and high computational demands, making them difficult to deploy efficiently on resource-constrained devices such as embedded systems. To address this problem, this research proposes an optimized, lightweight model called FGP-YOLOv8, an improved version of YOLOv8n. The YOLOv8 backbone network is replaced with the FasterNet model to reduce parameters and computational demands, while local convolution layers are added; this modification minimizes computational costs with only a minor impact on accuracy. A new GSTA (GSConv-Triplet Attention) module is introduced to enhance feature fusion and reduce computational complexity, using attention weights generated from dimensional interactions within the feature map. Additionally, the ParNet-C2f module replaces the original C2f (CSP Bottleneck with 2 Convolutions) module, improving feature extraction for safety helmets of various shapes and sizes. The CIoU (Complete-IoU) loss is replaced with the WIoU (Wise-IoU) loss to further boost detection accuracy and generalization. Experimental results validate the improvements: the proposed model reduces the parameter count by 19.9% and the computational load by 18.5%, while mAP (mean average precision) increases by 2.3% and precision improves by 1.2%. These results demonstrate the model's robust performance in detecting safety helmets across diverse environments.
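FasterNet's parameter savings come mainly from partial convolution (PConv), which convolves only a fraction of the channels and passes the rest through untouched. The following back-of-the-envelope comparison of weight counts illustrates why this shrinks the model; the channel count (256) and split ratio (1/4) are illustrative assumptions, not FGP-YOLOv8's actual configuration:

```python
def conv_params(c_in, c_out, k):
    # weight count of a regular k x k convolution, bias ignored
    return c_in * c_out * k * k

def pconv_params(c, k, ratio=4):
    # PConv convolves only c/ratio channels; the rest are an identity pass-through
    cp = c // ratio
    return conv_params(cp, cp, k)

regular = conv_params(256, 256, 3)  # 589_824 weights
partial = pconv_params(256, 3)      # 36_864 weights: ratio^2 = 16x fewer
```

At a 1/4 split the partial convolution carries 1/16 of the weights (and FLOPs) of the regular one, which is consistent with the parameter and compute reductions the abstract reports at the whole-model level.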
Abstract: To address missed detections and low accuracy in X-ray weld defect detection, an X-ray weld defect detection method is proposed. First, the 3×3 convolution kernels in the EMA (Efficient Multi-scale Attention) module are replaced with 5×5 kernels to enlarge the receptive field, and the average pooling is changed to multi-scale pooling to extract multi-scale features; the improved EMA module is then embedded into the backbone network to strengthen multi-scale defect detection. Next, adaptive average-pooling and max-pooling layers are introduced to improve the spatial pyramid pooling module, enhancing perception of weld edge information. Finally, Dual convolution replaces conventional convolution in the neck to reduce the parameter count, and the WIoU (Wise Intersection over Union) loss replaces the CIoU (Complete Intersection over Union) loss to speed up convergence. Experimental results show that, compared with YOLOv8n, the proposed method reduces the parameter count by 4.02% and raises the mean average precision by 5.9%, making it well suited to X-ray weld defect detection.
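The motivation for swapping the 3×3 kernels for 5×5 ones can be checked with the standard receptive-field recurrence for a stack of convolutions. This is a minimal sketch; the two-layer, stride-1 stack is an illustrative assumption, not the EMA module's actual structure:

```python
def receptive_field(layers):
    """Theoretical receptive field of a conv stack.

    layers: list of (kernel_size, stride) tuples, applied in order.
    Uses the recurrence rf += (k - 1) * jump; jump *= stride.
    """
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
    return rf

base = receptive_field([(3, 1), (3, 1)])  # two 3x3 convs -> RF of 5
wide = receptive_field([(5, 1), (5, 1)])  # two 5x5 convs -> RF of 9
```

Each pixel of the output now "sees" a 9×9 rather than a 5×5 input neighbourhood, which is what lets the modified module pick up larger weld defects, at the cost of more weights per kernel.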