To address the low accuracy and missed detections that existing models exhibit for small vehicles and pedestrians in infrared road scenes, an improved YOLOv5s infrared road detection algorithm is proposed. First, drawing on the computation idea of Focal Loss, a new dynamically scaled loss function (focal and complete IoU, Focal-CIoU) is introduced to improve detection precision. Second, an adaptive coordinate attention mechanism (adaptive coordinate attention, Ada-CA), which improves coordinate-information embedding by making the activation function adaptive, is introduced to improve the ability to localize targets accurately. Finally, the C3 module is improved into MultiS-C3, which carries multi-scale feature information, to improve the model's recognition ability. Comparative experiments show that, relative to the original network, the improved detection algorithm raises precision by 2.0%, recall by 7.3%, and mean average precision by 6.6%, and can effectively detect vehicles and pedestrians against infrared backgrounds.
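Several of the abstracts above build on a Focal-CIoU loss. The sketch below shows the standard CIoU bounding-box loss and a focal-style IoU^γ re-weighting in the spirit of the Focal-EIoU formulation, which keeps more gradient for well-overlapping boxes and down-weights low-quality anchors. This is a minimal illustration, not the papers' exact implementation; the γ value and ε terms are assumptions.

```python
import math

def box_iou(b1, b2):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    xi1, yi1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    xi2, yi2 = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(0.0, xi2 - xi1) * max(0.0, yi2 - yi1)
    area1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    area2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    return inter / (area1 + area2 - inter + 1e-9)

def ciou_loss(pred, target):
    """CIoU loss: 1 - IoU + centre-distance term + aspect-ratio term."""
    iou = box_iou(pred, target)
    # squared distance between the two box centres
    rho2 = (((pred[0] + pred[2]) - (target[0] + target[2])) ** 2
            + ((pred[1] + pred[3]) - (target[1] + target[3])) ** 2) / 4
    # squared diagonal of the smallest box enclosing both
    cw = max(pred[2], target[2]) - min(pred[0], target[0])
    ch = max(pred[3], target[3]) - min(pred[1], target[1])
    c2 = cw ** 2 + ch ** 2 + 1e-9
    # aspect-ratio consistency term v and its trade-off weight alpha
    w1, h1 = pred[2] - pred[0], pred[3] - pred[1]
    w2, h2 = target[2] - target[0], target[3] - target[1]
    v = (4 / math.pi ** 2) * (math.atan(w2 / h2) - math.atan(w1 / h1)) ** 2
    alpha = v / (1 - iou + v + 1e-9)
    return 1 - iou + rho2 / c2 + alpha * v

def focal_ciou_loss(pred, target, gamma=0.5):
    """Focal-style scaling: multiply the CIoU loss by IoU**gamma,
    down-weighting low-overlap (low-quality) box pairs."""
    return box_iou(pred, target) ** gamma * ciou_loss(pred, target)
```

For identical boxes the loss is (numerically) zero; for partially overlapping boxes the focal variant is strictly smaller than plain CIoU, reflecting the re-weighting.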
Tea, a globally cultivated crop renowned for its unique flavor profile and health-promoting properties, ranks among the most favored functional beverages worldwide. However, diseases severely jeopardize the production and quality of tea leaves, leading to significant economic losses. While early and accurate identification coupled with the removal of infected leaves can mitigate widespread infection, manual leaf removal remains time-consuming and expensive. Utilizing robots for pruning can significantly enhance efficiency and reduce costs. However, the accuracy of object detection directly impacts the overall efficiency of pruning robots. In complex tea plantation environments, cluttered image backgrounds, the overlapping and occlusion of leaves, and small, densely packed harmful leaves all introduce interference, and existing algorithms perform poorly at detecting such small, dense targets. To address these challenges, this paper collected a dataset of 1108 images of harmful tea leaves and proposed the YOLO-DBD model. The model excels at efficiently identifying harmful tea leaves in various poses against complex backgrounds, providing crucial guidance for the posture and obstacle avoidance of a robotic arm during pruning. The improvements proposed in this study encompass the Cross Stage Partial with Deformable Convolutional Networks v2 (C2f-DCN) module, Bi-Level Routing Attention (BRA), Dynamic Head (DyHead), and the Focal Complete Intersection over Union (Focal-CIoU) loss function, enhancing the model's feature extraction, computation allocation, and perception capabilities. Compared to the baseline model YOLOv8s, mean Average Precision at IoU 0.5 (mAP0.5) increased by 6%, and Floating Point Operations (FLOPs) decreased by 3.3 G.
In natural scenes, weather conditions, illumination intensity, and background interference all affect the accuracy of flame detection. To achieve real-time, accurate flame detection in complex scenes, a real-time and efficient flame detection method is proposed on the basis of the YOLOv5 detection network, combining the Focal Loss function, the CIoU (Complete Intersection over Union) loss function, and multi-feature fusion. To alleviate the imbalance between positive and negative samples and make full use of the information in hard examples, the focal loss function is introduced; in addition, a multi-feature fusion method combining static and dynamic flame features is designed to eliminate false flame alarms. To address the lack of flame datasets at home and abroad, a large-scale, high-quality flame dataset on the order of 100,000 images is constructed (http://www.yongxu.org/databases.html). Experiments show that the method clearly improves accuracy, speed, precision, and generalization while reducing the false alarm rate.
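The focal loss mentioned above eases positive/negative imbalance by shrinking the loss of well-classified (easy) examples so hard examples dominate training. A minimal per-sample sketch of the standard binary formulation; the α and γ values are the common defaults, not values taken from the paper:

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss for a single prediction.
    p: predicted probability of the positive class; y: label in {0, 1}.
    The (1 - p_t)**gamma factor down-weights easy examples;
    alpha balances the positive and negative classes."""
    p_t = p if y == 1 else 1.0 - p
    a_t = alpha if y == 1 else 1.0 - alpha
    return -a_t * (1.0 - p_t) ** gamma * math.log(p_t)
```

With γ = 0 this reduces to α-weighted cross-entropy; with γ = 2 a confident correct prediction (p_t = 0.9) contributes orders of magnitude less loss than an uncertain one (p_t = 0.5).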