Purpose: The purpose of this work is to present an approach for autonomous detection of eye diseases in fundus images. Furthermore, this work presents an improved variant of the Tiny YOLOv7 model developed specifically for eye disease detection. The proposed model is a highly useful tool for building applications that autonomously detect eye diseases in fundus images and thereby assist ophthalmologists. Design/methodology/approach: The approach adopted in this work is twofold. First, a richly annotated dataset consisting of four eye disease classes, namely cataract, glaucoma, retinal disease and normal eye, was created. Second, an improved variant of the Tiny YOLOv7 model, named EYE-YOLO, was developed. The proposed EYE-YOLO integrates multi-spatial pyramid pooling in the feature extraction network and Focal-EIOU loss in the detection network of Tiny YOLOv7. Moreover, at run time, a mosaic augmentation strategy was utilized with the proposed model to achieve benchmark results. Evaluations were carried out on the performance metrics precision, recall, F1 score, average precision (AP) and mean average precision (mAP). Findings: The proposed EYE-YOLO achieved 28% higher precision, 18% higher recall, 24% higher F1 score and 30.81% higher mAP than the Tiny YOLOv7 model. In terms of per-class AP on the employed dataset, it achieved 9.74% higher AP for cataract, 27.73% higher AP for glaucoma, 72.50% higher AP for retinal disease and 13.26% higher AP for normal eye. Compared with the state-of-the-art Tiny YOLOv5, Tiny YOLOv6 and Tiny YOLOv8 models, the proposed EYE-YOLO achieved 6% to 23.32% higher mAP. Originality/value: This work addresses eye disease recognition as a bounding box regression and detection problem, whereas related research is largely based on eye disease classification. Another highlight of this work is the proposal of a richly annotated dataset for different eye diseases, useful for training deep learning-based object detectors. The major highlight lies in the proposed improved variant of the Tiny YOLOv7 model focused on eye disease detection. The proposed modifications to Tiny YOLOv7 helped the model achieve better results than the state-of-the-art Tiny YOLOv8 and YOLOv8 Nano.
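All four abstracts in this listing rely on the Focal-EIOU bounding box regression loss. As a reference point, the sketch below implements the loss for a single pair of axis-aligned boxes in pure Python; the box format (x1, y1, x2, y2) and the focal exponent gamma=0.5 are assumptions for illustration, not details taken from these papers.

```python
import math

def eiou_loss(box_a, box_b, gamma=0.5):
    """Focal-EIoU loss for two boxes given as (x1, y1, x2, y2).

    EIoU = 1 - IoU + normalized center-distance, width and height
    penalties; the result is then re-weighted by IoU**gamma so that
    already well-aligned boxes contribute less to the gradient.
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b

    # Intersection and union for the IoU term.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    iou = inter / (area_a + area_b - inter)

    # Smallest enclosing box, used to normalize the penalties.
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)

    center_dist = ((ax1 + ax2) - (bx1 + bx2)) ** 2 / 4 \
                + ((ay1 + ay2) - (by1 + by2)) ** 2 / 4
    diag = cw ** 2 + ch ** 2
    w_diff = ((ax2 - ax1) - (bx2 - bx1)) ** 2
    h_diff = ((ay2 - ay1) - (by2 - by1)) ** 2

    eiou = 1 - iou + center_dist / diag + w_diff / cw ** 2 + h_diff / ch ** 2
    return iou ** gamma * eiou  # focal re-weighting

print(eiou_loss((0, 0, 10, 10), (0, 0, 10, 10)))  # 0.0 for identical boxes
```

A detector would apply this per matched anchor/ground-truth pair and sum; the extra width and height terms are what distinguish EIoU from the earlier CIoU formulation.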
To address the low detection accuracy and the frequent false and missed detections of external mechanical damage hazard targets under complex backgrounds, large scale variation and occlusion, this paper proposes a detection algorithm for such hazards based on an improved YOLOv7 (You Only Look Once version 7). A Swin Transformer attention mechanism is added to the detection head network to improve multi-scale feature extraction; some convolution modules in the backbone are replaced with depthwise separable convolutions to reduce the model's computational cost; the Focal-EIOU (Focal and Efficient Intersection over Union) loss function is adopted to optimize the predicted boxes; and, finally, the Mish activation function is introduced to strengthen the network's generalization ability and improve detection performance under complex backgrounds and partial occlusion of targets. Experimental results show that, compared with the original YOLOv7, the improved algorithm raises precision, recall and mean average precision by 5.2%, 10.6% and 5.2%, respectively, and holds a clear advantage over other mainstream algorithms in detection accuracy and model size, verifying the effectiveness of the improvements and providing algorithmic support for edge recognition of external mechanical damage hazard targets in complex scenes.
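Two of the changes in the abstract above are easy to make concrete: the Mish activation is simply x·tanh(softplus(x)), and a depthwise separable convolution factors a standard convolution into a depthwise and a 1x1 pointwise step, sharply cutting parameters. The sketch below illustrates both; the channel/kernel sizes are arbitrary examples, not values from the paper.

```python
import math

def mish(x):
    """Mish activation: x * tanh(softplus(x)).
    Smooth and non-monotonic, which is the usual motivation for
    swapping it in to improve generalization."""
    return x * math.tanh(math.log1p(math.exp(x)))

def depthwise_separable_params(c_in, c_out, k):
    """Weight counts (biases omitted) of a standard k x k convolution
    vs. its depthwise-separable factorization."""
    standard = k * k * c_in * c_out
    separable = k * k * c_in + c_in * c_out  # depthwise + 1x1 pointwise
    return standard, separable

print(mish(0.0))                                # 0.0
print(depthwise_separable_params(128, 256, 3))  # (294912, 33920)
```

For a 3x3 convolution from 128 to 256 channels the separable variant needs roughly 11% of the weights, which is the kind of cost reduction the backbone substitution targets.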
Detecting rust on metal surfaces is a key technology for real-time quality assessment in intelligent laser cleaning systems, but traditional vision-based inspection methods struggle to recognize small-scale rust particles. Based on the YOLO (You Only Look Once) algorithm, an improved model is proposed. The model embeds a Convolutional Block Attention Module (CBAM) in the backbone network to strengthen feature discrimination against complex backgrounds; it designs a partial-convolution-based Cross Stage Partial with Pyramid Concatenation (CSPPC) module to replace the C2f (Cross Stage Partial Network 2 with Focus) module, cutting the parameter count by 3.11% and the floating-point operations by 6.64%; and it adopts the Focal and Efficient Intersection over Union (Focal-EIoU) loss function to optimize bounding box regression and effectively mitigate the imbalance between positive and negative samples. Results show that the improved YOLOv8-CCF (YOLOv8-CBAM-CSPPC-Focal-EIoU) model reaches a mean Average Precision at a 95% Intersection over Union threshold (mAP@95%) of 0.96902 on a self-built dataset, a 5.003% improvement over the original model, with the parameter count reduced to 213,000 and a detection speed of 500 frames/s, markedly alleviating missed detections of small targets. The model provides an effective solution for real-time rust detection on metal surfaces and automated laser rust removal.
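The parameter savings reported above come from partial convolution, the building block behind modules such as CSPPC: a k x k convolution is applied to only a fraction of the channels while the rest pass through untouched. The sketch below only counts weights to show the effect; the split ratio of 4 follows the FasterNet convention and is an assumption here, as the paper's exact CSPPC configuration is not given in the abstract.

```python
def pconv_params(c, k, ratio=4):
    """Weight counts of a regular k x k conv over all c channels vs.
    a partial convolution (PConv) that convolves only c // ratio
    channels and forwards the remainder unchanged (biases omitted)."""
    c_part = c // ratio
    regular = k * k * c * c
    partial = k * k * c_part * c_part
    return regular, partial

regular, partial = pconv_params(256, 3)
print(regular, partial)  # 589824 36864 -> PConv uses 1/16 of the weights
```

With a 1/4 channel split the convolution cost drops by a factor of ratio squared, which is why stacking such blocks yields measurable parameter and FLOP reductions at a small accuracy cost.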
Rapid detection and accurate identification of cotton field pests are important prerequisites for preventing pest damage and improving cotton quality. To address the high visual similarity among insects and the severe background interference found in real cotton fields, this study proposes an ECSF-YOLOv7 pest detection model. First, EfficientFormerV2 is adopted as the feature extraction network to strengthen feature extraction while reducing the number of model parameters. A Convolutional Block Attention Module (CBAM) is embedded at the backbone outputs to enhance feature extraction for small targets and suppress background interference. Next, GSConv convolutions are used to build a Slim-Neck structure, reducing the parameter count while maintaining recognition accuracy. Finally, Focal-EIOU (Focal and Efficient IOU loss) is adopted as the bounding box regression loss to accelerate network convergence and improve detection accuracy. Results show that the improved ECSF-YOLOv7 model achieves a mean average precision (mAP) of 95.71% on the cotton pest test set at a detection speed of 69.47 frames/s. Compared with the mainstream detectors YOLOv7, SSD, YOLOv5l and YOLOX-m, the mAP of ECSF-YOLOv7 is higher by 1.43, 9.08, 1.94 and 1.52 percentage points, respectively, and the improved model has fewer parameters and faster detection, providing technical support for rapid and accurate pest detection in cotton fields.
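Every abstract above reports mAP, which is the per-class average precision (AP) averaged over classes. As a reminder of how AP is computed, the sketch below ranks detections by confidence, sweeps the ranked list accumulating true/false positives, and integrates precision over recall (the simple all-point rectangle rule; benchmark protocols such as COCO additionally interpolate and average over IoU thresholds).

```python
def average_precision(scores, labels, n_gt):
    """AP for one class: scores are detection confidences, labels mark
    whether each detection matched a ground-truth box, n_gt is the
    number of ground-truth boxes for the class."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = fp = 0
    ap, prev_recall = 0.0, 0.0
    for i in order:
        if labels[i]:
            tp += 1
        else:
            fp += 1
        recall = tp / n_gt
        precision = tp / (tp + fp)
        ap += precision * (recall - prev_recall)  # rectangle rule
        prev_recall = recall
    return ap

# Three detections, two matching a ground truth (2 GT boxes in total).
print(average_precision([0.9, 0.8, 0.7], [True, False, True], 2))  # 0.8333...
```

mAP is then the mean of this value across all classes, which is why per-class AP gains (as reported for cataract, glaucoma, etc. in the first abstract) translate directly into mAP gains.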