Purpose: The purpose of this work is to present an approach for autonomous detection of eye disease in fundus images. Furthermore, this work presents an improved variant of the Tiny YOLOv7 model developed specifically for eye disease detection. The proposed model is a highly useful tool for developing applications for autonomous detection of eye diseases in fundus images that can assist ophthalmologists. Design/methodology/approach: The approach adopted in this work is twofold. First, a richly annotated dataset consisting of the eye disease classes cataract, glaucoma, retinal disease and normal eye was created. Second, an improved variant of the Tiny YOLOv7 model was developed and proposed as EYE-YOLO. The proposed EYE-YOLO model was developed by integrating multi-spatial pyramid pooling in the feature extraction network and Focal-EIOU loss in the detection network of the Tiny YOLOv7 model. Moreover, at run time, the mosaic augmentation strategy was utilized with the proposed model to achieve benchmark results. Further, evaluations were carried out for the performance metrics precision, recall, F1 score, average precision (AP) and mean average precision (mAP). Findings: The proposed EYE-YOLO achieved 28% higher precision, 18% higher recall, 24% higher F1 score and 30.81% higher mAP than the Tiny YOLOv7 model. Moreover, in terms of AP for each class of the employed dataset, it achieved 9.74% higher AP for cataract, 27.73% higher AP for glaucoma, 72.50% higher AP for retinal disease and 13.26% higher AP for normal eye. In comparison to the state-of-the-art Tiny YOLOv5, Tiny YOLOv6 and Tiny YOLOv8 models, the proposed EYE-YOLO achieved 6–23.32% higher mAP. Originality/value: This work addresses eye disease recognition as a bounding-box regression and detection problem, whereas the related research is largely based on eye disease classification. Another highlight of this work is a richly annotated dataset for different eye diseases that is useful for training deep learning-based object detectors. The major highlight lies in the proposal of an improved variant of the Tiny YOLOv7 model focused on eye disease detection. The proposed modifications to Tiny YOLOv7 helped the model achieve better results compared to the state-of-the-art Tiny YOLOv8 and YOLOv8 Nano.
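The abstract above names Focal-EIOU as the regression loss added to the Tiny YOLOv7 detection network but does not reproduce it. The following is a minimal sketch of the published Focal-EIOU formulation (Zhang et al., 2021) for a single pair of axis-aligned boxes; the gamma value, (x1, y1, x2, y2) box format and epsilon are illustrative assumptions, not details taken from the EYE-YOLO paper.

```python
def focal_eiou_loss(pred, target, gamma=0.5, eps=1e-7):
    """Sketch of Focal-EIOU for one box pair in (x1, y1, x2, y2) format:
    L = IoU**gamma * (1 - IoU + dist_term + width_term + height_term)."""
    px1, py1, px2, py2 = pred
    tx1, ty1, tx2, ty2 = target

    # Intersection over union
    iw = max(0.0, min(px2, tx2) - max(px1, tx1))
    ih = max(0.0, min(py2, ty2) - max(py1, ty1))
    inter = iw * ih
    union = (px2 - px1) * (py2 - py1) + (tx2 - tx1) * (ty2 - ty1) - inter + eps
    iou = inter / union

    # Smallest enclosing box and its squared diagonal
    cw = max(px2, tx2) - min(px1, tx1)
    ch = max(py2, ty2) - min(py1, ty1)
    c2 = cw ** 2 + ch ** 2 + eps

    # Center-distance, width and height penalties (the "E" in EIOU)
    rho2 = ((px1 + px2) - (tx1 + tx2)) ** 2 / 4 + ((py1 + py2) - (ty1 + ty2)) ** 2 / 4
    w_term = ((px2 - px1) - (tx2 - tx1)) ** 2 / (cw ** 2 + eps)
    h_term = ((py2 - py1) - (ty2 - ty1)) ** 2 / (ch ** 2 + eps)

    eiou = 1.0 - iou + rho2 / c2 + w_term + h_term
    return (iou ** gamma) * eiou  # focal re-weighting by IoU

# Example: a slightly shifted prediction against its ground-truth box
# focal_eiou_loss((10, 10, 50, 60), (12, 8, 55, 58))
```

In training, this would be vectorized over all matched prediction-target pairs; the scalar form is kept here only for readability.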
Rapid detection and accurate identification of cotton field pests are important prerequisites for preventing pest damage and improving cotton quality. To address the high visual similarity between insects and severe background interference in real cotton field environments, this study proposes ECSF-YOLOv7, a cotton pest detection model. First, EfficientFormerV2 is adopted as the feature extraction network to strengthen feature extraction while reducing the number of model parameters; at the same time, a convolutional block attention module (CBAM) is embedded at the backbone outputs to enhance feature extraction for small targets and suppress background interference. Second, GSConv convolutions are used to build a Slim-Neck structure, which reduces the parameter count while maintaining recognition accuracy. Finally, Focal-EIOU (focal and efficient IOU loss) is adopted as the bounding-box regression loss to accelerate network convergence and improve detection accuracy. The results show that the improved ECSF-YOLOv7 model achieves a mean average precision (mAP) of 95.71% on the cotton pest test set at a detection speed of 69.47 frames/s. Compared with the mainstream object detectors YOLOv7, SSD, YOLOv5l and YOLOX-m, the mAP of ECSF-YOLOv7 is higher by 1.43, 9.08, 1.94 and 1.52 percentage points, respectively, while the improved model has fewer parameters and runs faster, providing technical support for rapid and accurate detection of cotton field pests.
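The abstract above embeds CBAM at the backbone outputs to sharpen small-target features and suppress background clutter. The block below is a minimal PyTorch sketch of the standard CBAM formulation (a channel gate followed by a spatial gate); the reduction ratio, the 7x7 spatial kernel and the channel count are illustrative assumptions and may differ from the ECSF-YOLOv7 implementation.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Channel attention (shared MLP over pooled descriptors) followed by
    spatial attention (conv over channel-wise average and max maps)."""

    def __init__(self, channels: int, reduction: int = 16, spatial_kernel: int = 7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        self.spatial = nn.Conv2d(2, 1, spatial_kernel,
                                 padding=spatial_kernel // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel gate: sigmoid of MLP(avg-pool) + MLP(max-pool)
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)
        # Spatial gate: conv over stacked channel-wise average and max maps
        s_avg = torch.mean(x, dim=1, keepdim=True)
        s_max, _ = torch.max(x, dim=1, keepdim=True)
        return x * torch.sigmoid(self.spatial(torch.cat([s_avg, s_max], dim=1)))

# Example: attach to a 256-channel backbone feature map
# feat = torch.randn(1, 256, 40, 40); out = CBAM(256)(feat)
```

Because the gates only rescale the input tensor, the block can be inserted after any backbone stage without changing downstream shapes, which is consistent with placing it at the backbone outputs as the abstract describes.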
[Objective] To address the difficulty of recognizing vehicles in complex urban environments, an improved model, DB-YOLOv8n (deformable block YOLOv8n), based on YOLOv8n (you only look once version 8n) is proposed. [Methods] First, an efficient channel attention (ECA) mechanism and an improved weighted bidirectional feature pyramid network (BiFPN) are fused into the neck network to enhance vehicle detection under dim lighting and the handling of multi-scale images, especially for distant or partially occluded vehicles. Second, deformable convolutional networks (DCN) are introduced into the backbone to improve the model's adaptability to vehicles of different sizes. Finally, the focal and efficient intersection over union loss (Focal-EIOU loss) for accurate bounding-box regression replaces the efficient intersection over union (EIOU) loss, further improving the stability of the model. [Results] On a self-built vehicle dataset, DB-YOLOv8n improves mean average precision, precision and recall by 3.2%, 3% and 2%, respectively, over YOLOv8n. [Conclusion] The results provide a theoretical reference for improving the accuracy of vehicle detection.
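The abstract above fuses efficient channel attention (ECA) into the YOLOv8n neck. The sketch below shows the standard ECA block (global average pooling followed by a 1-D convolution across channels and a sigmoid gate); the kernel size and the exact placement inside DB-YOLOv8n are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient channel attention: a lightweight channel gate built from a
    1-D convolution over the pooled channel descriptor (no dimensionality
    reduction, unlike SE blocks)."""

    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size,
                              padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W) -> per-channel descriptor (N, C) via global average pooling
        y = x.mean(dim=(2, 3))
        # Local cross-channel interaction via 1-D conv, then sigmoid gating
        y = self.conv(y.unsqueeze(1)).squeeze(1)
        return x * torch.sigmoid(y)[:, :, None, None]

# Example: gate a neck feature map before fusion
# feat = torch.randn(1, 128, 20, 20); out = ECA(kernel_size=3)(feat)
```

Like CBAM, the block preserves the tensor shape, so it can sit in front of the BiFPN-style fusion in the neck without altering the detection heads.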