Abstract: Bees play a crucial role in the global food chain, pollinating over 75% of food crops and producing valuable products such as bee pollen, propolis, and royal jelly. However, the Asian hornet poses a serious threat to bee populations by preying on them and disrupting agricultural ecosystems. To address this issue, this study developed a modified YOLOv7-tiny (You Only Look Once) model for efficient hornet detection. The model incorporated space-to-depth (SPD) and squeeze-and-excitation (SE) attention mechanisms and involved detailed annotation of the hornet's head and full body, significantly enhancing the detection of small objects. The Taguchi method was also used to optimize the training parameters. Data for this study were collected from the Roboflow platform as a 640×640-resolution dataset, on which the YOLOv7-tiny model was trained. After the training parameters were optimized with the Taguchi method, significant improvements were observed in accuracy, precision, recall, F1 score, and mean average precision (mAP) for hornet detection. Without the hornet-head label, incorporating the SPD attention mechanism yielded a peak mAP of 98.7%, an 8.58% increase over the original YOLOv7-tiny. With the hornet-head label, applying the SPD attention mechanism and the Soft-CIOU loss function raised the mAP to 97.3%, a 7.04% increase over the original YOLOv7-tiny. Furthermore, the Soft-CIOU loss function contributed additional performance gains during the validation phase.
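The space-to-depth (SPD) operation mentioned in this abstract can be sketched briefly. The idea, as used in small-object detectors, is to rearrange each 2×2 spatial block into the channel dimension instead of downsampling with a strided convolution, so no pixel information is discarded. The following is a minimal illustrative sketch in plain Python (the paper's actual SPD module presumably operates on convolutional feature tensors; function and variable names here are hypothetical):

```python
# Space-to-depth (SPD): rearranges each block x block spatial patch into the
# channel dimension, so an H x W x C map becomes (H/2) x (W/2) x 4C for
# block=2. Illustrative only; a real detector would do this on GPU tensors.

def space_to_depth(fmap, block=2):
    """fmap: nested list [H][W][C]; returns [H//block][W//block][C*block*block]."""
    h, w = len(fmap), len(fmap[0])
    assert h % block == 0 and w % block == 0
    out = []
    for i in range(0, h, block):
        row = []
        for j in range(0, w, block):
            # concatenate the channels of every pixel in the block x block patch
            cell = []
            for di in range(block):
                for dj in range(block):
                    cell.extend(fmap[i + di][j + dj])
            row.append(cell)
        out.append(row)
    return out

# A 4x4 single-channel map becomes 2x2 with 4 channels.
fm = [[[r * 4 + c] for c in range(4)] for r in range(4)]
out = space_to_depth(fm)
print(len(out), len(out[0]), len(out[0][0]))  # 2 2 4
```

The key property is that the transform is lossless: every input value survives in the output, which is what makes SPD attractive for tiny objects such as a hornet's head.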
Abstract: Purpose: This work presents an approach for autonomous detection of eye disease in fundus images, together with an improved variant of the Tiny YOLOv7 model developed specifically for eye disease detection. The proposed model is a useful tool for building applications that autonomously detect eye diseases in fundus images and assist ophthalmologists. Design/methodology/approach: The approach is twofold. First, a richly annotated dataset covering the eye disease classes cataract, glaucoma, retinal disease, and normal eye was created. Second, an improved variant of the Tiny YOLOv7 model, EYE-YOLO, was developed by integrating multi-spatial pyramid pooling in the feature extraction network and Focal-EIOU loss in the detection network of Tiny YOLOv7. Moreover, at run time, the mosaic augmentation strategy was used with the proposed model to achieve benchmark results. Evaluations were carried out for precision, recall, F1 score, average precision (AP), and mean average precision (mAP). Findings: The proposed EYE-YOLO achieved 28% higher precision, 18% higher recall, 24% higher F1 score, and 30.81% higher mAP than the Tiny YOLOv7 model. In terms of per-class AP on the employed dataset, it achieved 9.74% higher AP for cataract, 27.73% higher AP for glaucoma, 72.50% higher AP for retinal disease, and 13.26% higher AP for normal eye. In comparison to the state-of-the-art Tiny YOLOv5, Tiny YOLOv6, and Tiny YOLOv8 models, the proposed EYE-YOLO achieved 6–23.32% higher mAP. Originality/value: This work addresses eye disease recognition as a bounding-box regression and detection problem, whereas related research is largely based on eye disease classification. Another highlight is the richly annotated dataset for different eye diseases, useful for training deep learning-based object detectors. The major highlight lies in the improved variant of Tiny YOLOv7 focused on eye disease detection; the proposed modifications helped the model achieve better results than the state-of-the-art Tiny YOLOv8 and YOLOv8 Nano.