Abstract: In recent years, advancements in autonomous vehicle technology have accelerated, promising safer and more efficient transportation systems. However, achieving fully autonomous driving in challenging weather conditions, particularly in snowy environments, remains difficult. Snow-covered roads introduce unpredictable surface conditions, occlusions, and reduced visibility that require robust and adaptive path detection algorithms. This paper presents an enhanced road detection framework for snowy environments, leveraging the Simple Framework for Contrastive Learning of Visual Representations (SimCLR) for self-supervised pretraining, hyperparameter optimization, and uncertainty-aware object detection to improve the performance of You Only Look Once version 8 (YOLOv8). The model is trained and evaluated on a custom-built dataset collected from snowy roads in Tromsø, Norway, which covers a range of snow textures, illumination conditions, and road geometries. The proposed framework achieves 99% mAP@50 and 97% mAP@50–95, demonstrating the effectiveness of YOLOv8 for real-time road detection in extreme winter conditions. The findings contribute to the safe and reliable deployment of autonomous vehicles in Arctic environments, enabling robust decision-making in hazardous weather conditions. This research lays the groundwork for more resilient perception models in self-driving systems, paving the way for the future development of intelligent and adaptive transportation networks.
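As context for the self-supervised pretraining step, the following is a minimal PyTorch sketch of SimCLR's NT-Xent contrastive loss; the batch size, embedding dimension, and temperature value are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of SimCLR's NT-Xent contrastive loss (PyTorch).
# Shapes and the temperature are illustrative, not from the paper.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """z1, z2: (N, D) projections of two augmented views of the same N images."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit norm
    sim = z @ z.t() / temperature                        # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))                    # exclude self-similarity
    # The positive for row i is its counterpart from the other view.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Usage: two augmentations of the same batch through the same encoder + head.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent_loss(z1, z2).item())
```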
Funding: Supported by 2023IT020 of the Industry-University-Research Innovation Fund for Chinese Universities - New Generation Information Technology Innovation Program, and PX-972024121 of the Education & Teaching Reform Program of Guangdong Ocean University.
Abstract: Aiming at the problems of insufficient feature extraction for small targets, complex image backgrounds, and low detection accuracy in marine life detection, this paper proposes SGW-YOLOv8, a marine life detection algorithm based on an improved YOLOv8. First, the Adaptive Fine-Grained Channel Attention (FCA) module is fused into the backbone of the YOLOv8 network to improve the model's feature extraction ability. Second, the C2f module in the neck of the network is replaced with the Efficient Multi-Scale Attention (C2f_EMA) module to improve detection of small underwater targets. Finally, the original loss function is replaced with Weighted Intersection over Union (WIoU), so that the model better adapts to target detection against complex ocean backgrounds. Experiments on the Underwater Robot Picking Contest (URPC) dataset show that the improved algorithm achieves a detection accuracy of 84.5%, 2.3% higher than before the improvement, and that it accurately detects small-target marine organisms and adapts to detection tasks in a variety of complex environments.
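For reference, below is a hedged sketch of a WIoU-v1-style bounding-box loss in PyTorch: the plain IoU loss is scaled by exp(d²/c²), where d is the center distance and c the diagonal of the smallest enclosing box (detached from the graph). WIoU v3 adds a dynamic focusing weight omitted here, and the (x1, y1, x2, y2) box layout is an assumption for illustration.

```python
# Hedged sketch of a WIoU-v1-style bounding-box regression loss (PyTorch).
import torch

def wiou_v1_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """pred, target: (N, 4) boxes as (x1, y1, x2, y2)."""
    px1, py1, px2, py2 = pred.unbind(-1)
    tx1, ty1, tx2, ty2 = target.unbind(-1)
    # Intersection over union
    iw = (torch.min(px2, tx2) - torch.max(px1, tx1)).clamp(min=0)
    ih = (torch.min(py2, ty2) - torch.max(py1, ty1)).clamp(min=0)
    inter = iw * ih
    union = (px2 - px1) * (py2 - py1) + (tx2 - tx1) * (ty2 - ty1) - inter
    iou = inter / union.clamp(min=1e-7)
    # Squared center distance and enclosing-box diagonal
    d2 = ((px1 + px2) - (tx1 + tx2)) ** 2 / 4 + ((py1 + py2) - (ty1 + ty2)) ** 2 / 4
    cw = torch.max(px2, tx2) - torch.min(px1, tx1)
    ch = torch.max(py2, ty2) - torch.min(py1, ty1)
    c2 = (cw ** 2 + ch ** 2).detach().clamp(min=1e-7)  # detached, as in WIoU
    return (torch.exp(d2 / c2) * (1 - iou)).mean()
```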
Abstract: In response to the low detection accuracy and the susceptibility to missed and false detections of small targets in unmanned aerial vehicle (UAV) aerial images, an improved UAV image target detection algorithm based on YOLOv8 was proposed in this study. To begin with, the CoordAtt attention mechanism was employed to enhance the feature extraction capability of the backbone network, thereby reducing background interference. Additionally, a BiFPN feature fusion network with an added small-object detection layer was used to enhance the model's ability to perceive small objects. Furthermore, a multi-level fusion module was designed to effectively integrate shallow and deep information, and an enhanced MPDIoU loss function further improved detection performance. Experimental results on the publicly available VisDrone2019 dataset showed that the improved model outperformed the YOLOv8 baseline, with mAP@0.5 improved by 20%, and that detection accuracy for small targets was improved.
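As an illustration of the attention mechanism named above, the following is a minimal PyTorch sketch of a CoordAtt-style coordinate attention block: direction-aware pooling along height and width yields two attention maps that reweight the input. The channel count and reduction ratio are illustrative, and this is not the authors' exact implementation.

```python
# Hedged sketch of a CoordAtt (Coordinate Attention) block (PyTorch).
import torch
import torch.nn as nn

class CoordAtt(nn.Module):
    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.Hardswish()
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        # Direction-aware pooling: average over W -> (h, 1) map, over H -> (1, w) map.
        xh = x.mean(dim=3, keepdim=True)                      # (n, c, h, 1)
        xw = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)  # (n, c, w, 1)
        y = self.act(self.bn(self.conv1(torch.cat([xh, xw], dim=2))))
        yh, yw = torch.split(y, [h, w], dim=2)
        ah = torch.sigmoid(self.conv_h(yh))                       # (n, c, h, 1)
        aw = torch.sigmoid(self.conv_w(yw.permute(0, 1, 3, 2)))   # (n, c, 1, w)
        return x * ah * aw

# Usage
att = CoordAtt(64)
print(att(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```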
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 62101275 and 62101274).
Abstract: UAV-based object detection is rapidly expanding in both civilian and military applications, including security surveillance, disaster assessment, and border patrol. However, challenges such as small objects, occlusions, complex backgrounds, and variable lighting persist due to the unique perspective of UAV imagery. To address these issues, this paper introduces DAFPN-YOLO, an innovative model based on YOLOv8s (You Only Look Once version 8s). The model strikes a balance between detection accuracy and speed while reducing parameters, making it well-suited for multi-object detection tasks from drone perspectives. A key feature of DAFPN-YOLO is the enhanced Drone-AFPN (Adaptive Feature Pyramid Network), which adaptively fuses multi-scale features to optimize feature extraction and enhance spatial and small-object information. To fully leverage Drone-AFPN's multi-scale capabilities, a dedicated 160×160 small-object detection head was added, significantly boosting detection accuracy for small targets. In the backbone, the C2f_Dual (Cross Stage Partial with Cross-Stage Feature Fusion Dual) module and the SPPELAN (Spatial Pyramid Pooling with Enhanced Local Attention Network) module were integrated. These components improve feature extraction and information aggregation while reducing parameters and computational complexity, enhancing inference efficiency. Additionally, Shape-IoU (Shape Intersection over Union) is used as the loss function for bounding box regression, enabling more precise shape-based object matching. Experimental results on the VisDrone 2019 dataset demonstrate the effectiveness of DAFPN-YOLO. Compared to YOLOv8s, the proposed model achieves a 5.4 percentage point increase in mAP@0.5, a 3.8 percentage point improvement in mAP@0.5:0.95, and a 17.2% reduction in parameter count. These results highlight DAFPN-YOLO's advantages in UAV-based object detection, offering valuable insights for applying deep learning to UAV-specific multi-object detection tasks.
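To illustrate the general idea of adaptive multi-scale fusion behind AFPN-style necks, the sketch below resizes feature levels to a common resolution and blends them with learned, softmax-normalized spatial weights. It is a simplified stand-in under these assumptions, not the paper's Drone-AFPN.

```python
# Hedged sketch of adaptive multi-scale feature fusion (PyTorch).
# Each level contributes per-pixel according to a learned weight map.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveFusion(nn.Module):
    def __init__(self, channels: int, num_levels: int = 3):
        super().__init__()
        # One 1x1 conv per level produces a scalar weight map.
        self.weight_convs = nn.ModuleList(
            nn.Conv2d(channels, 1, kernel_size=1) for _ in range(num_levels)
        )

    def forward(self, feats: list) -> torch.Tensor:
        # Resize every level to the resolution of the first (finest) level.
        size = feats[0].shape[-2:]
        feats = [F.interpolate(f, size=size, mode="nearest") for f in feats]
        logits = torch.cat([conv(f) for conv, f in zip(self.weight_convs, feats)], dim=1)
        weights = logits.softmax(dim=1)  # (n, num_levels, H, W), sums to 1 per pixel
        return sum(weights[:, i : i + 1] * f for i, f in enumerate(feats))

# Usage with three pyramid levels of an assumed 128-channel neck
fuse = AdaptiveFusion(channels=128)
levels = [torch.randn(1, 128, 80, 80), torch.randn(1, 128, 40, 40), torch.randn(1, 128, 20, 20)]
print(fuse(levels).shape)  # torch.Size([1, 128, 80, 80])
```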
Abstract: With the gradual development of automatic driving technology, attention is no longer limited to everyday autonomous-driving target detection. To address the difficulty of fast and accurate detection of visual targets in complex night-time driving scenes, a detection algorithm based on an improved YOLOv8s was proposed. First, by adding a Triplet Attention module to the downsampling layers of the original model, the model effectively retains and enhances feature information relevant to target detection on lower-resolution feature maps; this improves the robustness of the detection network and reduces missed detections. Second, the Soft-NMS algorithm was introduced to handle dense targets, overlapping objects, and complex scenes; it effectively reduces false positives and missed detections, improving overall performance on highly overlapping detection results. Finally, the MPDIoU loss function was adopted. Experimental results showed that, compared with the original model, the improved method, whose mAP and accuracy increased by 2.9% and 2.8% respectively, achieves better detection accuracy and speed in night-time vehicle detection and effectively alleviates the difficulty of target detection in night scenes.
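For reference, the following is a minimal NumPy sketch of Gaussian Soft-NMS, the rescoring scheme introduced above: instead of deleting boxes that overlap the current top box, their scores decay by exp(-IoU²/σ). The σ and score-threshold values are illustrative assumptions.

```python
# Hedged sketch of Gaussian Soft-NMS (NumPy).
import numpy as np

def _iou(a: np.ndarray, b: np.ndarray) -> float:
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def soft_nms(boxes: np.ndarray, scores: np.ndarray,
             sigma: float = 0.5, score_thresh: float = 0.001) -> list:
    """boxes: (N, 4) as (x1, y1, x2, y2); returns kept indices in score order."""
    idxs = list(range(len(scores)))
    scores = scores.copy()
    keep = []
    while idxs:
        best = max(idxs, key=lambda i: scores[i])
        if scores[best] < score_thresh:
            break
        keep.append(best)
        idxs.remove(best)
        for i in idxs:
            # Decay, rather than suppress, overlapping neighbors.
            scores[i] *= np.exp(-(_iou(boxes[best], boxes[i]) ** 2) / sigma)
    return keep

# Usage: two heavily overlapping boxes survive with decayed scores.
boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], dtype=float)
print(soft_nms(boxes, np.array([0.9, 0.8, 0.7])))
```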
Abstract: To accurately identify and classify Hangbaiju chrysanthemums at different flowering stages and meet the requirements of automated harvesting, this study proposes YOLOv8s-RDL, a Hangbaiju detection model based on an improved YOLOv8s. First, the C2f (faster implementation of CSP bottleneck with 2 convolutions) module in the neck network is replaced with the RCS-OSA (one-shot aggregation of reparameterized convolution based on channel shuffle) module to improve the feature fusion efficiency of the backbone network. Second, the detection head is replaced with DyHead (dynamic head) fused with DCNv3 (deformable convolutional networks v3), using multi-head self-attention to strengthen the expressive power of the detection head. Finally, the LAMP (layer-adaptive magnitude-based pruning) channel pruning algorithm is applied to reduce the parameter count and model complexity. Experimental results show that the YOLOv8s-RDL model achieves average precisions of 96.3% and 97.7% for classifying the Jumi and Taiju flowering stages, improvements of 3.8 and 1.5 percentage points over YOLOv8s, while the weight file is 6 MB smaller than that of YOLOv8s. Using the TIDE (toolkit for identifying detection and segmentation errors) metric, the YOLOv8s-RDL model reduces classification errors and background detection errors by 0.55 and 1.26 respectively compared with YOLOv8s. This study provides a theoretical basis and technical support for stage-specific automated harvesting of Hangbaiju.
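As background on the pruning step, the sketch below computes LAMP scores as defined in the original layer-adaptive magnitude-based pruning paper: within a layer, each squared weight magnitude is divided by the tail sum of squared magnitudes at or above it in sorted order, and the globally lowest-scoring weights are pruned. This is a simplified weight-level illustration; the abstract applies LAMP for channel pruning, and this is not the authors' pipeline.

```python
# Hedged sketch of LAMP scores for magnitude-based pruning (PyTorch).
import torch

def lamp_scores(weight: torch.Tensor) -> torch.Tensor:
    flat = weight.detach().flatten() ** 2
    order = torch.argsort(flat)                  # ascending squared magnitude
    sorted_sq = flat[order]
    # Tail sums: sum of squared magnitudes from each position to the end.
    tail = torch.flip(torch.cumsum(torch.flip(sorted_sq, [0]), 0), [0])
    scores = torch.empty_like(flat)
    scores[order] = sorted_sq / tail.clamp(min=1e-12)
    return scores.view_as(weight)

# Usage: keep the 70% of weights with the highest global LAMP score.
w = torch.randn(16, 8)
s = lamp_scores(w)
mask = s > torch.quantile(s.flatten(), 0.3)
print(mask.float().mean())  # roughly 0.7 of weights kept
```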
Abstract: In forestry management, detecting fires promptly and identifying their scale are critical for safety protection and fire control. To address the low accuracy, missed and false detections, and insufficient real-time performance of existing fire detection algorithms, this paper proposes MDSYOLOv8, a real-time fire detection algorithm for UAV aerial images. With YOLOv8 as the baseline, the seventh-layer convolution module of the backbone and the convolution modules of the neck are replaced with Dynamic Snake Convolution (DSConv) to improve feature extraction and strengthen the learning of fine-grained features. A Multidimensional Collaborative Attention (MCA) mechanism is then added between the neck and the detection head to strengthen neck feature fusion, enhance small-target detection, and suppress irrelevant background information. Finally, the SIoU loss function replaces the original CIoU loss in YOLOv8 to accelerate convergence and improve regression accuracy. Experimental results show that MDSYOLOv8 achieves an mAP of 95.89% for smoke detection on the public KMU dataset, 3.33 percentage points higher than the YOLOv8 baseline, demonstrating excellent detection performance. In addition, UAV aerial fire images collected from the Internet were used to build the UFF (UAV field fire) dataset, whose main objects are flames and smoke and which covers scenes where fire hazards may occur, such as forests and cities. In in-depth experiments on the self-built UFF dataset, MDSYOLOv8 reaches a detection accuracy of 93.98% at 54 frames/s and can simultaneously identify the main targets, smoke and flame, in both fire scenarios. Compared with mainstream object detection methods, it shows clear advantages in detection accuracy and efficiency and is well suited to fire detection in aerial scenarios.
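As a side note on the real-time claim, frames-per-second figures such as the 54 FPS above are typically measured by timing repeated forward passes after a warm-up phase. The sketch below shows the pattern with a small stand-in model, not MDSYOLOv8; input size and iteration counts are illustrative.

```python
# Hedged sketch of a simple FPS benchmark (PyTorch).
import time
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU()).eval()
x = torch.randn(1, 3, 640, 640)  # one 640x640 RGB frame

with torch.no_grad():
    for _ in range(10):          # warm-up so caches/allocations settle
        model(x)
    n = 100
    start = time.perf_counter()
    for _ in range(n):           # timed inference runs
        model(x)
    fps = n / (time.perf_counter() - start)

print(f"{fps:.1f} FPS")
```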