Abstract: In recent years, advances in autonomous vehicle technology have accelerated, promising safer and more efficient transportation systems. However, fully autonomous driving in adverse weather, particularly in snowy environments, remains an open challenge. Snow-covered roads introduce unpredictable surface conditions, occlusions, and reduced visibility, all of which demand robust and adaptive path detection algorithms. This paper presents an enhanced road detection framework for snowy environments, leveraging the Simple Framework for Contrastive Learning of Visual Representations (SimCLR) for self-supervised pretraining, hyperparameter optimization, and uncertainty-aware object detection to improve the performance of You Only Look Once version 8 (YOLOv8). The model is trained and evaluated on a custom-built dataset collected from snowy roads in Tromsø, Norway, covering a range of snow textures, illumination conditions, and road geometries. The proposed framework achieves an mAP@50 of 99% and an mAP@50–95 of 97%, demonstrating the effectiveness of YOLOv8 for real-time road detection in extreme winter conditions. The findings contribute to the safe and reliable deployment of autonomous vehicles in Arctic environments, enabling robust decision-making in hazardous weather. This research lays the groundwork for more resilient perception models in self-driving systems, paving the way for intelligent and adaptive transportation networks.
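At the core of SimCLR pretraining is the NT-Xent contrastive loss, which pulls together embeddings of two augmented views of the same image and pushes apart all other pairs in the batch. The following is a minimal PyTorch sketch of that loss, not the paper's implementation; the temperature, batch size, and embedding dimension are illustrative assumptions.

```python
# Minimal sketch of the NT-Xent loss used in SimCLR pretraining.
# Hyperparameters (temperature, batch size, embedding dim) are assumptions.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """z1, z2: (N, D) projections of two augmentations of the same N images."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D), unit norm
    sim = z @ z.t() / temperature                       # (2N, 2N) cosine similarities
    sim.fill_diagonal_(float("-inf"))                   # exclude self-similarity
    # The positive for sample i is its other augmented view at index i +/- N.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Usage with dummy embeddings from a backbone + projection head:
z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
loss = nt_xent_loss(z1, z2)
```

After pretraining with this objective, the backbone weights would be transferred to the detector and fine-tuned on the labeled road dataset.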
Funding: Supported by the Industry-University-Research Innovation Fund for Chinese Universities-New Generation Information Technology Innovation Program (2023IT020) and the Education & Teaching Reform Program of Guangdong Ocean University (PX-972024121).
Abstract: To address insufficient feature extraction for small targets, complex image backgrounds, and low detection accuracy in marine life detection, this paper proposes SGW-YOLOv8, a marine life detection algorithm based on an improved YOLOv8. First, the Adaptive Fine-Grained Channel Attention (FCA) module is fused into the backbone layer of the YOLOv8 network to improve the model's feature extraction ability. Second, the C2f module in the Neck layer is replaced with the Efficient Multi-Scale Attention (C2f_EMA) module to improve detection performance on small underwater targets. Finally, the original loss function is replaced with Weighted Intersection over Union (WIoU) so that the model better adapts to target detection against complex ocean backgrounds. Experiments on the Underwater Robot Picking Contest (URPC) dataset show that the improved algorithm achieves a detection accuracy of 84.5%, 2.3% higher than before the improvement, while accurately detecting small-target marine organisms and adapting to detection in a variety of complex environments.
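For context, WIoU scales the standard IoU loss by a distance-based focusing term computed from the smallest enclosing box of the two boxes. Below is a minimal PyTorch sketch of the v1 formulation from the Wise-IoU paper, not the authors' code; the corner-format box layout and epsilon are assumptions.

```python
# Minimal sketch of Wise-IoU v1; box format (x1, y1, x2, y2) is an assumption.
import torch

def wiou_v1(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """pred, target: (N, 4) boxes as (x1, y1, x2, y2)."""
    # Intersection-over-union
    lt = torch.max(pred[:, :2], target[:, :2])
    rb = torch.min(pred[:, 2:], target[:, 2:])
    wh = (rb - lt).clamp(min=0)
    inter = wh[:, 0] * wh[:, 1]
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + 1e-7)

    # Smallest enclosing box, for the distance-based focusing term
    enc_wh = torch.max(pred[:, 2:], target[:, 2:]) - torch.min(pred[:, :2], target[:, :2])

    # Center distance normalized by the enclosing-box diagonal; the denominator
    # is detached so the focusing term rescales gradients without being optimized.
    c_pred = (pred[:, :2] + pred[:, 2:]) / 2
    c_tgt = (target[:, :2] + target[:, 2:]) / 2
    dist2 = ((c_pred - c_tgt) ** 2).sum(dim=1)
    diag2 = (enc_wh ** 2).sum(dim=1).detach() + 1e-7
    r_wiou = torch.exp(dist2 / diag2)
    return (r_wiou * (1 - iou)).mean()
```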
Abstract: In response to the challenges of low detection accuracy and susceptibility to missed and false detections of small targets in unmanned aerial vehicle (UAV) aerial images, an improved UAV image target detection algorithm based on YOLOv8 was proposed in this study. To begin with, the CoordAtt attention mechanism was employed to enhance the feature extraction capability of the backbone network, thereby reducing background interference. Additionally, a BiFPN feature fusion network with an added small-object detection layer was used to enhance the model's ability to perceive small objects. Furthermore, a multi-level fusion module was designed to effectively integrate shallow and deep information, and an enhanced MPDIoU loss function further improved detection performance. Experimental results on the publicly available VisDrone2019 dataset showed that the improved model outperformed the YOLOv8 baseline, with mAP@0.5 improved by 20%, and that the improved method raised the model's detection accuracy for small targets.
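For reference, CoordAtt factorizes attention into two direction-aware 1-D encodings, one along height and one along width, so positional information survives the pooling step. The sketch below follows the published Coordinate Attention design in PyTorch; it is an illustrative reimplementation, and the reduction ratio and activation choice are assumptions rather than settings from this study.

```python
# Illustrative PyTorch reimplementation of Coordinate Attention (CoordAtt).
# Reduction ratio and Hardswish activation are assumptions.
import torch
import torch.nn as nn

class CoordAtt(nn.Module):
    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.Hardswish()
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        # Direction-aware pooling: (N, C, H, 1) and (N, C, W, 1)
        x_h = x.mean(dim=3, keepdim=True)                      # pool over width
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)  # pool over height
        # Shared 1x1 conv over the concatenated encodings, then split back
        y = self.act(self.bn(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = self.conv_h(y_h).sigmoid()                       # (N, C, H, 1)
        a_w = self.conv_w(y_w.permute(0, 1, 3, 2)).sigmoid()   # (N, C, 1, W)
        return x * a_h * a_w                                   # reweight features
```

In an improved-YOLOv8 setup of this kind, such a module would typically be inserted after selected backbone stages to suppress background responses before feature fusion.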
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 62101275 and 62101274).
Abstract: UAV-based object detection is rapidly expanding in both civilian and military applications, including security surveillance, disaster assessment, and border patrol. However, challenges such as small objects, occlusions, complex backgrounds, and variable lighting persist due to the unique perspective of UAV imagery. To address these issues, this paper introduces DAFPN-YOLO, an innovative model based on YOLOv8s (You Only Look Once version 8s). The model strikes a balance between detection accuracy and speed while reducing parameters, making it well-suited for multi-object detection tasks from drone perspectives. A key feature of DAFPN-YOLO is the enhanced Drone-AFPN (Adaptive Feature Pyramid Network), which adaptively fuses multi-scale features to optimize feature extraction and enhance spatial and small-object information. To fully leverage Drone-AFPN's multi-scale capabilities, a dedicated 160×160 small-object detection head was added, significantly boosting detection accuracy for small targets. In the backbone, the C2f_Dual (Cross Stage Partial with Cross-Stage Feature Fusion Dual) module and the SPPELAN (Spatial Pyramid Pooling with Enhanced Local Attention Network) module were integrated. These components improve feature extraction and information aggregation while reducing parameters and computational complexity, enhancing inference efficiency. Additionally, Shape-IoU (Shape Intersection over Union) is used as the loss function for bounding box regression, enabling more precise shape-based object matching. Experimental results on the VisDrone 2019 dataset demonstrate the effectiveness of DAFPN-YOLO. Compared to YOLOv8s, the proposed model achieves a 5.4 percentage point increase in mAP@0.5, a 3.8 percentage point improvement in mAP@0.5:0.95, and a 17.2% reduction in parameter count. These results highlight DAFPN-YOLO's advantages in UAV-based object detection, offering valuable insights for applying deep learning to UAV-specific multi-object detection tasks.
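As background, Shape-IoU augments the IoU loss with a center-distance term and a shape-discrepancy term, both weighted by the ground-truth box's aspect ratio so that regression focuses on the shape and scale of the target itself. A minimal PyTorch sketch of the published formulation follows; it is not the DAFPN-YOLO implementation, and the corner box format and `scale` default are assumptions.

```python
# Minimal sketch of the Shape-IoU loss; box format and scale are assumptions.
import torch

def shape_iou_loss(pred: torch.Tensor, target: torch.Tensor, scale: float = 0.0) -> torch.Tensor:
    """pred, target: (N, 4) boxes as (x1, y1, x2, y2)."""
    eps = 1e-7
    w_p, h_p = pred[:, 2] - pred[:, 0], pred[:, 3] - pred[:, 1]
    w_t, h_t = target[:, 2] - target[:, 0], target[:, 3] - target[:, 1]

    # Plain IoU
    lt = torch.max(pred[:, :2], target[:, :2])
    rb = torch.min(pred[:, 2:], target[:, 2:])
    wh = (rb - lt).clamp(min=0)
    inter = wh[:, 0] * wh[:, 1]
    iou = inter / (w_p * h_p + w_t * h_t - inter + eps)

    # Shape weights derived from the ground-truth aspect ratio
    ww = 2 * w_t.pow(scale) / (w_t.pow(scale) + h_t.pow(scale))
    hh = 2 * h_t.pow(scale) / (w_t.pow(scale) + h_t.pow(scale))

    # Shape-weighted center distance, normalized by the enclosing-box diagonal
    enc_wh = torch.max(pred[:, 2:], target[:, 2:]) - torch.min(pred[:, :2], target[:, :2])
    c2 = enc_wh.pow(2).sum(dim=1) + eps
    c_p = (pred[:, :2] + pred[:, 2:]) / 2
    c_t = (target[:, :2] + target[:, 2:]) / 2
    dist = hh * (c_p[:, 0] - c_t[:, 0]).pow(2) / c2 + ww * (c_p[:, 1] - c_t[:, 1]).pow(2) / c2

    # Shape-discrepancy penalty on width/height mismatch
    omega_w = hh * (w_p - w_t).abs() / torch.max(w_p, w_t)
    omega_h = ww * (h_p - h_t).abs() / torch.max(h_p, h_t)
    shape = (1 - torch.exp(-omega_w)).pow(4) + (1 - torch.exp(-omega_h)).pow(4)

    return (1 - iou + dist + 0.5 * shape).mean()
```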