Funding: supported by the Shanghai Science and Technology Innovation Action Plan High-Tech Field Project (Grant No. 22511100601) for the year 2022 and the Technology Development Fund for People's Livelihood Research (Research on Transmission Line Deep Foundation Pit Environmental Situation Awareness System Based on Multi-Source Data).
Abstract: To maintain the reliability of power systems, routine inspections using drones equipped with advanced object detection algorithms are essential for preempting power-related issues. The increasing resolution of drone-captured images has posed a challenge for traditional target detection methods, especially in identifying small objects in high-resolution images. This study presents an enhanced object detection algorithm based on the Faster Region-based Convolutional Neural Network (Faster R-CNN) framework, specifically tailored for detecting small-scale electrical components such as insulators, shock hammers, and screws on transmission lines. The algorithm features an improved backbone network for Faster R-CNN, which significantly boosts the feature extraction network's ability to detect fine details. The Region Proposal Network is optimized with a guided feature refinement (GFR) method, which achieves a balance between accuracy and speed. The incorporation of Generalized Intersection over Union (GIoU) and Region of Interest (RoI) Align further refines the model's accuracy. Experimental results demonstrate a notable improvement in mean Average Precision, reaching 89.3%, an 11.1% increase over the standard Faster R-CNN. This highlights the effectiveness of the proposed algorithm in identifying electrical components in high-resolution aerial images.
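The GIoU measure used above extends IoU with a penalty based on the smallest enclosing box, which keeps the signal informative even when boxes do not overlap. A minimal sketch for axis-aligned boxes (pure Python; function and argument names are illustrative, not from the paper):

```python
def giou(box_a, box_b):
    """Generalized IoU for axis-aligned boxes given as (x1, y1, x2, y2).

    GIoU = IoU - (enclosing-box area - union area) / enclosing-box area.
    It lies in (-1, 1] and, unlike plain IoU, still varies for
    non-overlapping boxes, which gives bounding-box regression a
    useful gradient. The training loss is typically 1 - GIoU.
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b

    # Intersection rectangle (zero area if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    iou = inter / union

    # Smallest axis-aligned box enclosing both inputs.
    cx1, cy1 = min(ax1, bx1), min(ay1, by1)
    cx2, cy2 = max(ax2, bx2), max(ay2, by2)
    enclose = (cx2 - cx1) * (cy2 - cy1)

    return iou - (enclose - union) / enclose
```

For identical boxes the value is 1; for two disjoint unit boxes inside a 3×3 enclosing box it is 0 − 7/9 ≈ −0.78, so the loss 1 − GIoU still pulls the prediction toward the target.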
Abstract: To address the low precision and the frequent missed and false detections of small-scale targets, an improved YOLOv3 (You Only Look Once version 3) small-object detection algorithm is proposed. In terms of network structure, to improve the backbone's feature extraction capability, the densely connected DenseNet-121 network replaces the original Darknet-53 as the backbone, and the convolution kernel sizes are modified to further reduce the loss of feature-map information; in addition, to strengthen the model's robustness to small-scale targets, a fourth feature detection layer of 104×104 pixels is added. In the feature-map fusion stage, bilinear interpolation replaces the original nearest-neighbor interpolation for upsampling, mitigating the severe feature loss present in most detection algorithms. For the loss function, Generalized Intersection over Union (GIoU) replaces Intersection over Union (IoU) in computing the bounding-box loss, and the Focal Loss is introduced as the bounding-box confidence loss. Experimental results show that the improved algorithm achieves a mean Average Precision (mAP) of 63.3% on the VisDrone2019 dataset, 13.2 percentage points higher than the original YOLOv3 model, and runs at 52 frames/s on a GTX 1080 Ti, demonstrating good detection performance on small targets.
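The switch from nearest-neighbor to bilinear upsampling described above can be illustrated with a small pure-Python 2-D sketch (align-corners convention; illustrative only, not the paper's implementation). Each output value is a linear blend of the four nearest input samples instead of a blocky copy of one:

```python
def bilinear_upsample(grid, out_h, out_w):
    """Bilinearly resample a 2-D list of floats to (out_h, out_w).

    Align-corners convention: the corner samples of the input map
    exactly to the corners of the output, and interior values are
    linear blends of the four nearest input samples. Compared with
    nearest-neighbor copying, this preserves smooth variation in the
    feature map rather than duplicating blocks.
    """
    in_h, in_w = len(grid), len(grid[0])
    sy = (in_h - 1) / (out_h - 1)
    sx = (in_w - 1) / (out_w - 1)
    out = []
    for i in range(out_h):
        y = i * sy
        y0 = min(int(y), in_h - 2)   # top source row
        fy = y - y0                  # vertical blend weight
        row = []
        for j in range(out_w):
            x = j * sx
            x0 = min(int(x), in_w - 2)   # left source column
            fx = x - x0                  # horizontal blend weight
            top = grid[y0][x0] * (1 - fx) + grid[y0][x0 + 1] * fx
            bot = grid[y0 + 1][x0] * (1 - fx) + grid[y0 + 1][x0 + 1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out
```

Upsampling `[[0, 3], [6, 9]]` to 4×4 keeps the four corners unchanged and fills the interior with evenly spaced intermediate values, which is exactly the information a nearest-neighbor copy would discard.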
Abstract: Road traffic safety can decrease when drivers drive in a low-visibility environment. Applying visual perception technology to detect vehicles and pedestrians in infrared images is an effective means of reducing the risk of accidents. To tackle the low recognition accuracy and substantial computational burden of current infrared pedestrian-vehicle detection methods, an infrared pedestrian-vehicle detection method based on an enhanced You Only Look Once version 5 (YOLOv5) is proposed. First, a head specifically designed for detecting small targets is integrated into the model to make full use of shallow feature information and improve small-target accuracy. Second, the Focal Generalized Intersection over Union (GIoU) loss is employed as an alternative to the original loss function to address target overlap and category imbalance. Third, distribution shift convolution is used to optimize the feature extraction operator, alleviating the computational burden of the model without significantly compromising detection accuracy. Test results show that the improved algorithm reaches a mean Average Precision (mAP) of 90.1% while requiring only 9.1 Giga Floating Point Operations (GFLOPs). It outperforms other algorithms of similar cost, such as YOLOv6n (11.9 GFLOPs), YOLOv8n (8.7), YOLOv7t (13.2), and YOLOv5s (16.0): its mAP is 4.4%, 3%, 3.5%, and 1.7% higher, respectively, demonstrating higher accuracy under a similar computational resource overhead. On the other hand, compared with algorithms such as YOLOv8l (91.1% mAP), YOLOv6l (89.5%), YOLOv7 (90.8%), and YOLOv3 (90.1%), the improved algorithm needs only 5.5%, 2.3%, 8.6%, and 2.3% of their GFLOPs, respectively. The improved algorithm thus achieves a strong balance between accuracy and computational efficiency, making it promising for practical use in resource-limited scenarios.
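The Focal-GIoU loss mentioned above couples a GIoU regression term with a focal-style re-weighting. One common way to write such a coupling, following the IoU**gamma weighting popularized by Focal-EIoU (an assumption here, not necessarily the paper's exact formula), down-weights low-overlap, low-quality matches so regression concentrates on boxes that already roughly localize a target:

```python
def focal_giou_loss(iou, giou, gamma=0.5):
    """Focal-style GIoU regression loss for one predicted/target pair.

    iou, giou -- precomputed IoU and GIoU of the pair
    gamma     -- focusing exponent; iou**gamma suppresses the
                 contribution of poorly matched (low-IoU) boxes

    NOTE: this IoU**gamma * (1 - GIoU) form follows the Focal-EIoU
    pattern and is an illustrative assumption, not the paper's
    published definition.
    """
    return (iou ** gamma) * (1.0 - giou)
```

A perfectly matched box (IoU = GIoU = 1) contributes zero loss, while a moderately overlapping box keeps a scaled-down version of the plain 1 − GIoU penalty.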
Abstract: To address the low detection precision and missed detections in real-time ship detection, an improved ship detection algorithm based on YOLOv3-Tiny is proposed. Depthwise separable convolution is introduced into the backbone network, increasing the number of channels while reducing the model's parameter count and computation; the H-Swish and Leaky ReLU activation functions are used to improve the convolution structure and extract more feature information; and GIOU (Generalized Intersection Over Union) loss is used to optimize the bounding boxes, emphasizing overlap with the target region and improving precision. Detection results on a mixed ship dataset show that the improved YOLOv3-Tiny achieves a detection precision of 83.40%, 5.33 percentage points higher than the original algorithm, with recall and detection speed also exceeding the original, making it suitable for real-time ship detection.
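The parameter savings behind the depthwise separable convolution used above are easy to quantify: a standard k×k convolution needs k·k·C_in·C_out weights, while the depthwise + pointwise factorization needs only k·k·C_in + C_in·C_out. A quick sketch with hypothetical layer sizes (not the paper's actual configuration):

```python
def conv_params(k, c_in, c_out):
    """Weight count of a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    """Depthwise k x k conv (k*k weights per input channel) followed
    by a 1 x 1 pointwise conv that mixes channels."""
    return k * k * c_in + c_in * c_out

# Example: a 3x3 layer mapping 64 channels to 128.
std = conv_params(3, 64, 128)            # 73728 weights
sep = separable_conv_params(3, 64, 128)  # 576 + 8192 = 8768 weights
print(std, sep, round(std / sep, 1))     # roughly an 8x reduction
```

The same factorization also cuts multiply-accumulate operations by about the same ratio, which is what makes it attractive for a lightweight real-time detector like YOLOv3-Tiny.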
Funding: supported by the Natural Science Foundation of Gansu Province (No. 20JR10RA216).
Abstract: The contact network dropper works in a harsh environment and suffers from the impact of pantographs as trains run, which may lead to faults such as slackening or breakage of the dropper wire and breakage of the current-carrying ring. To address the low intelligence and poor accuracy of existing dropper fault detection networks, an improved fully convolutional one-stage (FCOS) object detection network was proposed to improve detection of the dropper's condition. Firstly, by adjusting the parameter α in the network's focal loss function, the imbalance between positive and negative samples during training was alleviated. Secondly, the generalized intersection over union (GIoU) calculation was introduced to enhance the network's ability to recognize the relative spatial positions of the predicted box and the ground-truth box during regression. Finally, the improved network was used to detect the status of droppers in images. The detection speed reached 150 images per second, and the mAP over the different status classes was 0.9512. Simulation comparisons with other object detection networks showed that the improved FCOS network has advantages in both detection time and accuracy and can identify the dropper state accurately.
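The α tuned above is the class-balancing weight of the focal loss, FL(p_t) = −α_t (1 − p_t)^γ log(p_t): raising α boosts the rare positive class while (1 − p_t)^γ shrinks the loss of already well-classified samples. A minimal binary sketch (illustrative default values, not the paper's settings):

```python
import math

def focal_loss(p, is_positive, alpha=0.25, gamma=2.0):
    """Binary focal loss for a single prediction.

    p           -- predicted probability of the positive class
    is_positive -- True if the ground-truth label is positive
    alpha       -- weight on positive samples (negatives get 1 - alpha),
                   the knob used to counter positive/negative imbalance
    gamma       -- focusing exponent: (1 - p_t)**gamma down-weights
                   easy samples so training concentrates on hard ones
    """
    p_t = p if is_positive else 1.0 - p
    alpha_t = alpha if is_positive else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)
```

With γ = 0 and α = 1 this reduces to ordinary cross-entropy; with the defaults, a confident correct prediction (p_t = 0.9) contributes orders of magnitude less loss than a hard one (p_t = 0.1), which is what lets the abundant easy negatives stop dominating training.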