To address the low detection accuracy, and the resulting false and missed detections, of mechanical external-damage hazard targets under complex backgrounds, large scale variation, and partial occlusion, this paper proposes a detection algorithm based on an improved YOLOv7 (You Only Look Once version 7). A Swin Transformer attention mechanism is added to the detection-head network to strengthen multi-scale feature extraction; some convolution modules in the backbone are replaced with depthwise separable convolutions to reduce computational cost; the Focal-EIOU (Focal and Efficient Intersection over Union) loss is adopted to refine the predicted boxes; and the Mish activation function is introduced to enhance the network's generalization ability, improving detection performance under complex backgrounds and partial occlusion. Experimental results show that, compared with the original YOLOv7, the improved algorithm raises precision, recall, and mean average precision by 5.2%, 10.6%, and 5.2%, respectively, and holds a clear advantage over other mainstream algorithms in detection accuracy and model size. This validates the effectiveness of the improvements and provides algorithmic support for edge-side recognition of mechanical external-damage hazard targets in complex scenes.
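Two of the modifications above reduce to simple closed forms. The Mish activation is x·tanh(softplus(x)), and the cost saving of a depthwise separable convolution follows directly from its parameter count. A minimal pure-Python sketch for illustration (not the paper's code; the channel/kernel sizes below are arbitrary examples):

```python
import math

def softplus(x: float) -> float:
    # numerically stable softplus: log(1 + e^x)
    return math.log1p(math.exp(-abs(x))) + max(x, 0.0)

def mish(x: float) -> float:
    # Mish activation: x * tanh(softplus(x)); smooth and non-monotonic,
    # often credited with better generalization than ReLU in detectors
    return x * math.tanh(softplus(x))

def conv_params(c_in: int, c_out: int, k: int) -> tuple[int, int]:
    # parameter counts (bias ignored) for a standard k x k convolution
    # versus a depthwise (k x k per channel) + pointwise (1 x 1) pair
    standard = k * k * c_in * c_out
    separable = k * k * c_in + c_in * c_out
    return standard, separable

std, sep = conv_params(256, 256, 3)
print(mish(0.0))   # 0.0 -- Mish passes through the origin
print(std, sep)    # 589824 67840: roughly 8.7x fewer parameters
```

The parameter-count arithmetic makes the "reduced computational cost" claim concrete: the separable form replaces a multiplicative k²·c_in·c_out term with an additive one.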
With the increasing deployment of surveillance cameras, vehicle re-identification (Re-ID) has attracted growing attention in the field of public security. Vehicle Re-ID is challenging because of the large intra-class differences caused by the different views of a vehicle as it travels, and the strong inter-class similarities caused by similar appearances. Many existing methods focus on local attributes by marking local locations; however, these methods require additional annotations, resulting in complex algorithms and excessive computation time. To cope with these challenges, this paper proposes a vehicle Re-ID model based on an optimized DenseNet121 with a joint loss. The model applies the SE block to automatically obtain the importance of each channel feature and assign it a corresponding weight; features are then transferred to the deeper layers with these weights, which reduces the transmission of redundant information during feature reuse in DenseNet121. At the same time, the model leverages the complementary expressive power of the CNN's intermediate features to enhance feature expression. Additionally, a joint loss combining focal loss and triplet loss is proposed for vehicle Re-ID to strengthen the model's ability to discriminate hard-to-separate samples by enlarging their weight during training. Experimental results on the VeRi-776 dataset show that mAP and Rank-1 reach 75.5% and 94.8%, respectively. Moreover, Rank-1 on the small, medium, and large sub-datasets of the VehicleID dataset reaches 81.3%, 78.9%, and 76.5%, respectively, surpassing most existing vehicle Re-ID methods.
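The joint loss described above combines a focal term, which down-weights easy samples, with a triplet term, which enforces an embedding margin between identities. A minimal sketch under assumed hyperparameters (alpha, gamma, margin, and the balance weight lam are illustrative choices, not values from the paper):

```python
import math

def focal_loss(p: float, y: int, alpha: float = 0.25, gamma: float = 2.0) -> float:
    # binary focal loss: (1 - p_t)^gamma shrinks the loss of easy,
    # confident samples so training focuses on hard-to-separate ones;
    # p is the predicted probability of class 1, y the true label
    p_t = p if y == 1 else 1.0 - p
    a_t = alpha if y == 1 else 1.0 - alpha
    return -a_t * (1.0 - p_t) ** gamma * math.log(p_t)

def triplet_loss(anchor, positive, negative, margin: float = 0.3) -> float:
    # hinge on Euclidean distances: pull same-identity embeddings
    # together, push different-identity ones at least `margin` apart
    dist = lambda u, v: math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return max(dist(anchor, positive) - dist(anchor, negative) + margin, 0.0)

def joint_loss(p, y, anchor, positive, negative, lam: float = 1.0) -> float:
    # weighted sum of the two terms; lam is a hypothetical balance weight
    return focal_loss(p, y) + lam * triplet_loss(anchor, positive, negative)

# an easy, confident prediction incurs far less focal loss than a hard one
print(focal_loss(0.9, 1) < focal_loss(0.3, 1))          # True
print(triplet_loss([0, 0], [1, 0], [3, 0]))              # 0.0: already separated
```

The focal factor is what "enlarges the weight of the difficult-to-separate samples": a sample predicted at p = 0.3 for its true class contributes several hundred times more loss than one predicted at p = 0.9.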
Funding: supported in part by the National Natural Science Foundation of China under Grant Numbers 61502240, 61502096, 61304205, and 61773219; in part by the Natural Science Foundation of Jiangsu Province under Grant Numbers BK20201136 and BK20191401; and in part by the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD) fund.