Traffic sign detection is an important component of autonomous driving systems and driver assistance systems (DAS), and it is of great significance for driving safety. To address the low detection accuracy and high missed-detection rate caused by illumination, adverse weather, and other factors in small-object traffic sign detection, a small-object traffic sign detection algorithm based on an improved YOLOv5 is proposed. First, Space-to-Depth Convolution (SPD-Conv) is introduced to downsample the feature maps, effectively avoiding the loss of small-object information and improving sensitivity to small objects. Second, the neck network is improved based on the weighted Bidirectional Feature Pyramid Network (BiFPN), adding cross-layer connections to fuse multi-scale features. Then, a small-object detection layer is added to strengthen small-object detection. Finally, the SIoU (Shape-aware Intersection over Union) loss function is adopted, which takes the angle between the ground-truth and predicted boxes into account. Experimental results show that the improved algorithm achieves a mean average precision (mAP) of 83.5% on the Chinese traffic sign detection dataset CCTSDB2021, 7.2 percentage points higher than the original YOLOv5, while its detection speed meets real-time requirements.
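The SPD-Conv downsampling described above replaces strided convolution or pooling with a lossless space-to-depth rearrangement followed by a non-strided convolution. A minimal NumPy sketch of the space-to-depth step (scale 2, illustrative only; the full module also applies a convolution afterwards):

```python
import numpy as np

def space_to_depth(x: np.ndarray, scale: int = 2) -> np.ndarray:
    """Rearrange a (C, H, W) feature map into (C*scale^2, H/scale, W/scale).

    Every scale x scale spatial block is moved into the channel axis,
    so no information is discarded (unlike strided convolution or pooling).
    """
    c, h, w = x.shape
    assert h % scale == 0 and w % scale == 0
    x = x.reshape(c, h // scale, scale, w // scale, scale)
    # reorder axes to (block_row, block_col, channel, H/scale, W/scale)
    x = x.transpose(2, 4, 0, 1, 3)
    return x.reshape(c * scale * scale, h // scale, w // scale)

# a 1-channel 4x4 map becomes a 4-channel 2x2 map, keeping all 16 values
feat = np.arange(16, dtype=np.float32).reshape(1, 4, 4)
out = space_to_depth(feat)
print(out.shape)  # (4, 2, 2)
```

Because the rearrangement is a pure permutation of values, every pixel of the input survives into the output, which is exactly why it suits small objects better than a stride-2 convolution.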
In recent years, the number of patients with colon disease has increased significantly. Colon polyps are the precursor lesions of colon cancer; if not diagnosed in time, they can easily develop into colon cancer, posing a serious threat to patients' lives and health. Colonoscopy is an important means of detecting colon polyps. However, because polyps vary widely in size, shape, color, and type, traditional detection methods suffer from high false-positive rates, which complicates diagnosis for doctors. To improve the accuracy and efficiency of colon polyp detection, this paper proposes a network model for colon polyp detection (PD-YOLO). Building on YOLOv7, the method introduces the CBAM (Convolutional Block Attention Module) attention mechanism in the backbone, allowing the model to adaptively focus on key information and ignore unimportant parts. To improve polyp localization and bounding box regression, the SPD-Conv (Space-to-Depth Convolution) module is added to the neck, and deconvolution is used instead of upsampling. Experimental results indicate that PD-YOLO is robust in colon polyp detection: compared with the original YOLOv7 on the Kvasir-SEG dataset, PD-YOLO improves AP@0.5 by 5.44 percentage points and shows clear advantages over other mainstream methods.
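CBAM, as used above, applies channel attention (a shared MLP over average- and max-pooled descriptors) followed by spatial attention. A minimal NumPy sketch of the channel-attention half; the weights here are random placeholders rather than trained parameters, and the real module also has a 7x7-convolution spatial branch:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """CBAM-style channel attention for a (C, H, W) feature map.

    Average- and max-pooled channel descriptors pass through a shared
    two-layer MLP (w1, w2); their sum is squashed into per-channel gates
    in (0, 1) that rescale each channel of the input.
    """
    avg = x.mean(axis=(1, 2))                       # (C,) average descriptor
    mx = x.max(axis=(1, 2))                         # (C,) max descriptor
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)    # shared MLP with ReLU
    gate = sigmoid(mlp(avg) + mlp(mx))              # (C,) per-channel gates
    return x * gate[:, None, None]

rng = np.random.default_rng(0)
c, r = 8, 2                                   # channels, reduction ratio
x = rng.standard_normal((c, 6, 6))
w1 = rng.standard_normal((c // r, c)) * 0.1   # placeholder weights
w2 = rng.standard_normal((c, c // r)) * 0.1
y = channel_attention(x, w1, w2)
print(y.shape)  # (8, 6, 6)
```

Since every gate lies strictly between 0 and 1, the module can only attenuate channels, never amplify them, which is what lets the network downweight uninformative feature channels.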
In this study, we propose Space-to-Depth You Only Look Once Version 7 (SPD-YOLOv7), an accurate and efficient method for detecting pests in maize crops, addressing challenges such as small pest sizes, blurred images, low resolution, and significant species variation across growth stages. To improve the model's generalization and robustness, we incorporate target background analysis, data augmentation, and processing techniques such as Gaussian noise and brightness adjustment. In object detection, increasing the depth of the neural network can cause the loss of small-target information. To overcome this, we introduce the Space-to-Depth Convolution (SPD-Conv) module into the SPD-YOLOv7 framework, replacing certain convolutional layers in the original backbone and head networks; this helps retain small-target features and location information. Additionally, the Efficient Layer Aggregation Network-Wide (ELAN-W) module is combined with the Convolutional Block Attention Module (CBAM) attention mechanism to extract features more efficiently. Experimental results show that the enhanced model achieves an accuracy of 98.38% and an average accuracy of 99.4%, improvements of 2.46 and 3.19 percentage points over the original YOLOv7. These results indicate that the enhanced model is more efficient and better suited to real-time use, offering valuable insights for maize pest control.
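The Gaussian-noise and brightness augmentations mentioned above can be sketched as follows. The parameter values are illustrative assumptions (the abstract does not state them), and images are assumed normalized to [0, 1]:

```python
import numpy as np

def augment(img: np.ndarray, rng: np.random.Generator,
            noise_std: float = 0.05, max_brightness: float = 0.2) -> np.ndarray:
    """Add Gaussian pixel noise, then scale brightness by a random factor."""
    out = img + rng.normal(0.0, noise_std, size=img.shape)            # Gaussian noise
    out = out * (1.0 + rng.uniform(-max_brightness, max_brightness))  # brightness jitter
    return np.clip(out, 0.0, 1.0)  # keep pixel values in the valid range

rng = np.random.default_rng(42)
img = rng.uniform(0.0, 1.0, size=(3, 32, 32))  # stand-in for an RGB crop image
aug = augment(img, rng)
print(aug.shape)  # (3, 32, 32)
```

Applying such perturbations at training time exposes the detector to the blur, sensor noise, and lighting variation it will meet in field images, which is the robustness motivation given above.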
To address the problems of dynamic object detection in autonomous driving scenes, where detection speed struggles to meet real-time requirements and small or occluded targets cause insufficient accuracy with high false- and missed-detection rates, a pedestrian and vehicle detection method based on an improved YOLOv8 model is proposed. First, when the backbone extracts image features, an SPD-Conv module (a Space-to-Depth layer followed by a non-strided Convolution), which is well suited to low-resolution images and small-object detection, is used. Second, a Contextual Transformer (CoT) self-attention module is added during feature fusion in the neck to improve the model's feature representation. Finally, the SIoU loss is introduced to speed up convergence and improve accuracy. The method is evaluated on the KITTI dataset. Compared with the original YOLOv8, the proposed algorithm improves precision, recall, and average precision by 0.7%, 2.1%, and 2.1% respectively, while floating-point operations and frame rate increase by 3.6 GFLOPS and 24.64 frame/s respectively, demonstrating that the method jointly satisfies the real-time, accuracy, and low missed/false detection requirements of pedestrian and vehicle detection for autonomous vehicles.
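The SIoU loss adopted above augments overlap-based regression with an angle-aware term between box centers. A pure-Python sketch of the angle cost, based on our reading of the published SIoU formulation (center coordinates only; the full loss also has distance, shape, and IoU terms):

```python
import math

def siou_angle_cost(pred_center, gt_center):
    """Angle cost Lambda = 1 - 2*sin^2(arcsin(sin(alpha)) - pi/4),
    where alpha is the angle of the center-to-center line to the x-axis.
    It is ~0 when the centers are axis-aligned and peaks at 1 at 45 degrees,
    steering the prediction toward the nearest axis first.
    """
    dx = abs(gt_center[0] - pred_center[0])
    dy = abs(gt_center[1] - pred_center[1])
    sigma = math.hypot(dx, dy)  # distance between box centers
    if sigma == 0.0:
        return 0.0              # coincident centers: no angle penalty
    sin_alpha = dy / sigma
    return 1.0 - 2.0 * math.sin(math.asin(sin_alpha) - math.pi / 4.0) ** 2

print(siou_angle_cost((0, 0), (5, 0)))  # horizontally aligned: ~0
print(siou_angle_cost((0, 0), (3, 3)))  # 45-degree offset: ~1
```

Penalizing the 45-degree case hardest first drives the predicted center onto the nearest axis of the ground-truth box, which is the mechanism behind the faster convergence claimed above.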
Funding: funded by the Undergraduate Higher Education Teaching and Research Project (No. FBJY20230216), Research Projects of Putian University (No. 2023043), and the Education Department of the Fujian Province Project (No. JAT220300).