Abstract: To address image blur caused by UAV motion and camera defocus during real-time acquisition and detection of citrus pest and disease images, an efficient deblurring algorithm is proposed: a deblurring preprocessing stage is added before the object detection algorithm to improve image clarity and enhance detection accuracy and robustness. This study adopts a lightweight FPN-MobileNetv3-small structure in the DeblurGAN-v2 backbone and introduces the SKNet (Selective Kernel Networks) attention mechanism to adaptively select convolution kernel sizes, achieving lightweight and efficient deblurring. In addition, Self-Calibrated Convolutions are used to dynamically adjust the convolutional field of view and enrich convolutional representations, addressing the loss of fine detail and unsatisfactory feature fusion that commonly occur during deblurring. Experimental results show that, compared with the original model, the improved model's Peak Signal-to-Noise Ratio (PSNR) increases by 3.25 dB and its Structural Similarity (SSIM) index increases by 9.26%; the model size is 16.4 MB and the processing speed is 41.7 FPS. When the YOLOv8 model is used for object detection, precision (P) and mean Average Precision (mAP) improve by 3.8 and 1.8 percentage points, respectively, with no significant drop in recall, verifying the effectiveness of the deblurring algorithm. This study provides higher-quality images for citrus pest and disease detection and is of practical significance for precision agriculture and for improving the economic value of agricultural products.
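The PSNR gain reported above can be made concrete. PSNR compares a reference image with a distorted one via their mean squared error on a decibel scale; a minimal NumPy sketch (not the paper's evaluation code; the toy images are illustrative only):

```python
import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray, max_val: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio in dB between two images of equal shape."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: no noise at all
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy 8-bit "images": a uniform offset of 10 gray levels gives MSE = 100.
ref = np.zeros((4, 4), dtype=np.uint8)
dist = np.full((4, 4), 10, dtype=np.uint8)
print(round(psnr(ref, dist), 2))  # ≈ 28.13 dB
```

Because the scale is logarithmic, the paper's +3.25 dB corresponds to roughly halving the mean squared error of the restored image.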
Abstract: The YOLOv10 algorithm is improved to suit the characteristics of distribution network inspection scenes. MobileNetV3 combined with CSPSPPF (a module integrating cross-stage partial connections with fast spatial pyramid pooling) is adopted as the backbone, balancing lightweight design with small-object feature extraction; a bidirectional feature pyramid network is introduced to optimize multi-scale feature fusion and improve defect localization accuracy in occluded scenes; and CIoU Loss (Complete IoU loss) is combined with Focal Loss (a loss function designed for class imbalance) into a hybrid loss function to address the class imbalance among defect samples. Experimental results show that the improved YOLOv10 algorithm achieves better overall performance and can meet the requirements of UAV inspection and edge deployment for distribution networks.
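The Focal Loss component of the hybrid loss above down-weights easy, well-classified examples so that rare defect classes dominate the gradient. A small NumPy sketch of the binary form (the probabilities and labels are made-up toy values, not the paper's data):

```python
import numpy as np

def focal_loss(p: np.ndarray, y: np.ndarray, alpha: float = 0.25, gamma: float = 2.0) -> float:
    """Binary focal loss: -alpha_t * (1 - p_t)^gamma * log(p_t), averaged.
    p: predicted probabilities of the positive class; y: 0/1 labels."""
    p = np.clip(p, 1e-7, 1 - 1e-7)               # numerical safety for log
    p_t = np.where(y == 1, p, 1 - p)             # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1 - alpha) # class-balancing weight
    return float(np.mean(-alpha_t * (1 - p_t) ** gamma * np.log(p_t)))

p = np.array([0.9, 0.1, 0.6])
y = np.array([1, 0, 1])
# With gamma = 0 and alpha = 0.5 the focal loss reduces to half the plain cross-entropy.
ce = float(np.mean(-(y * np.log(p) + (1 - y) * np.log(1 - p))))
assert abs(focal_loss(p, y, alpha=0.5, gamma=0.0) - 0.5 * ce) < 1e-9
```

Raising gamma shrinks the contribution of confident predictions (large p_t), which is what lets the few hard defect samples drive training.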
Abstract: To address the difficulty of balancing model size and accuracy for apple detection in complex orchard environments, this paper proposes the YOLOv8n-Mob (You Only Look Once Version 8n-MobileNetV3) model based on YOLOv8n. The model uses MobileNetV3 (Mobile Network Version 3) as a lightweight backbone and combines the Squeeze-and-Excitation (SE) channel attention mechanism with the Convolutional Block Attention Module (CBAM), effectively reducing computational complexity; the Path Aggregation Network-Feature Pyramid Network (PAN-FPN) is optimized in the neck, and a circle-aware Intersection over Union (IoU) loss function is introduced in the detection head, improving detection accuracy through coordinated multi-module optimization. In experiments, the model achieves a parameter size of 0.9 MB, 2.6 G Floating Point Operations (FLOPs), a mean Average Precision at IoU 0.5 (mAP50) of 78.6%, and a frame rate of 625 FPS. Both comparison and ablation experiments show that YOLOv8n-Mob significantly reduces parameter count and computation while maintaining high detection accuracy, making it well suited to deployment in complex orchard scenes.
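The SE channel attention used above works by squeezing each channel to a scalar, passing the result through a small bottleneck, and rescaling channels by learned gates in (0, 1). A NumPy sketch with random stand-in weights (in a real model w1 and w2 are trained):

```python
import numpy as np

rng = np.random.default_rng(0)

def se_block(x: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """Squeeze-and-Excitation on a (C, H, W) feature map.
    Squeeze: global average pool; Excite: FC -> ReLU -> FC -> sigmoid; then rescale."""
    z = x.mean(axis=(1, 2))                  # squeeze: (C,)
    s = np.maximum(w1 @ z, 0.0)              # FC reduce + ReLU: (C // r,)
    g = 1.0 / (1.0 + np.exp(-(w2 @ s)))      # FC expand + sigmoid: gates in (0, 1)
    return x * g[:, None, None]              # channel-wise reweighting

C, r = 8, 4                                  # toy channel count and reduction ratio
x = rng.standard_normal((C, 6, 6))
w1 = rng.standard_normal((C // r, C)) * 0.1  # stand-in weights, trained in practice
w2 = rng.standard_normal((C, C // r)) * 0.1
out = se_block(x, w1, w2)
assert out.shape == x.shape
```

The bottleneck (reduction ratio r) is what keeps the attention cheap: it adds only 2·C²/r parameters per block, which fits the model's lightweight goal.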
Abstract: With the continuous development of artificial intelligence and computer vision technology, numerous deep learning-based lane line detection methods have emerged. DeepLabv3+, as a classic semantic segmentation model, has found widespread application in the field of lane line detection. However, the accuracy of lane line segmentation is often compromised by factors such as changes in lighting conditions, occlusions, and wear and tear on the lane lines. Additionally, DeepLabv3+ suffers from high memory consumption and challenges in deployment on embedded platforms. To address these issues, this paper proposes a lane line detection method for complex road scenes based on DeepLabv3+ and MobileNetV4 (MNv4). First, the lightweight MNv4 is adopted as the backbone network, and the standard convolutions in ASPP are replaced with depthwise separable convolutions. Second, a polarization attention mechanism is introduced after the ASPP module to enhance the model's generalization capability. Finally, the Simple Linear Iterative Clustering (SLIC) superpixel segmentation algorithm is employed to preserve lane line edge information. MNv4-DeepLabv3+ was tested on the TuSimple and CULane datasets. On the TuSimple dataset, the Mean Intersection over Union (MIoU) and Mean Pixel Accuracy (mPA) improved by 1.01% and 7.49%, respectively. On the CULane dataset, MIoU and mPA increased by 3.33% and 7.74%, respectively. The number of parameters decreased from 54.84 M to 3.19 M. Experimental results demonstrate that MNv4-DeepLabv3+ significantly optimizes model parameter count and enhances segmentation accuracy.
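The parameter saving from replacing standard convolutions with depthwise separable ones, as done in the ASPP module above, follows directly from the factorization: a k×k conv is split into a k×k per-channel (depthwise) conv plus a 1×1 (pointwise) conv. A small sketch of the bias-free parameter counts (the 256-channel layer is an illustrative assumption, not the paper's exact configuration):

```python
def conv_params(k: int, c_in: int, c_out: int) -> tuple[int, int]:
    """Bias-free parameter counts for a standard k x k convolution versus
    its depthwise-separable factorization (k x k depthwise + 1 x 1 pointwise)."""
    standard = k * k * c_in * c_out          # full conv: every filter sees all channels
    separable = k * k * c_in + c_in * c_out  # depthwise + pointwise
    return standard, separable

# An ASPP-style 3x3 convolution with 256 input and output channels:
std, sep = conv_params(3, 256, 256)
print(std, sep, round(std / sep, 1))  # 589824 67840 8.7
```

An order-of-magnitude reduction per layer, which is consistent in spirit with the overall parameter drop from 54.84 M to 3.19 M reported for MNv4-DeepLabv3+.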