Abstract: With the continuous development of artificial intelligence and computer vision technology, numerous deep learning-based lane line detection methods have emerged. DeepLabv3+, as a classic semantic segmentation model, has found widespread application in the field of lane line detection. However, the accuracy of lane line segmentation is often compromised by factors such as changes in lighting conditions, occlusions, and wear on the lane lines. Additionally, DeepLabv3+ suffers from high memory consumption and is difficult to deploy on embedded platforms. To address these issues, this paper proposes a lane line detection method for complex road scenes based on DeepLabv3+ and MobileNetV4 (MNv4). First, the lightweight MNv4 is adopted as the backbone network, and the standard convolutions in ASPP are replaced with depthwise separable convolutions. Second, a polarized attention mechanism is introduced after the ASPP module to enhance the model's generalization capability. Finally, the Simple Linear Iterative Clustering (SLIC) superpixel segmentation algorithm is employed to preserve lane line edge information. MNv4-DeepLabv3+ was tested on the TuSimple and CULane datasets. On the TuSimple dataset, the Mean Intersection over Union (MIoU) and Mean Pixel Accuracy (mPA) improved by 1.01% and 7.49%, respectively. On the CULane dataset, MIoU and mPA increased by 3.33% and 7.74%, respectively. The number of parameters decreased from 54.84 M to 3.19 M. Experimental results demonstrate that MNv4-DeepLabv3+ significantly reduces the model parameter count while enhancing segmentation accuracy.
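The parameter savings from swapping the standard ASPP convolutions for depthwise separable ones can be sketched with simple counting arithmetic. This is an illustrative calculation, not the paper's code; the 256-channel branch width is an assumption (it is the usual DeepLabv3+ ASPP setting, but the abstract does not state the exact width used).

```python
# Parameter count of a standard k x k convolution versus a depthwise
# separable one (depthwise k x k followed by a pointwise 1x1), bias omitted.

def standard_conv_params(k, c_in, c_out):
    """k x k kernel applied across all input channels for each output channel."""
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    """Depthwise stage: one k x k kernel per input channel.
    Pointwise stage: a 1x1 convolution that mixes channels."""
    return k * k * c_in + c_in * c_out

# Assumed ASPP branch configuration: 3x3 kernel, 256 -> 256 channels.
k, c_in, c_out = 3, 256, 256
std = standard_conv_params(k, c_in, c_out)   # 9 * 256 * 256 = 589824
sep = separable_conv_params(k, c_in, c_out)  # 9 * 256 + 256 * 256 = 67840
print(std, sep, round(sep / std, 3))
```

Under these assumptions the separable branch uses roughly 11.5% of the standard branch's parameters, which is consistent in spirit with the overall drop from 54.84 M to 3.19 M reported above.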
Abstract: [Objective/Significance] Under natural-environment interference, detection models extract features of pepper leaf diseases and pests insufficiently, tend to overlook the edge information of target objects, and easily miss small lesion patches and pest foci. To address these problems, this study proposes a lightweight pepper leaf disease detection algorithm, YOLO-MDFR (You Only Look Once Version 12-MDFR). [Method] Improvements are made on the YOLOv12s model. First, one 5×5 depthwise separable convolution is replaced with two stacked 3×3 depthwise separable convolutions to improve MobileNetV4, which then replaces the original YOLOv12s backbone to make the backbone lightweight. Second, to improve feature extraction for small targets, a multi-dimensional frequency-domain complementary self-attention module (Dimensional Frequency Reciprocal Attention Mixing Transformer, D-F-Ramit) is proposed. Finally, D-F-Ramit and RAGConv (Residual Aggregation Gate-Controlled Convolution) are used to redesign the neck network, strengthening the model's feature fusion and information transfer capabilities. Based on these improvements, the YOLO-MDFR object detection algorithm is proposed. [Results and Discussion] Experimental results show that the proposed YOLO-MDFR model achieves a mean average precision of 95.6% on the experimental dataset, an improvement of 2.0% over the YOLOv12s model, while the parameter count drops by 61.5%, the computational cost drops by 68.5%, and the frame rate reaches 43.4 frames/s. [Conclusion] Through systematic architectural optimization, this study significantly improves detection performance while keeping the model lightweight, achieving a strong balance between computational efficiency and detection accuracy.
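The backbone modification above rests on a classic equivalence: two stacked 3×3 convolutions (stride 1) cover the same 5×5 receptive field as a single 5×5 convolution, with fewer per-channel kernel parameters at the depthwise stage plus an extra nonlinearity. The sketch below verifies this arithmetic; it is a back-of-the-envelope check, not the authors' implementation, and compares only the depthwise kernels (the channel count C = 64 is purely illustrative, as the abstract does not state one).

```python
# Receptive field and depthwise-kernel parameter comparison:
# one 5x5 depthwise convolution vs. two stacked 3x3 depthwise convolutions.

def stacked_receptive_field(kernel_sizes):
    """Receptive field of stride-1 convolutions applied in sequence."""
    rf = 1
    for k in kernel_sizes:
        rf += k - 1
    return rf

def depthwise_kernel_params(k, channels):
    """One k x k kernel per channel (depthwise stage only, bias omitted)."""
    return k * k * channels

C = 64  # illustrative channel count

rf_one_5x5 = stacked_receptive_field([5])      # 5
rf_two_3x3 = stacked_receptive_field([3, 3])   # 3 + (3 - 1) = 5: identical coverage

p_one_5x5 = depthwise_kernel_params(5, C)      # 25 * C = 1600
p_two_3x3 = 2 * depthwise_kernel_params(3, C)  # 18 * C = 1152

print(rf_one_5x5, rf_two_3x3, p_one_5x5, p_two_3x3)
```

Note that if each 3×3 separable block carries its own pointwise 1×1 stage, total parameters can grow; the savings shown here apply to the depthwise kernels themselves, which is presumably where the replacement in the modified MobileNetV4 takes effect.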