Improved picture quality is critical to the effectiveness of object recognition and tracking. The consistency of night-video imagery is degraded because the contrast between high-profile objects and different atmospheric conditions, such as mist, fog, and dust, varies; the pictures then shift in intensity, colour, polarity and consistency. A general challenge for computer vision analysis lies in the poor appearance of night images under arbitrary illumination and ambient environments. In recent years, with the exponential growth of computing power, target recognition techniques based on deep learning and machine learning have become standard algorithms for object detection. However, the identification of objects at night poses further problems because of the distorted backdrop and dim light. A correlation-aware LSTM-based YOLO (You Only Look Once) classifier method for exact object recognition and for determining object properties under night vision was the major motivation for this work. To create virtual target sets similar to daytime environments, we use night images as inputs, obtain an enhanced image using histogram-based enhancement, and remove noise with an iterative Wiener filter. Feature extraction and feature selection are performed to elect the potential features using adaptive internal linear embedding (AILE) and uplift linear discriminant analysis (ULDA). The region-of-interest mask is segmented using recurrent-phase level-set segmentation. Finally, we use deep convolutional feature fusion and region-of-interest pooling to integrate a fast long short-term memory (LSTM) network with the YOLO method for object tracking. A range of experimental findings demonstrate that our technique achieves high average accuracy, with a precision of 99.7% for object detection on the SSAN dataset, considerably higher than that of other standard object detection mechanisms. Our approach may therefore satisfy the true demands of night-scene target detection applications, and we believe it will help future research.
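The preprocessing chain described above (histogram-based enhancement followed by Wiener filtering) can be sketched as follows. This is a minimal single-channel numpy illustration under stated assumptions, not the authors' implementation: their filter is iterative, and the AILE/ULDA and segmentation stages are omitted entirely.

```python
import numpy as np

def histogram_equalize(img):
    """Global histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # Map intensities so the cumulative distribution becomes uniform.
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

def wiener_filter(img, ksize=3, noise_var=None):
    """Adaptive local Wiener filter using pixel-wise mean/variance estimates."""
    img = img.astype(np.float64)
    pad = ksize // 2
    padded = np.pad(img, pad, mode='reflect')
    # Local statistics via sliding windows over the padded image.
    win = np.lib.stride_tricks.sliding_window_view(padded, (ksize, ksize))
    mean = win.mean(axis=(-1, -2))
    var = win.var(axis=(-1, -2))
    if noise_var is None:
        noise_var = var.mean()  # crude noise-power estimate from the image
    gain = np.maximum(var - noise_var, 0) / np.maximum(var, 1e-12)
    return mean + gain * (img - mean)

rng = np.random.default_rng(0)
noisy = np.clip(rng.normal(100, 20, (64, 64)), 0, 255).astype(np.uint8)
enhanced = histogram_equalize(noisy)
denoised = wiener_filter(enhanced)
```

An iterative variant, as the abstract implies, would re-estimate the noise power and reapply the filter over several passes.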
Computer vision-based traffic object detection plays a critical role in road traffic safety. Under hazy weather conditions, images captured by road monitoring systems exhibit three main challenges: significant scale variations, abundant background noise, and diverse perspectives. These factors lead to insufficient detection accuracy and limited real-time performance in object detection algorithms. We propose AMC-YOLO, an improved YOLOv11-based traffic detection algorithm, to address these challenges. In this work, we replace the C3k block's bottleneck module with our novel attention-gate convolution (AGConv), which improves contextual information capture, enhances feature extraction, and reduces computational redundancy. Additionally, we introduce the multi-dilation sharing convolution (MDSC) module to prevent feature information loss during pooling operations, enhancing the model's sensitivity to multi-scale features. We design a lightweight and efficient cross-channel feature fusion module (CCFM) for the path-aggregation neck to adaptively adjust feature weights and optimize the model's overall performance. Experimental results demonstrate that AMC-YOLO achieves a 1.1% improvement in mAP@0.5 and a 2.7% increase in mAP@0.5:0.95 compared with YOLOv11n. On graphics processing unit (GPU) hardware, it achieves real-time performance at 376 frames per second (FPS) with only 2.6 million parameters, ensuring high-precision traffic detection while meeting deployment requirements on resource-constrained devices.
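The core idea behind a multi-dilation sharing convolution (one set of weights reused across several dilation rates so that multiple receptive-field sizes are covered without extra parameters) can be illustrated with a single-channel numpy sketch. The abstract does not specify the module's internals; the dilation rates and the averaging fusion below are illustrative assumptions.

```python
import numpy as np

def dilated_conv2d(x, k, d):
    """'Same'-padded 2D convolution with dilation rate d (single channel)."""
    kh, kw = k.shape
    # Effective kernel footprint grows with dilation: (k-1)*d + 1.
    eff_h, eff_w = (kh - 1) * d + 1, (kw - 1) * d + 1
    pad_h, pad_w = eff_h // 2, eff_w // 2
    xp = np.pad(x, ((pad_h, pad_h), (pad_w, pad_w)))
    out = np.zeros_like(x, dtype=np.float64)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * xp[i * d:i * d + x.shape[0],
                                j * d:j * d + x.shape[1]]
    return out

def multi_dilation_shared(x, k, rates=(1, 2, 3)):
    """Apply one shared kernel at several dilation rates and average,
    covering small and large receptive fields with a single weight set."""
    return sum(dilated_conv2d(x, k, d) for d in rates) / len(rates)

x = np.arange(36, dtype=np.float64).reshape(6, 6)
k = np.ones((3, 3)) / 9.0   # simple averaging kernel for demonstration
y = multi_dilation_shared(x, k)
```

In a real detector the same trick would run per channel inside a learned block; here the point is only that the three branches share `k`, so the parameter count does not grow with the number of rates.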
Laser powder bed fusion (LPBF) enables high-precision manufacturing of complex metal components, and online monitoring of quality fluctuations and defects during forming is a current research focus. Targeting the LPBF process of Ti-6Al-4V alloy, this study develops an in-situ vision-based method for online monitoring and classification of formed-layer morphology, enabling prediction of forming quality. First, single-track melting experiments were used to systematically analyze melt-pool behavior and the optical morphology of formed layers under different combinations of laser power and scanning speed; the morphologies were partitioned by energy density into low-energy, optimal-energy, and high-energy regimes, establishing an experimental benchmark for subsequent classification labeling. Nine forming experiments with different process parameters were then conducted, with layer-by-layer images collected to characterize forming quality and build a quantitative "process parameters - layer morphology - forming quality" correlation. A multi-modal augmented dataset (geometric augmentation, noise injection, and illumination adjustment) was constructed from the collected images, and a YOLOv5s model was trained to learn the mapping between the layers' optical features and the energy-input state, enabling online identification and prediction of the forming-quality regime. Experimental results show that, after 100 training epochs, the model recognizes high-, medium-, and low-energy-density morphologies with over 97% accuracy (mAP > 0.90). The study reveals the correspondence, driven by process parameters, between forming quality and layer optical morphology, providing an engineering-ready route for online quality monitoring and real-time control of the LPBF process.
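The energy-density partition used for labeling can be made concrete with the standard volumetric energy density formula E = P / (v · h · t), where P is laser power, v scanning speed, h hatch spacing, and t layer thickness. The thresholds below are illustrative placeholders, not values from the study.

```python
def energy_density(power_w, speed_mm_s, hatch_mm, layer_mm):
    """Volumetric energy density in J/mm^3: E = P / (v * h * t)."""
    return power_w / (speed_mm_s * hatch_mm * layer_mm)

def classify_regime(e, low=40.0, high=80.0):
    """Map an energy density to a morphology regime.
    The 40/80 J/mm^3 thresholds are illustrative assumptions."""
    if e < low:
        return "low-energy"      # typical risk: lack of fusion
    if e > high:
        return "high-energy"     # typical risk: keyholing / overheating
    return "optimal-energy"

# 200 W, 1000 mm/s, 0.10 mm hatch, 0.03 mm layer -> 66.7 J/mm^3
e = energy_density(200, 1000, 0.10, 0.03)
regime = classify_regime(e)
```

In the paper's pipeline the classifier learns this regime directly from layer images; the arithmetic above only shows how the ground-truth labels are derived from process parameters.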
To improve the efficiency and accuracy of intelligent inspection of relay-protection hard pressure plates, this study proposes an improved You Only Look Once (YOLO) object detection algorithm. First, targeted improvements were made to YOLOv5: the Focus structure was replaced with ordinary convolution to suit mobile hardware constraints, and the ShuffleNet V2 architecture was adopted as the backbone to reduce the model's computation and parameter counts. A robotic intelligent inspection method was also designed, running the improved YOLO algorithm on mobile devices and using dynamic-size inference to improve inference efficiency. Results show that the improved YOLO achieves 92.3% precision and an average detection speed of 25 fps, outperforming the comparison algorithms. Under varying illumination and occlusion conditions, the robotic inspection system maintains high detection accuracy with low missed- and false-detection rates, demonstrating good environmental adaptability and inspection effectiveness. The method raises the level of intelligence in relay-protection pressure-plate detection and helps ensure the safe and stable operation of power systems.
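Dynamic-size inference typically means adapting each input's resolution to the network's stride instead of padding everything to one fixed shape, so small images cost less compute. A plausible sketch of the size calculation follows; the stride of 32 and the 640-pixel maximum side are common YOLO conventions assumed here, not values stated in the abstract.

```python
def dynamic_input_size(h, w, stride=32, max_side=640):
    """Scale the longer side down to max_side (never upscale), then round
    each dimension up to a multiple of the network stride so the detector's
    downsampling path accepts the tensor without further padding."""
    scale = min(max_side / max(h, w), 1.0)
    nh, nw = round(h * scale), round(w * scale)
    # Round up to the nearest multiple of the stride.
    nh = (nh + stride - 1) // stride * stride
    nw = (nw + stride - 1) // stride * stride
    return nh, nw
```

For example, a 1080x1920 camera frame is reduced to 384x640, while a 480x640 frame passes through unchanged; the per-image cost then tracks the actual content size.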
To avoid colliding with trees during operation, a lawn mower robot must detect them. Existing tree detection methods suffer from low detection accuracy (missed detections) and the lack of a lightweight model. In this study, a tree dataset was constructed on the basis of a real lawn environment. Following the idea of channel-incremental depthwise convolution and residual suppression, the Embedded-A module is proposed; it expands the depth of the feature map twice to form a residual structure, improving the lightness of the model. Following residual fusion theory, the Embedded-B module is proposed; it improves the accuracy of feature-map downsampling by fusing depthwise convolution and pooling. The Embedded YOLO object detection network is formed by stacking the embedded modules and fusing feature maps of different resolutions. Experimental results on the test set show that the Embedded YOLO tree detection algorithm achieves average precision values of 84.17% and 69.91% for trunk and spherical trees, respectively, and a mean average precision of 77.04%. The network has 1.78×10^6 convolution parameters and requires 3.85 billion floating-point operations; the weight file is 7.11 MB, and the detection speed reaches 179 frames/s. This study provides a theoretical basis for the lightweight application of deep-learning-based object detection for lawn mower robots.
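The parameter savings behind depthwise designs such as the Embedded modules are easy to quantify: a quick comparison of a standard 3×3 convolution against a depthwise-separable one. This is generic arithmetic for a single layer, not the exact layer counts of Embedded YOLO.

```python
def standard_conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution mixing all channels."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise stage: one k x k filter per input channel.
    Pointwise stage: a 1 x 1 convolution to mix channels."""
    return c_in * k * k + c_in * c_out

std = standard_conv_params(128, 128, 3)        # 128*128*9  = 147456
dws = depthwise_separable_params(128, 128, 3)  # 1152+16384 = 17536
```

For a 128-channel 3×3 layer the separable form needs roughly 8.4× fewer weights, which is why stacking such modules yields a 7.11 MB weight file rather than tens of megabytes.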
Funding: supported by the Wuhan Pilot Construction of a Strong Transportation Country Science and Technology Joint Research Project (No. 2024-1-10).
Funding: the National Natural Science Foundation of China (No. 51275223).