Abstract: In several important object recognition applications, such as intelligent robots and unmanned driving, images are collected consecutively and are correlated with one another, and the scenes exhibit stable prior features. Existing technologies, however, do not take full advantage of this information. To push object recognition beyond existing algorithms in these applications, an object recognition method that fuses temporal sequence information with scene prior information is proposed. The method first employs YOLOv3 as the base algorithm to recognize objects in single-frame images, then uses the DeepSort algorithm to associate potential objects recognized in images at different moments, and finally applies the confidence fusion method and temporal boundary processing method designed herein to fuse temporal sequence information with scene prior information at the decision level. Experiments on public datasets and self-built industrial scene datasets show that, owing to the expanded information sources, the quality of single-frame images has less impact on the recognition results, and object recognition is greatly improved. The method is presented as a widely applicable framework for fusing information across multiple classes: any object recognition algorithm that simultaneously outputs object class, location, and recognition confidence can be integrated into this fusion framework to improve performance.
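As an illustration of the decision-level fusion step, the sketch below fuses the per-frame YOLOv3 confidences collected along one DeepSort track and modulates the result with a per-class scene prior. The exponential-decay weighting, the multiplicative prior, and all names are assumptions chosen for illustration; the paper's exact fusion rule and temporal boundary processing are not reproduced here.

```python
# Minimal sketch (not the authors' exact rule) of decision-level confidence
# fusion along a DeepSort track: an exponentially decayed average of per-frame
# YOLOv3 confidences, modulated by a per-class scene prior.
import numpy as np

def fuse_track_confidence(confidences, scene_prior=1.0, decay=0.8):
    """Fuse per-frame confidences of one tracked object.

    confidences : list of floats, oldest frame first (hypothetical input format)
    scene_prior : assumed prior that this class appears in the scene
    decay       : weight decay applied to older frames (assumed value)
    """
    conf = np.asarray(confidences, dtype=float)
    # newer frames get larger weights: decay^(n-1), ..., decay, 1
    weights = decay ** np.arange(len(conf) - 1, -1, -1)
    fused = float(np.sum(weights * conf) / np.sum(weights))
    # modulate by the scene prior and clip to a valid confidence range
    return float(np.clip(fused * scene_prior, 0.0, 1.0))

# usage: a track observed over five frames in a scene where the class is common
print(fuse_track_confidence([0.35, 0.6, 0.55, 0.7, 0.8], scene_prior=0.9))
```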
Funding: Supported by the National Natural Science Foundation of China (Nos. 61308099 and 61304032)
Abstract: Because of its limited dynamic range, a camera cannot capture all the details of a high-dynamic-range scene. To address this problem, this paper presents a multi-exposure fusion method for obtaining high-quality images of high-dynamic-range scenes. First, a set of multi-exposure images of the same scene is acquired and their brightness conditions are analyzed. Then, the multi-exposure images are decomposed with the dual-tree complex wavelet transform (DT-CWT) to obtain their low- and high-frequency components. The low-frequency components are fused using weight maps assigned according to the brightness condition, while the high-frequency components are fused by maximizing the regional sum-modified-Laplacian (SML). Finally, the fused image is obtained by applying the inverse DT-CWT to the fused low- and high-frequency coefficients. Experimental results show that the proposed approach produces high-quality results with uniformly distributed brightness and rich detail, and that it is efficient and robust across various scenes.
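The sketch below illustrates the general shape of this pipeline. It substitutes PyWavelets' ordinary discrete wavelet transform for DT-CWT (the `dtcwt` package could be used instead), a well-exposedness Gaussian around mid-gray for the brightness-condition weight maps, and max-magnitude coefficient selection for the regional SML rule, so it is an approximation of the described method rather than a reimplementation.

```python
import numpy as np
import pywt

def fuse_exposures(images, wavelet="db2", level=3, sigma=0.2):
    """images: list of equally sized grayscale float arrays in [0, 1]."""
    decomps = [pywt.wavedec2(img, wavelet, level=level) for img in images]

    # Low-frequency fusion: weight each approximation band by how close its
    # (rescaled) value is to mid-gray, i.e. a simple well-exposedness measure.
    lows = np.stack([d[0] for d in decomps])            # (N, h, w)
    norm = lows / (2.0 ** level)                        # undo per-level DWT gain (approx.)
    weights = np.exp(-((norm - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True) + 1e-12
    fused = [np.sum(weights * lows, axis=0)]

    # High-frequency fusion: keep the coefficient with the largest magnitude
    # (a simplified stand-in for maximizing the regional SML).
    for lvl in range(1, level + 1):
        bands = []
        for b in range(3):                              # H, V, D detail bands
            stack = np.stack([d[lvl][b] for d in decomps])
            idx = np.abs(stack).argmax(axis=0)
            bands.append(np.take_along_axis(stack, idx[None], axis=0)[0])
        fused.append(tuple(bands))

    return pywt.waverec2(fused, wavelet)                # inverse transform
```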
Abstract: Objective: Industrial defect detection is a crucial step in modern industrial quality control. In industrial multimodal defect detection, the challenges are to capture defects of varying shapes and sizes that are weakly perceptible in RGB images, and to reduce the interference that noise in the single-modality raw feature spaces causes in multimodal information interaction. To address these challenges, a normalizing-flow-based multimodal, multi-scale defect detection method is proposed. Method: First, a Vision Transformer and a Point Transformer extract the features of the 1st, 3rd, and 11th blocks from the RGB image and 3D point cloud modalities, respectively, to build feature pyramids; retaining the spatial information of low-level features aids defect localization and improves robustness to defects of different shapes and sizes. Second, to simplify multimodal interaction, a point feature alignment algorithm projects the 3D point cloud features onto the RGB image plane, and unsupervised multimodal feature fusion is achieved by constructing a contrastive learning matrix, promoting information exchange between modalities. In addition, the information bottleneck mechanism is extended to the unsupervised setting through a designed pretext task, reducing noise interference while preserving as much of the original information as possible and yielding a stronger multimodal representation. Finally, a multi-scale normalizing flow structure captures feature information at different scales and enables interaction across scales. Result: The method is evaluated on the MVTec-3D AD dataset. The experimental results show a detection AUCROC (area under the receiver operating characteristic curve) of 93.3%, a segmentation AUPRO (area under the per-region overlap curve) of 96.1%, and a segmentation AUCROC of 98.8%, outperforming most existing multimodal defect detection methods. Conclusion: The proposed method detects defects of different shapes and sizes that are weakly perceptible in RGB images, reduces the impact of noise in the raw feature spaces on the multimodal representation, generalizes to defects of varying shapes and sizes, and thus meets the requirements of modern industrial defect detection.
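To illustrate the core idea behind normalizing-flow anomaly detection on fused patch features, the PyTorch sketch below implements a single RealNVP-style affine coupling block and scores patches by their negative log-likelihood under a Gaussian base, so that poorly modeled (low-likelihood) patches are flagged as defects. This is a generic illustration under stated assumptions, not the paper's multi-scale, cross-modal architecture, and the input is a stand-in for the fused RGB and point cloud features.

```python
import math
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One RealNVP-style affine coupling block over per-patch feature vectors."""
    def __init__(self, dim, hidden=256):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(x1).chunk(2, dim=1)
        s = torch.tanh(s)                       # bound the log-scales for stability
        z = torch.cat([x1, x2 * torch.exp(s) + t], dim=1)
        log_det = s.sum(dim=1)                  # log |det Jacobian| of the mapping
        return z, log_det

def anomaly_score(features, flow):
    """features: (N, D) fused patch features (hypothetical input format)."""
    z, log_det = flow(features)
    # log-likelihood under a standard normal base distribution
    log_pz = -0.5 * (z ** 2).sum(dim=1) - 0.5 * z.shape[1] * math.log(2 * math.pi)
    return -(log_pz + log_det)                  # higher score => more likely a defect

# usage on random 128-d patch features (stand-ins for the fused multimodal features)
flow = AffineCoupling(dim=128)
print(anomaly_score(torch.randn(16, 128), flow)[:4])
```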