Event cameras are a new type of bio-inspired vision sensor with high dynamic range, low latency, and freedom from motion blur. This paper proposes an event-and-image data fusion algorithm, named EI-Fusion (Event and Image Fusion), which exploits the complementarity between event cameras and conventional frame-based cameras to effectively improve image quality under complex lighting conditions. In addition, a 3-DoF pose estimation system based on optical-flow tracking is designed, and the fusion results are fed into it to further evaluate the algorithm's performance in pose estimation. Experimental results show that EI-Fusion reduces the mean APE (Absolute Pose Error) by 69% compared with the raw images, substantially improving the performance of the pose estimation framework in dim scenes.
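The APE metric reported above compares an estimated trajectory against ground truth. A minimal sketch of the mean translational APE for a 2-D (3-DoF) trajectory is shown below; the function name and the assumption that both trajectories are already expressed in a common frame are illustrative, not from the paper:

```python
import numpy as np

def absolute_pose_error(gt_xy, est_xy):
    """Mean translational APE between two aligned 2-D trajectories.

    gt_xy, est_xy: (N, 2) arrays of ground-truth / estimated positions,
    assumed to be expressed in the same reference frame.
    """
    errors = np.linalg.norm(gt_xy - est_xy, axis=1)  # per-frame position error
    return errors.mean()

gt = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
est = np.array([[0.0, 0.1], [1.0, -0.1], [2.0, 0.1]])
print(absolute_pose_error(gt, est))  # ≈ 0.1
```

In practice the estimated trajectory is usually rigidly aligned to the ground truth first; that alignment step is omitted here for brevity.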
In this article we study an estimation method for the nonparametric regression measurement error model based on validation data. The estimation procedures are based on orthogonal series estimation and truncated series approximation methods, without specifying any structural equation or distributional assumption. The convergence rates of the proposed estimator are derived. Examples and simulations show that the method is robust against misspecification of the measurement error model.
In this article, we develop estimation approaches for nonparametric multiple regression measurement error models when both independent validation data on covariables and primary data on the response variable and surrogate covariables are available. An estimator that integrates Fourier series estimation and truncated series approximation methods is derived without any structural assumption on the error model between the true covariables and surrogate variables. Most importantly, with the assistance of validation data, the proposed methodology can be readily extended to the case where only some of the covariates are measured with errors. Under mild conditions, we derive the convergence rates of the proposed estimators. The finite-sample properties of the estimators are investigated through simulation studies.
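The truncated orthogonal-series estimation underlying the two abstracts above can be sketched in a simplified, error-free setting. The cosine basis, the sample sizes, and all names below are illustrative choices, not details from the papers:

```python
import numpy as np

def cosine_basis(x, K):
    """Orthonormal cosine basis on [0, 1]: phi_0 = 1, phi_k = sqrt(2) cos(k*pi*x)."""
    cols = [np.ones_like(x)] + [np.sqrt(2) * np.cos(k * np.pi * x) for k in range(1, K + 1)]
    return np.column_stack(cols)

def series_regression(x, y, K):
    """Truncated-series estimate of m(x) = E[Y | X = x]:
    least-squares coefficients on the first K+1 basis functions."""
    Phi = cosine_basis(x, K)
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return lambda x_new: cosine_basis(np.asarray(x_new, dtype=float), K) @ theta

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 500)
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=500)  # true m(x) = sin(2*pi*x)
m_hat = series_regression(x, y, K=8)
print(m_hat([0.25]))  # close to sin(pi/2) = 1
```

The papers' actual estimators additionally correct for covariates observed only through surrogates, using the validation sample to link surrogate and true covariables; that step is beyond this sketch.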
Objective: Affected by occlusion and accumulated error, existing real-time 6D (6 dimensions) object pose tracking methods perform poorly in complex scenes. To address this, a highly robust real-time 6D pose tracking network for rigid objects is proposed. Method: In the overall network design, the current-frame color and depth images (RGB-D) and the previous frame's pose estimate are processed by dimension-raising residual sampling filtering and feature encoding to obtain the pose difference, which is combined with the previous pose estimate to compute the object's current 6D pose. In the residual sampling filtering module, the self-gated Swish activation function is adopted to preserve fine object features and improve pose tracking accuracy. In the feature aggregation module, the extracted features are decomposed into horizontal and vertical components that capture long-range dependencies in time and space while preserving positional information, generating a set of complementary, position- and time-aware feature maps that strengthen feature extraction and thus accelerate network convergence. Results: Experiments were conducted on the YCB-Video (Yale-CMU-Berkeley Video) and YCBInEoAT (Yale-CMU-Berkeley In End-of-Arm-Tooling) datasets. The proposed method tracks at 90.9 Hz, and its accuracy in terms of ADD (average distance of model points) and ADD-S (average closest point distance) reaches 93.24 and 95.84 respectively, surpassing comparable rigid-body pose tracking methods in both accuracy and speed. Compared with se(3)-TrackNet, the proposed method's ADD and ADD-S are higher by 25.95 and 30.91 when trained on only 6,000 synthetic samples, by 31.72 and 28.75 with 8,000 samples, and by 35.57 and 21.07 with 10,000 samples; it also achieves robust 6D pose tracking under severe occlusion. Conclusion: Driven by synthetic data, the proposed network tracks object 6D pose accurately in real time and converges quickly; the experimental results verify the effectiveness of the method.
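The ADD and ADD-S metrics used above are standard 6D-pose accuracy measures. A minimal sketch of both, assuming a model point cloud and ground-truth/estimated rigid transforms (all names and the toy point set are illustrative, not from the paper):

```python
import numpy as np

def add_metric(pts, R_gt, t_gt, R_est, t_est):
    """ADD: mean distance between corresponding model points under the
    ground-truth and estimated rigid transforms."""
    gt = pts @ R_gt.T + t_gt
    est = pts @ R_est.T + t_est
    return np.linalg.norm(gt - est, axis=1).mean()

def add_s_metric(pts, R_gt, t_gt, R_est, t_est):
    """ADD-S: for (near-)symmetric objects, each ground-truth point is
    matched to its closest estimated point instead of its correspondent."""
    gt = pts @ R_gt.T + t_gt
    est = pts @ R_est.T + t_est
    d = np.linalg.norm(gt[:, None, :] - est[None, :, :], axis=2)  # pairwise distances
    return d.min(axis=1).mean()

pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.0]])
I = np.eye(3)
t_err = np.array([0.01, 0.0, 0.0])  # pure 1 cm translation error
print(add_metric(pts, I, np.zeros(3), I, t_err))  # ≈ 0.01
```

Note that the abstract reports ADD/ADD-S as accuracy scores (higher is better), i.e. the area under the curve of the fraction of frames whose per-frame distance falls below a threshold, rather than the raw distances computed here.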