Abstract
To enhance the positioning accuracy of unmanned vehicles in structured scenarios, an inertial/visual navigation algorithm for autonomous vehicles assisted by scene structural semantics is proposed. First, a spatial-consistency-constrained depth estimation optimization method is developed, which corrects monocular depth estimates from a neural network through geometric constraints on ground anchor points in both the bird's-eye-view (BEV) and camera coordinate systems, thereby recovering scale and improving point cloud depth estimation accuracy. Second, a depth-assisted pose estimation method is presented, which reconstructs the 3D coordinates of line segment endpoints in a local Atlanta world and employs cosine similarity clustering to effectively eliminate erroneous line segments, enhancing the robustness of pose estimation. Finally, a BEV line-feature-constrained position optimization method is proposed, which incorporates line feature reprojection residuals and an event-semantic loop closure detection algorithm to improve both local and global positioning accuracy. Experimental results demonstrate that the proposed algorithm improves positioning accuracy by an average of 29.7%, 51.7%, and 58.1% compared with UV-SLAM, S-VIO, and Manhattan-SLAM, respectively.
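The abstract's line-rejection step — grouping reconstructed 3D line directions by cosine similarity and discarding segments that do not align with a dominant structural direction — can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact procedure: the greedy grouping strategy, the `thresh` value, and the `min_size` cutoff are all assumptions introduced here.

```python
import numpy as np

def cluster_by_cosine(directions, thresh=0.95):
    """Greedily group unit direction vectors whose |cosine similarity|
    to a cluster representative exceeds `thresh`. The absolute value
    makes grouping sign-agnostic, since a line's direction vector is
    only defined up to a sign flip."""
    clusters = []  # each entry: (representative unit vector, member indices)
    for i, d in enumerate(directions):
        d = np.asarray(d, dtype=float)
        d = d / np.linalg.norm(d)
        for rep, members in clusters:
            if abs(np.dot(rep, d)) >= thresh:
                members.append(i)
                break
        else:
            clusters.append((d, [i]))
    return clusters

def reject_outlier_lines(directions, thresh=0.95, min_size=2):
    """Keep only the lines whose direction falls into a cluster with at
    least `min_size` members, i.e. lines consistent with a dominant
    structural direction; isolated (likely erroneous) lines are dropped."""
    keep = []
    for _, members in cluster_by_cosine(directions, thresh):
        if len(members) >= min_size:
            keep.extend(members)
    return sorted(keep)
```

In an Atlanta-world scene, most valid segments align with a few dominant horizontal directions plus the vertical, so genuine structural lines form large clusters while mismatched or spurious segments end up in small ones and are rejected.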
Authors
YUAN Cheng; HAN Adong; LAI Jizhou; LYU Pin (Navigation Research Center, College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China)
Source
Journal of Chinese Inertial Technology (《中国惯性技术学报》), a Peking University Core Journal
2025, No. 12, pp. 1199-1208 (10 pages)
Funding
National Natural Science Foundation of China (62273178).
Keywords
visual simultaneous localization and mapping; autonomous vehicles; structural constraints; depth estimation; bird's eye view