Visual inertial odometry (VIO) problems have been extensively investigated in recent years. Existing VIO methods usually consider the localization or navigation of robots or autonomous vehicles in relatively small areas. This paper considers the problem of vision-aided inertial navigation (VIN) for aircraft equipped with a strapdown inertial navigation system (SINS) and a downward-viewing camera. This differs from traditional VIO problems in that the working area is larger and the inertial sensors are more precise. The goal is to utilize visual information to aid the SINS and improve navigation performance. Within the multi-state constraint Kalman filter (MSCKF) framework, we introduce an anchor frame to construct the necessary models and derive the corresponding Jacobians, implementing a VIN filter that uses feature measurements to directly update the position in the Earth-centered Earth-fixed (ECEF) frame and the velocity and attitude in the local-level frame. Owing to its filtering-based nature, the proposed method is computationally inexpensive and is suitable for applications with strict real-time requirements. Simulation and real-world data experiments demonstrate that the proposed method considerably improves navigation performance relative to the SINS alone.
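The abstract above states that the filter updates position directly in the ECEF frame. As background, the standard WGS-84 geodetic-to-ECEF conversion underlying such a formulation can be sketched as follows; this is the textbook formula, not the paper's filter code, and the function name is illustrative.

```python
import math

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Convert WGS-84 geodetic coordinates (degrees, metres) to ECEF (metres)."""
    a = 6378137.0               # WGS-84 semi-major axis
    f = 1.0 / 298.257223563     # WGS-84 flattening
    e2 = f * (2.0 - f)          # first eccentricity squared
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    # Prime-vertical radius of curvature at this latitude
    N = a / math.sqrt(1.0 - e2 * math.sin(lat) ** 2)
    x = (N + h) * math.cos(lat) * math.cos(lon)
    y = (N + h) * math.cos(lat) * math.sin(lon)
    z = (N * (1.0 - e2) + h) * math.sin(lat)
    return x, y, z

# Sanity check: a point on the equator at the prime meridian lies one
# semi-major axis from the Earth's centre along the x-axis.
x, y, z = geodetic_to_ecef(0.0, 0.0, 0.0)
```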
Abstract: In the traditional multi-state constraint Kalman filter (MSCKF) for indoor robot localization, the velocity and position state equations require integrating the accelerometer measurements of the IMU, which suffer from drift and accumulated error; moreover, the accelerometer is disturbed by gravity. This paper proposes an improved MSCKF algorithm. The improved algorithm avoids the accelerometer entirely: exploiting the relatively accurate translation measurements of a wheel odometer, it fuses the IMU gyroscope data with the wheel-odometer data and revises the extended Kalman filter (EKF) state equations of the MSCKF. First, the angular-velocity data from the gyroscope yield the improved EKF attitude equation; then the translation data from the wheel odometer, combined with the rotation information from the attitude equation, yield the improved EKF velocity and position equations. Finally, both the MSCKF and the improved algorithm are implemented on the Robot Operating System (ROS) and validated in indoor experiments with a Turtlebot2 robot. The experimental results show that the trajectory of the improved MSCKF is closer to the ground truth and its localization accuracy is higher than before the improvement: the average loop-closure error drops from 0.429 m to 0.348 m.
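The propagation idea described above (heading from the gyroscope, translation from the wheel odometer, rotated into the world frame) can be illustrated with a minimal planar dead-reckoning sketch. This is a simplified illustration under a 2D assumption, with hypothetical function and variable names, not the paper's actual EKF state equations.

```python
import math

def propagate(pose, omega, d, dt):
    """One dead-reckoning step: gyro yaw rate `omega` (rad/s) over `dt`
    seconds updates the heading; wheel-odometer travel `d` (metres) is
    rotated by the new heading into the world frame."""
    x, y, theta = pose
    theta += omega * dt                 # integrate gyro for attitude
    x += d * math.cos(theta)            # rotate body-frame translation
    y += d * math.sin(theta)            # into the world frame
    return (x, y, theta)

# Example: ten straight-line steps of 0.1 m with zero yaw rate
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = propagate(pose, omega=0.0, d=0.1, dt=0.01)
```

Note that the accelerometer never appears: velocity and position come from the odometer's direct translation measurement rather than double-integrating specific force, which is the source of the drift the improved algorithm avoids.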
Funding: supported by the National Natural Science Foundation of China (61773306).