Journal Articles
36 articles found
1. NeOR: neural exploration with feature-based visual odometry and tracking-failure-reduction policy
Authors: ZHU Ziheng, LIU Jialing, CHEN Kaiqi, TONG Qiyi, LIU Ruyu. Optoelectronics Letters, 2025, Issue 5, pp. 290-297 (8 pages).
Embodied visual exploration is critical for building intelligent visual agents. This paper presents neural exploration with feature-based visual odometry and a tracking-failure-reduction policy (NeOR), a framework for embodied visual exploration that possesses the efficient exploration capabilities of deep reinforcement learning (DRL)-based exploration policies and leverages feature-based visual odometry (VO) for more accurate mapping and positioning results. An improved local policy is also proposed to reduce tracking failures of feature-based VO in weakly textured scenes through a refined multi-discrete action space, keyframe fusion, and an auxiliary task. The experimental results demonstrate that NeOR has better mapping and positioning accuracy compared to other entirely learning-based exploration frameworks and improves the robustness of feature-based VO by significantly reducing tracking failures in weakly textured scenes.
Keywords: intelligent visual agents; deep reinforcement learning (DRL); embodied visual exploration; feature-based visual odometry; tracking-failure-reduction policy; neural exploration
2. Dynamic SLAM Visual Odometry Based on Instance Segmentation: A Comprehensive Review
Authors: Jiansheng Peng, Qing Yang, Dunhua Chen, Chengjun Yang, Yong Xu, Yong Qin. Computers, Materials & Continua (SCIE, EI), 2024, Issue 1, pp. 167-196 (30 pages).
Dynamic Simultaneous Localization and Mapping (SLAM) in visual scenes is currently a major research area in fields such as robot navigation and autonomous driving. However, in the face of complex real-world environments, current dynamic SLAM systems struggle to achieve precise localization and map construction. With the advancement of deep learning, there has been increasing interest in deep learning-based dynamic SLAM visual odometry in recent years, and more researchers are turning to deep learning techniques to address the challenges of dynamic SLAM. Compared to dynamic SLAM systems based on deep learning methods such as object detection and semantic segmentation, dynamic SLAM systems based on instance segmentation can not only detect dynamic objects in the scene but also distinguish different instances of the same type of object, thereby reducing the impact of dynamic objects on the SLAM system's positioning. This article not only introduces traditional dynamic SLAM systems based on mathematical models but also provides a comprehensive analysis of existing instance segmentation algorithms and dynamic SLAM systems based on instance segmentation, comparing and summarizing their advantages and disadvantages. Through comparisons on datasets, it is found that instance segmentation-based methods have significant advantages in accuracy and robustness in dynamic environments. However, the real-time performance of instance segmentation algorithms hinders the widespread application of dynamic SLAM systems. In recent years, the rapid development of single-stage instance segmentation methods has brought hope for the widespread application of dynamic SLAM systems based on instance segmentation. Finally, possible future research directions and improvement measures are discussed for reference by relevant professionals.
Keywords: dynamic SLAM; instance segmentation; visual odometry
3. Lightweight hybrid visual-inertial odometry with closed-form zero velocity update (Cited by 7)
Authors: QIU Xiaochen, ZHANG Hai, FU Wenxing. Chinese Journal of Aeronautics (SCIE, EI, CAS, CSCD), 2020, Issue 12, pp. 3344-3359 (16 pages).
Visual-Inertial Odometry (VIO) fuses measurements from a camera and an Inertial Measurement Unit (IMU) to achieve performance better than using either sensor individually. Hybrid VIO is an extended Kalman filter-based solution which augments features with long tracking length into the state vector of the Multi-State Constraint Kalman Filter (MSCKF). In this paper, a novel hybrid VIO is proposed, which focuses on utilizing low-cost sensors while considering both computational efficiency and positioning precision. The proposed algorithm introduces several novel contributions. Firstly, by deducing an analytical error transition equation, one-dimensional inverse depth parametrization is utilized to parametrize the augmented feature state. This modification is shown to significantly improve the computational efficiency and numerical robustness, as a result achieving higher precision. Secondly, for better handling of static scenes, a novel closed-form Zero velocity UPdaTe (ZUPT) method is proposed. ZUPT is modeled as a measurement update for the filter rather than crudely forbidding propagation, which has the advantage of correcting the overall state through correlation in the filter covariance matrix. Furthermore, online spatial and temporal calibration is also incorporated. Experiments are conducted on both a public dataset and real data. The results demonstrate the effectiveness of the proposed solution by showing that its performance is better than the baseline and state-of-the-art algorithms in terms of both efficiency and precision. The related software is open-sourced to benefit the community.
Keywords: inverse depth parametrization; Kalman filter; online calibration; visual-inertial odometry; zero velocity update
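The abstract describes ZUPT as a Kalman measurement update (a measurement z = 0 on velocity) rather than a halt of propagation, so that the covariance correlation also corrects the other states. A minimal 1-D sketch of that idea, not the paper's implementation; the state, covariance, and noise values below are invented for illustration:

```python
# Hypothetical sketch: a zero-velocity update applied as a Kalman
# measurement update on a 2-state [position, velocity] filter.
# Because P is correlated, zeroing the velocity also corrects position.

def zupt_update(x, P, r=1e-4):
    """Measurement z = 0 on the velocity state: H = [0, 1]."""
    # Innovation: z - H x = -velocity
    y = -x[1]
    # Innovation covariance S = H P H^T + R
    s = P[1][1] + r
    # Kalman gain K = P H^T / S (a 2x1 vector)
    k = [P[0][1] / s, P[1][1] / s]
    x_new = [x[0] + k[0] * y, x[1] + k[1] * y]
    # Covariance update P = (I - K H) P
    P_new = [
        [P[0][0] - k[0] * P[1][0], P[0][1] - k[0] * P[1][1]],
        [P[1][0] - k[1] * P[1][0], P[1][1] - k[1] * P[1][1]],
    ]
    return x_new, P_new

# A drifted filter at standstill: spurious velocity, correlated covariance.
x = [1.0, 0.2]
P = [[0.5, 0.1], [0.1, 0.2]]
x, P = zupt_update(x, P)
```

After the update the velocity estimate collapses toward zero and, through the off-diagonal covariance term, the position estimate is pulled back as well, which is exactly the advantage over simply freezing propagation.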
4. Overfitting Reduction of Pose Estimation for Deep Learning Visual Odometry (Cited by 5)
Authors: Xiaohan Yang, Xiaojuan Li, Yong Guan, Jiadong Song, Rui Wang. China Communications (SCIE, CSCD), 2020, Issue 6, pp. 196-210 (15 pages).
Error or drift is frequently produced in pose estimation based on geometric "feature detection and tracking" monocular visual odometry (VO) when the speed of camera movement exceeds 1.5 m/s. Moreover, in most VO methods based on deep learning, weight factors take fixed values, which makes the networks prone to overfitting. A new measurement system for monocular visual odometry, named Deep Learning Visual Odometry (DLVO), is proposed based on neural networks. In this system, a Convolutional Neural Network (CNN) is used to extract features and perform feature matching, and a Recurrent Neural Network (RNN) is used for sequence modeling to estimate the camera's 6-DoF poses. Instead of fixed CNN weight values, a Bayesian distribution over the weight factors is introduced in order to effectively solve the problem of network overfitting. The 18,726 frame images in the KITTI dataset are used for training the network. This system can increase the generalization ability of the network model in the prediction process. Compared with the original Recurrent Convolutional Neural Network (RCNN), our method can reduce the loss of the test model by 5.33%. It is also more effective than traditional VO methods at improving the robustness of translation and rotation estimates.
Keywords: visual odometry; neural network; pose estimation; Bayesian distribution; overfitting
5. Accurate parameter estimation of systematic odometry errors for two-wheel differential mobile robots (Cited by 3)
Authors: Changbae Jung, Woojin Chung. Journal of Measurement Science and Instrumentation (CAS), 2012, Issue 3, pp. 268-272 (5 pages).
Odometry using incremental wheel encoder sensors provides relative robot pose estimation. However, odometry suffers from the accumulation of kinematic modeling errors of the wheels as the robot's travel distance increases. Therefore, the systematic errors need to be calibrated. The University of Michigan Benchmark (UMBmark) method is a widely used calibration scheme for the systematic errors of two-wheel differential mobile robots. In this paper, accurate parameter estimation of systematic errors is proposed by extending the conventional method. The contributions of this paper can be summarized as two issues. The first contribution is to present new calibration equations that reduce the systematic odometry errors. The new equations were derived to overcome the limitation of conventional schemes. The second contribution is to propose a design guideline for the test track used in calibration experiments. The calibration performance can be improved by appropriate design of the test track. The simulations and experimental results show that accurate parameter estimation can be implemented by the proposed method.
Keywords: calibration; kinematic modeling errors; mobile robots; odometry; test tracks
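The systematic errors that UMBmark-style methods calibrate (wheel diameter and wheelbase errors) enter through the differential-drive kinematic model. A minimal dead-reckoning sketch of that model, with illustrative parameter values not taken from the paper:

```python
import math

# Hypothetical sketch of two-wheel differential-drive dead reckoning,
# the kinematic model whose systematic parameters (wheel diameters,
# wheelbase) a UMBmark-style procedure estimates.

def propagate(pose, d_left, d_right, wheelbase):
    """Advance (x, y, theta) by incremental left/right wheel distances."""
    x, y, theta = pose
    d = (d_left + d_right) / 2.0            # travel of the robot center
    dtheta = (d_right - d_left) / wheelbase  # heading change
    # Midpoint approximation of the arc
    x += d * math.cos(theta + dtheta / 2.0)
    y += d * math.sin(theta + dtheta / 2.0)
    return (x, y, theta + dtheta)

# Drive a 1 m square (the classic UMBmark-style test path): four straight
# legs with in-place 90-degree turns. With error-free parameters the robot
# returns to the origin; wheel-parameter errors would show up as a nonzero
# return-position error.
WHEELBASE = 0.4
pose = (0.0, 0.0, 0.0)
for _ in range(4):
    for _ in range(100):                     # 1 m leg in 10 mm increments
        pose = propagate(pose, 0.01, 0.01, WHEELBASE)
    half = (math.pi / 2) * WHEELBASE / 2     # in-place 90-degree turn
    pose = propagate(pose, -half, half, WHEELBASE)
```

Running the same path clockwise and counter-clockwise and comparing the return-position errors is the core of the benchmark idea the paper extends.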
6. Science Letters: Visual odometry for road vehicles—feasibility analysis (Cited by 2)
Authors: SOTELO Miguel-angel, GARCÍA Roberto, PARRA Ignacio, FERNÁNDEZ David, GAVILÁN Miguel, ÁLVAREZ Sergio, NARANJO José-eugenio. Journal of Zhejiang University-Science A (Applied Physics & Engineering) (SCIE, EI, CAS, CSCD), 2007, Issue 12, pp. 2017-2020 (4 pages).
Estimating the global position of a road vehicle without using GPS is a challenge that many scientists look forward to solving in the near future. Normally, inertial and odometry sensors are used to complement GPS measurements in an attempt to provide a means for maintaining vehicle odometry during GPS outages. Nonetheless, recent experiments have demonstrated that computer vision can also be used as a valuable source to provide what can be denoted as visual odometry. For this purpose, vehicle motion can be estimated using a non-linear, photogrammetric approach based on RAndom SAmple Consensus (RANSAC). The results prove that the detection and selection of relevant feature points is a crucial factor in the global performance of the visual odometry algorithm. The key issues for further improvement are discussed in this letter.
Keywords: 3D visual odometry; ego-motion estimation; RAndom SAmple Consensus (RANSAC); photogrammetric approach
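RANSAC's hypothesize-and-verify loop, which the letter applies to photogrammetric ego-motion estimation, can be illustrated on a much simpler model. A sketch fitting a 2-D line, with invented data, thresholds, and iteration count:

```python
import random

# Generic RANSAC sketch: repeatedly fit a model (here y = a*x + b) to a
# minimal sample, count inliers within a tolerance, and keep the model
# with the largest consensus set. The motion-estimation case swaps the
# line model for a rigid-body transform between matched feature points.

def ransac_line(points, iters=200, tol=0.1, seed=0):
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)  # minimal sample
        if x1 == x2:
            continue                                # degenerate pair
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# 20 points on y = 2x + 1 plus three gross outliers (bad feature matches).
pts = [(x / 10.0, 2 * x / 10.0 + 1) for x in range(20)]
pts += [(0.5, 9.0), (1.2, -4.0), (1.7, 7.5)]
model, inliers = ransac_line(pts)
```

The consensus set discards the outliers entirely, which is why feature selection quality dominates overall performance: the more contaminated matches, the more iterations RANSAC needs to hit an all-inlier sample.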
7. Human Visual Attention Mechanism-Inspired Point-and-Line Stereo Visual Odometry for Environments with Uneven Distributed Features (Cited by 1)
Authors: Chang Wang, Jianhua Zhang, Yan Zhao, Youjie Zhou, Jincheng Jiang. Chinese Journal of Mechanical Engineering (SCIE, EI, CAS, CSCD), 2023, Issue 3, pp. 191-204 (14 pages).
Visual odometry is critical in visual simultaneous localization and mapping for robot navigation. However, the pose estimation performance of most current visual odometry algorithms degrades in scenes with unevenly distributed features because dense features occupy excessive weight. Herein, a new human visual attention mechanism for point-and-line stereo visual odometry, called point-line-weight-mechanism visual odometry (PLWM-VO), is proposed to describe scene features in a global and balanced manner. A weight-adaptive model based on region partition and region growth is generated for the human visual attention mechanism, where sufficient attention is assigned to position-distinctive objects (sparse features in the environment). Furthermore, the sum of absolute differences algorithm is used to improve the accuracy of initialization for line features. Compared with the state-of-the-art method (ORB-VO), PLWM-VO shows a 36.79% reduction in the absolute trajectory error on the KITTI and EuRoC datasets. Although the time consumption of PLWM-VO is higher than that of ORB-VO, online test results indicate that PLWM-VO satisfies the real-time demand. The proposed algorithm not only significantly promotes the environmental adaptability of visual odometry, but also quantitatively demonstrates the superiority of the human visual attention mechanism.
Keywords: visual odometry; human visual attention mechanism; environmental adaptability; unevenly distributed features
8. Real-time Visual Odometry Estimation Based on Principal Direction Detection on Ceiling Vision (Cited by 2)
Authors: Han Wang, Wei Mou, Gerald Seet, Mao-Hai Li, M.W.S. Lau, Dan-Wei Wang. International Journal of Automation and Computing (EI, CSCD), 2013, Issue 5, pp. 397-404 (8 pages).
In this paper, we present a novel algorithm for odometry estimation based on ceiling vision. The main contribution of this algorithm is the introduction of principal direction detection, which can greatly reduce the error accumulation problem in most visual odometry estimation approaches. The principal direction is defined based on the fact that the ceiling is filled with artificial vertical and horizontal lines, which can be used as a reference for the robot's current heading direction. The proposed approach can be operated in real time and performs well even with camera disturbance. A moving low-cost RGB-D camera (Kinect), mounted on a robot, is used to continuously acquire point clouds. Iterative closest point (ICP) is the common way to estimate the current camera position by registering the currently captured point cloud to the previous one. However, its performance suffers from the data association problem or requires pre-alignment information. The performance of the proposed principal direction detection approach does not rely on data association knowledge. Using this method, two point clouds are properly pre-aligned, and ICP can then fine-tune the transformation parameters and minimize the registration error. Experimental results demonstrate the performance and stability of the proposed system under disturbance in real time. Several indoor tests are carried out to show that the proposed visual odometry estimation method can help to significantly improve the accuracy of simultaneous localization and mapping (SLAM).
Keywords: visual odometry; ego-motion; principal direction; ceiling vision; simultaneous localization and mapping (SLAM)
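One simple way to realize the "principal direction" idea, assuming ceiling lines are mutually parallel or perpendicular, is to fold detected line orientations into a 90-degree period and take the dominant cluster. This is a hypothetical sketch of that idea, not the paper's algorithm; the bin width and angles are invented:

```python
# Hypothetical principal-direction sketch: ceiling lines belong to two
# perpendicular families, so folding all orientations modulo 90 degrees
# collapses both families onto one dominant angle, which can serve as an
# absolute heading reference that does not accumulate drift.

def principal_direction(angles_deg, bin_width=5.0):
    """Return the mean angle of the dominant cluster, folded into [0, 90)."""
    folded = [a % 90.0 for a in angles_deg]
    bins = {}
    for a in folded:
        bins.setdefault(int(a // bin_width), []).append(a)
    peak = max(bins.values(), key=len)     # histogram peak
    return sum(peak) / len(peak)

# Lines near 30 deg and near 120 deg (the perpendicular family),
# with small measurement noise.
angles = [29.0, 30.5, 31.0, 119.0, 120.5, 121.0, 30.2]
theta = principal_direction(angles)
```

Because the folded direction is re-measured absolutely in every frame, heading error does not accumulate the way frame-to-frame registration error does.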
9. A Study on Planetary Visual Odometry Optimization: Time Constraints and Reliability (Cited by 1)
Authors: Enrica Zereik, Davide Ducco, Fabio Frassinelli, Giuseppe Casalino. Computer Technology and Application, 2011, Issue 5, pp. 378-388 (11 pages).
Robust and efficient vision systems are essential to support different kinds of autonomous robotic behaviors linked to the capability to interact with the surrounding environment, without relying on any a priori knowledge. Within space missions, above all those involving rovers that have to explore planetary surfaces, vision can play a key role in the improvement of autonomous navigation functionalities: besides obstacle avoidance and hazard detection along the way, vision can provide accurate motion estimation in order to constantly monitor all paths executed by the rover. The present work regards the development of an effective visual odometry system, focusing as much as possible on issues such as continuous operating mode, system speed, and reliability.
Keywords: visual odometry; stereo vision; speeded-up robust features (SURF); planetary rover
10. Semi-Direct Visual Odometry and Mapping System with an RGB-D Camera
Authors: Xinliang Zhong, Xiao Luo, Jiaheng Zhao, Yutong Huang. Journal of Beijing Institute of Technology (EI, CAS), 2019, Issue 1, pp. 83-93 (11 pages).
In this paper, a semi-direct visual odometry and mapping system with an RGB-D camera is proposed, which combines the merits of both feature-based and direct methods. The presented system directly estimates the camera motion between two consecutive RGB-D frames by minimizing the photometric error. To tolerate outliers and noise, a robust sensor model built upon the t-distribution and an error function mixing depth and photometric errors are used to enhance the accuracy and robustness. Local graph optimization based on key frames is used to reduce the accumulated error and refine the local map. The loop closure detection method, which combines the appearance similarity method and the spatial location constraints method, increases the speed of detection. Experimental results demonstrate that the proposed approach achieves higher accuracy in motion estimation and environment reconstruction compared to other state-of-the-art methods. Moreover, the proposed approach works in real time on a laptop without a GPU, which makes it attractive for robots equipped with limited computational resources.
Keywords: RGB-D; simultaneous localization and mapping (SLAM); visual odometry; localization; 3D mapping; loop closure detection
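The direct-alignment idea of estimating motion by minimizing photometric error can be shown in one dimension: recover the shift between two scanlines by minimizing the sum of squared intensity differences. A toy sketch with invented data; the real system minimizes this error over a full 6-DoF warp using depth:

```python
# Toy 1-D "direct alignment": treat each list as a row of pixel
# intensities, and search for the integer shift that minimizes the
# mean squared photometric error between reference and current frames.

def photometric_error(ref, cur, shift):
    """Mean squared intensity difference over the overlapping pixels."""
    pairs = [(ref[i], cur[i + shift]) for i in range(len(ref))
             if 0 <= i + shift < len(cur)]
    return sum((a - b) ** 2 for a, b in pairs) / len(pairs)

def estimate_shift(ref, cur, search=5):
    """Brute-force search over candidate shifts (stand-in for the
    gradient-based optimization a real system would use)."""
    return min(range(-search, search + 1),
               key=lambda s: photometric_error(ref, cur, s))

ref = [0, 0, 10, 50, 90, 50, 10, 0, 0, 0]
cur = [0, 0, 0, 0, 10, 50, 90, 50, 10, 0]   # ref shifted right by 2
shift = estimate_shift(ref, cur)
```

The same objective, evaluated over image patches and parameterized by camera pose instead of a scalar shift, is what "minimizing the photometric error" means in the abstract.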
11. PC-VINS-Mono: A Robust Mono Visual-Inertial Odometry with Photometric Calibration
Authors: Yao Xiao, Xiaogang Ruan, Xiaoqing Zhu. Journal of Autonomous Intelligence, 2018, Issue 2, pp. 29-35 (7 pages).
Feature detection and tracking, which rely heavily on the gray-value information of images, is a very important procedure for Visual-Inertial Odometry (VIO), and the tracking results significantly affect the accuracy of the estimation results and the robustness of the VIO. In environments with high-contrast lighting, images captured by an auto-exposure camera change frequently with the exposure time. As a result, the gray value of the same feature varies from frame to frame, which poses a large challenge to the feature detection and tracking procedure. Moreover, this problem is further aggravated by the nonlinear camera response function and lens attenuation. However, very few VIO methods take full advantage of photometric camera calibration or discuss its influence on VIO. In this paper, we propose a robust monocular visual-inertial odometry, PC-VINS-Mono, which can be understood as an extension of the open-source VIO pipeline VINS-Mono with the capability of photometric calibration. We evaluate the proposed algorithm on a public dataset. Experimental results show that, with photometric calibration, our algorithm achieves better performance compared to VINS-Mono.
Keywords: photometric calibration; visual-inertial odometry; simultaneous localization and mapping; robot navigation
12. Legged odometry based on fusion of leg kinematics and IMU information in a humanoid robot
Authors: Huailiang Ma, Aiguo Song, Jingwei Li, Ligang Ge, Chunjiang Fu, Guoteng Zhang. Biomimetic Intelligence & Robotics, 2025, Issue 1, pp. 87-94 (8 pages).
Position and velocity estimation are key technologies for improving the motion control ability of humanoid robots. Aiming to solve the positioning problem of humanoid robots, we have designed a legged odometry algorithm based on forward kinematics and the feedback of an IMU. We modeled the forward kinematics of the leg of the humanoid robot and used a Kalman filter to fuse the kinematic information with IMU data, resulting in an accurate estimate of the humanoid robot's position and velocity. This odometry method can be applied to different humanoid robots, requiring only that the robot is equipped with joint encoders and an IMU. It can also be extended to other legged robots. The effectiveness of the legged odometry scheme was demonstrated through simulations and physical tests conducted with the Walker2 humanoid robot.
Keywords: humanoid robots; state estimation; legged odometry; Kalman filter
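The fusion pattern described above (IMU data driving the prediction, leg-kinematics-derived velocity serving as the measurement) can be sketched in one dimension. This is a minimal illustration, not the paper's filter; all noise parameters below are invented:

```python
# Minimal 1-D Kalman fusion sketch: predict velocity with the IMU
# acceleration, correct it with the velocity computed from leg forward
# kinematics (joint encoders). q and r are illustrative noise variances.

def kf_step(v, p_var, accel, dt, v_kin, q=0.01, r=0.04):
    # Predict: integrate IMU acceleration (drifts if left uncorrected)
    v_pred = v + accel * dt
    p_pred = p_var + q
    # Update: leg-kinematics velocity as the measurement
    k = p_pred / (p_pred + r)          # Kalman gain
    v_new = v_pred + k * (v_kin - v_pred)
    return v_new, (1 - k) * p_pred

# Robot walking at a constant 0.5 m/s; IMU reports zero acceleration,
# leg kinematics repeatedly measures the true velocity.
v, p_var = 0.0, 1.0
for _ in range(50):
    v, p_var = kf_step(v, p_var, accel=0.0, dt=0.02, v_kin=0.5)
```

The estimate converges to the kinematic velocity while the IMU keeps the filter responsive between contacts; integrating the fused velocity then yields the position estimate.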
13. KLT-VIO: Real-time Monocular Visual-Inertial Odometry
Authors: Yuhao Jin, Hang Li, Shoulin Yin. IJLAI Transactions on Science and Engineering, 2024, Issue 1, pp. 8-16 (9 pages).
This paper proposes a Visual-Inertial Odometry (VIO) algorithm that relies solely on a monocular camera and an Inertial Measurement Unit (IMU), capable of real-time self-position estimation for robots during movement. By integrating the optical flow method, the algorithm tracks both point and line features in images simultaneously, significantly reducing the computational complexity and the matching time for line feature descriptors. Additionally, this paper advances the triangulation method for line features, using depth information from line segment endpoints to determine their Plücker coordinates in three-dimensional space. Tests on the EuRoC datasets show that the proposed algorithm outperforms PL-VIO in terms of processing speed per frame, with an approximate 5% to 10% improvement in both relative pose error (RPE) and absolute trajectory error (ATE). These results demonstrate that the proposed VIO algorithm is an efficient solution suitable for low-computing platforms requiring real-time localization and navigation.
Keywords: visual-inertial odometry; optical flow; point features; line features; bundle adjustment
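The line triangulation described above reduces to a standard construction: given 3-D endpoints p1 and p2 recovered from endpoint depths, the Plücker coordinates of the supporting line are the direction d = p2 - p1 and the moment m = p1 x p2, which always satisfy the constraint d . m = 0. A small sketch of that construction (endpoint values are invented):

```python
# Plücker coordinates of a 3-D line from two endpoints. The moment m is
# independent of which points on the line are chosen, which is what makes
# (d, m) a well-defined line parameterization for bundle adjustment.

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def plucker_from_endpoints(p1, p2):
    d = tuple(q - p for p, q in zip(p1, p2))   # direction vector
    m = cross(p1, p2)                          # moment vector
    return d, m

# Two endpoints recovered by back-projecting segment ends with depth.
p1, p2 = (1.0, 0.0, 2.0), (3.0, 1.0, 2.0)
d, m = plucker_from_endpoints(p1, p2)
dot = sum(a * b for a, b in zip(d, m))         # Plücker constraint: 0
```

Any optimizer updating (d, m) must preserve both the constraint d . m = 0 and the scale ambiguity, which is why line parameterizations in VIO are handled with care.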
14. Vision-aided inertial navigation for low altitude aircraft with a downward-viewing camera
Authors: ZHOU Ruihu, TONG Mengqi, GAO Yongxin. Journal of Systems Engineering and Electronics, 2025, Issue 3, pp. 825-834 (10 pages).
Visual inertial odometry (VIO) problems have been extensively investigated in recent years. Existing VIO methods usually consider the localization or navigation issues of robots or autonomous vehicles in relatively small areas. This paper considers the problem of vision-aided inertial navigation (VIN) for aircraft equipped with a strapdown inertial navigation system (SINS) and a downward-viewing camera, which differs from traditional VIO problems in having a larger working area and more precise inertial sensors. The goal is to utilize visual information to aid the SINS and improve the navigation performance. In the multi-state constraint Kalman filter (MSCKF) framework, we introduce an anchor frame to construct the necessary models and derive the corresponding Jacobians to implement a VIN filter that directly updates the position in the Earth-centered Earth-fixed (ECEF) frame and the velocity and attitude in the local level frame from feature measurements. Due to its filtering-based nature, the proposed method is naturally computationally light and is suitable for applications with strict real-time requirements. Simulation and real-world data experiments demonstrate that the proposed method can considerably improve the navigation performance relative to the SINS alone.
Keywords: visual inertial odometry (VIO); strapdown inertial navigation system (SINS); multi-state constraint Kalman filter (MSCKF)
15. Fast and accurate visual odometry from a monocular camera (Cited by 2)
Authors: Xin YANG, Tangli XUE, Hongcheng LUO, Jiabin GUO. Frontiers of Computer Science (SCIE, EI, CSCD), 2019, Issue 6, pp. 1326-1336 (11 pages).
This paper aims at a semi-dense visual odometry system that is accurate, robust, and able to run in real time on mobile devices such as smartphones, AR glasses, and small drones. The key contributions of our system include: 1) a modified pyramidal Lucas-Kanade algorithm that incorporates spatial and depth constraints for fast and accurate camera pose estimation; 2) adaptive image resizing based on inertial sensors, which greatly accelerates tracking with little accuracy degradation; and 3) an ultrafast binary feature description based directly on the intensities of a resized and smoothed image patch around each pixel, which is sufficiently effective for relocalization. A quantitative evaluation on public datasets demonstrates that our system achieves better tracking accuracy and up to about 2x faster tracking speed compared to the state-of-the-art monocular SLAM system LSD-SLAM. For the relocalization task, our system is 2.0x to 4.6x faster than DBoW2 and achieves a similar accuracy.
Keywords: visual odometry; mobile devices; direct tracking; relocalization; inertial sensing; binary feature
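A binary descriptor built directly from patch intensities, as contribution 3) describes, reduces to pairwise intensity comparisons packed into bits, matched by Hamming distance. A BRIEF-style sketch with an arbitrary, invented sampling pattern (not the paper's):

```python
# BRIEF-like binary descriptor sketch: for a pixel at (y, x), compare
# intensities at fixed offset pairs inside the surrounding patch and
# pack each comparison result into one bit. Matching two descriptors
# is then a cheap Hamming distance (XOR + popcount).

PAIRS = [((0, 0), (1, 1)), ((-1, 0), (0, 1)), ((1, -1), (-1, 1)),
         ((0, -1), (1, 0)), ((-1, -1), (1, 1)), ((0, 1), (0, -1))]

def describe(img, y, x):
    bits = 0
    for (dy1, dx1), (dy2, dx2) in PAIRS:
        bits <<= 1
        if img[y + dy1][x + dx1] < img[y + dy2][x + dx2]:
            bits |= 1
    return bits

def hamming(a, b):
    return bin(a ^ b).count("1")

img = [[10, 20, 30],
       [40, 50, 60],
       [70, 80, 90]]
d1 = describe(img, 1, 1)
d2 = describe(img, 1, 1)   # same patch, identical descriptor
```

Real descriptors use hundreds of comparison pairs over a smoothed patch, but the cost model is the same: no floating-point work at description time and bitwise matching, which is what makes the relocalization step fast on mobile hardware.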
16. Design of an enhanced visual odometry by building and matching compressive panoramic landmarks online (Cited by 2)
Authors: Wei LU, Zhi-yu XIANG, Ji-lin LIU. Frontiers of Information Technology & Electronic Engineering (SCIE, EI, CSCD), 2015, Issue 2, pp. 152-165 (14 pages).
Efficient and precise localization is a prerequisite for the intelligent navigation of mobile robots. Traditional visual localization systems, such as visual odometry (VO) and simultaneous localization and mapping (SLAM), suffer from two shortcomings: a drift problem caused by accumulated localization error, and erroneous motion estimation due to illumination variation and moving objects. In this paper, we propose an enhanced VO by introducing a panoramic camera into the traditional stereo-only VO system. Benefiting from the 360° field of view, the panoramic camera is responsible for three tasks: (1) detecting road junctions and building a landmark library online; (2) correcting the robot's position when the landmarks are revisited with any orientation; (3) working as a panoramic compass when the stereo VO cannot provide reliable positioning results. To use the large-sized panoramic images efficiently, the concept of compressed sensing is introduced into the solution and an adaptive compressive feature is presented. Combined with our previous two-stage local binocular bundle adjustment (TLBBA) stereo VO, the new system can obtain reliable positioning results in quasi-real time. Experimental results of challenging long-range tests show that our enhanced VO is much more accurate and robust than the traditional VO, thanks to the compressive panoramic landmarks built online.
Keywords: visual odometry; panoramic landmark; landmark matching; compressed sensing; adaptive compressive feature
17. Robust and efficient edge-based visual odometry (Cited by 1)
Authors: Feihu Yan, Zhaoxin Li, Zhong Zhou. Computational Visual Media (SCIE, EI, CSCD), 2022, Issue 3, pp. 467-481 (15 pages).
Visual odometry, which aims to estimate relative camera motion between sequential video frames, has been widely used in the fields of augmented reality, virtual reality, and autonomous driving. However, it is still quite challenging for state-of-the-art approaches to handle low-texture scenes. In this paper, we propose a robust and efficient visual odometry algorithm that directly utilizes edge pixels to track the camera pose. In contrast to direct methods, we choose the reprojection error to construct the optimization energy, which can effectively cope with illumination changes. The distance transform map built upon edge detection for each frame is used to improve tracking efficiency. A novel weighted edge alignment method together with sliding window optimization is proposed to further improve the accuracy. Experiments on public datasets show that the method is comparable to state-of-the-art methods in terms of tracking accuracy, while being faster and more robust.
Keywords: visual odometry (VO); edge structure; distance transform; low-texture
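The distance transform map mentioned above turns edge alignment into a table lookup: each pixel stores its distance to the nearest edge pixel, so the alignment error of a projected edge point is read directly instead of searching for correspondences. A brute-force sketch on a tiny invented grid (real systems use a linear-time two-pass algorithm):

```python
# Brute-force Euclidean distance transform of a binary edge map.
# dt[y][x] is the distance from pixel (y, x) to the nearest edge pixel,
# which serves directly as an edge-alignment cost during pose tracking.

def distance_transform(edges):
    """edges: 2-D list of 0/1 values; returns per-pixel nearest-edge distance."""
    pts = [(y, x) for y, row in enumerate(edges)
           for x, v in enumerate(row) if v]
    h, w = len(edges), len(edges[0])
    return [[min(((y - py) ** 2 + (x - px) ** 2) ** 0.5 for py, px in pts)
             for x in range(w)] for y in range(h)]

# A short horizontal edge in a 3x4 image.
edges = [[0, 0, 0, 0],
         [0, 1, 1, 0],
         [0, 0, 0, 0]]
dt = distance_transform(edges)
```

Precomputing `dt` once per frame means every candidate pose can be scored by summing lookups at the projected edge points, which is the efficiency gain the abstract refers to.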
18. M2C-GVIO: motion manifold constraint aided GNSS-visual-inertial odometry for ground vehicles (Cited by 1)
Authors: Tong Hua, Ling Pei, Tao Li, Jie Yin, Guoqing Liu, Wenxian Yu. Satellite Navigation (EI, CSCD), 2023, Issue 1, pp. 77-91, I0003 (16 pages).
Visual-Inertial Odometry (VIO) has been developed from Simultaneous Localization and Mapping (SLAM) as a low-cost and versatile sensor fusion approach and has attracted increasing attention in ground vehicle positioning. However, VIOs usually show degraded performance in challenging environments and degenerate motion scenarios. In this paper, we propose a ground vehicle-based VIO algorithm built on the Multi-State Constraint Kalman Filter (MSCKF) framework. Based on a unified motion manifold assumption, we derive the measurement model of manifold constraints, including velocity, rotation, and translation constraints. Then we present a robust filter-based algorithm dedicated to ground vehicles, whose key is real-time manifold noise estimation and adaptive measurement update. Besides, GNSS position measurements are loosely coupled into our approach, where the transformation between the GNSS and VIO frames is optimized online. Finally, we theoretically analyze the system observability matrix and observability measures. Our algorithm is tested in both simulation and on public datasets including the Brno Urban dataset and the KAIST Urban dataset. We compare the performance of our algorithm with classical VIO algorithms (MSCKF, VINS-Mono, R-VIO, ORB_SLAM3) and GVIO algorithms (GNSS-MSCKF, VINS-Fusion). The results demonstrate that our algorithm is more robust than the other compared algorithms, showing competitive position accuracy and computational efficiency.
Keywords: sensor fusion; visual-inertial odometry; motion manifold constraint
19. A robust RGB-D visual odometry with moving object detection in dynamic indoor scenes
Authors: Xianglong Zhang, Haiyang Yu, Yan Zhuang. IET Cyber-Systems and Robotics (EI), 2023, Issue 1, pp. 79-88 (10 pages).
Simultaneous localisation and mapping (SLAM) is the basis for many robotic applications. As the front end of SLAM, visual odometry is mainly used to estimate the camera pose. In dynamic scenes, classical methods deteriorate in the presence of dynamic objects and cannot achieve satisfactory results. In order to improve the robustness of visual odometry in dynamic scenes, this paper proposes a dynamic region detection method based on RGB-D images. Firstly, all feature points on the RGB image are classified as dynamic or static using a triangle constraint and the epipolar geometric constraint successively. Meanwhile, the depth image is clustered using the K-Means method. The classified feature points are mapped to the clustered depth image, and a dynamic or static label is assigned to each cluster according to the number of dynamic feature points. Subsequently, a dynamic region mask for the RGB image is generated based on the dynamic clusters in the depth image, and the feature points covered by the mask are all removed. The remaining static feature points are applied to estimate the camera pose. Finally, experimental results are provided to demonstrate the feasibility and performance.
Keywords: dynamic indoor scenes; moving object detection; RGB-D; SLAM; visual odometry
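The epipolar geometric constraint used above to flag dynamic points states that a static-scene match (x1, x2) in homogeneous pixel coordinates satisfies x2^T F x1 = 0 for the fundamental matrix F between the two frames. A toy sketch with an illustrative F (pure sideways camera translation, identity intrinsics; the threshold is invented):

```python
# Toy epipolar test for dynamic-point detection. For a pure translation
# t = (1, 0, 0) with K = I, the fundamental matrix is the skew-symmetric
# matrix [t]_x, and the epipolar residual reduces to the vertical
# displacement of the match: static points stay on horizontal epipolar
# lines, while points on moving objects drift off them.

F = [[0, 0, 0],
     [0, 0, -1],
     [0, 1, 0]]   # [t]_x for t = (1, 0, 0)

def epipolar_residual(x1, x2):
    """|x2^T F x1| for homogeneous points x1, x2."""
    fx1 = [sum(F[i][j] * x1[j] for j in range(3)) for i in range(3)]
    return abs(sum(x2[i] * fx1[i] for i in range(3)))

def is_dynamic(x1, x2, tol=0.05):
    return epipolar_residual(x1, x2) > tol

static_pt = ((0.3, 0.4, 1.0), (0.5, 0.4, 1.0))   # slid along the epipolar line
dynamic_pt = ((0.3, 0.4, 1.0), (0.5, 0.7, 1.0))  # drifted off the line
```

In practice F is estimated from the matches themselves (e.g. with RANSAC), and a residual threshold like the one above separates the static background from candidate moving-object points before clustering.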
20. An RGB-D Camera Based Visual Positioning System for Assistive Navigation by a Robotic Navigation Aid (Cited by 7)
Authors: He Zhang, Lingqiu Jin, Cang Ye. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2021, Issue 8, pp. 1389-1400 (12 pages).
There are about 253 million people with visual impairment worldwide. Many of them use a white cane and/or a guide dog as the mobility tool for daily travel. Despite decades of effort, electronic navigation aids that can replace the white cane are still a work in progress. In this paper, we propose an RGB-D camera based visual positioning system (VPS) for real-time localization of a robotic navigation aid (RNA) in an architectural floor plan for assistive navigation. The core of the system is the combination of a new 6-DOF depth-enhanced visual-inertial odometry (DVIO) method and a particle filter localization (PFL) method. DVIO estimates the RNA's pose using the data from an RGB-D camera and an inertial measurement unit (IMU). It extracts the floor plane from the camera's depth data and tightly couples the floor plane, the visual features (with and without depth data), and the IMU's inertial data in a graph optimization framework to estimate the device's 6-DOF pose. Due to the use of the floor plane and depth data from the RGB-D camera, DVIO has better pose estimation accuracy than the conventional VIO method. To reduce the accumulated pose error of DVIO for navigation in a large indoor space, we developed the PFL method to locate the RNA in the floor plan. PFL leverages the geometric information of the architectural CAD drawing of an indoor space to further reduce the error of the DVIO-estimated pose. Based on the VPS, an assistive navigation system is developed for the RNA prototype to assist a visually impaired person in navigating a large indoor space. Experimental results demonstrate that: 1) the DVIO method achieves better pose estimation accuracy than the state-of-the-art VIO method and performs real-time pose estimation (18 Hz pose update rate) on an UP Board computer; 2) PFL reduces the DVIO-accrued pose error by 82.5% on average and allows for accurate wayfinding (endpoint position error ≤ 45 cm) in large indoor spaces.
Keywords: assistive navigation; pose estimation; robotic navigation aid (RNA); simultaneous localization and mapping; visual-inertial odometry; visual positioning system (VPS)