Journal Articles
25 articles found (page 1 of 2, 20 per page)
1. Domain-Invariant Similarity Activation Map Contrastive Learning for Retrieval-Based Long-Term Visual Localization (cited: 2)
Authors: Hanjiang Hu, Hesheng Wang, Zhe Liu, Weidong Chen. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2022, Issue 2, pp. 313-328.
Abstract: Visual localization is a crucial component in mobile robot and autonomous driving applications. Image retrieval is an efficient and effective technique in image-based localization methods. Due to the drastic variability of environmental conditions, e.g., illumination changes, retrieval-based visual localization is severely affected and becomes a challenging problem. In this work, a general architecture is first formulated probabilistically to extract domain-invariant features through multi-domain image translation. Then, a novel gradient-weighted similarity activation mapping (Grad-SAM) loss is incorporated for finer localization with high accuracy. We also propose a new adaptive triplet loss to boost the contrastive learning of the embedding in a self-supervised manner. The final coarse-to-fine image retrieval pipeline is implemented as the sequential combination of models with and without the Grad-SAM loss. Extensive experiments validate the effectiveness of the proposed approach on the CMU-Seasons dataset, and the strong generalization ability of our approach is verified on the RobotCar dataset using models pre-trained on the urban parts of CMU-Seasons. Our performance is on par with or even outperforms the state-of-the-art image-based localization baselines at medium or high precision, especially under challenging environments with illumination variance, vegetation, and night-time images. Moreover, real-site experiments validate the efficiency and effectiveness of the coarse-to-fine strategy for localization.
Keywords: deep representation learning; place recognition; visual localization
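The adaptive triplet loss mentioned in the abstract can be illustrated with a minimal sketch. The abstract does not give the exact adaptation rule, so the margin scaling below (`scale * d_pos`) is an illustrative assumption, not the paper's formulation:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.5):
    """Standard triplet loss: pull the positive pair together and
    push the negative at least `margin` further away."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

def adaptive_triplet_loss(anchor, positive, negative, scale=1.0):
    """Hypothetical adaptive variant: the margin grows with the current
    positive distance, penalising hard positives more strongly."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    margin = scale * d_pos  # assumed adaptation rule, not from the paper
    return max(0.0, d_pos - d_neg + margin)
```

In both cases the loss is zero once the negative embedding is sufficiently farther from the anchor than the positive, which is what drives the contrastive separation of place embeddings.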
2. Autonomous map query: robust visual localization in urban environments using Multilayer Feature Graph (cited: 1)
Authors: 李海丰, Wang Hongpeng, Liu Jingtai. High Technology Letters (EI, CAS), 2015, Issue 1, pp. 31-38.
Abstract: When a vehicle travels in urban areas, onboard global positioning system (GPS) signals may be obstructed by high-rise buildings and thereby cannot provide accurate positions. It is proposed to perform localization by registering ground images to a 2D building boundary map generated from aerial images. A multilayer feature graph (MFG) is employed to model building facades from the ground images; MFG was reported in previous work to facilitate robot scene understanding in urban areas. By constructing the MFG, the 2D/3D positions of features can be obtained, including line segments, ideal lines, and all primary vertical planes. Finally, a voting-based, feature-weighted localization method is developed based on MFGs and the 2D building boundary map. The proposed method has been implemented and validated in physical experiments, in which the algorithm achieved an overall localization accuracy of 2.2 m, better than commercial GPS operating in open environments.
Keywords: visual localization; urban environment; multilayer feature graph (MFG); voting-based method
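The voting-based, feature-weighted idea can be sketched in a few lines. The map-cell representation and per-feature weights below are assumptions for illustration; the paper's actual vote space and weighting scheme are not given in the abstract:

```python
def weighted_vote(votes):
    """Accumulate weighted votes cast by matched features and return the
    candidate location (here an abstract map cell) with the highest total.
    `votes` is an iterable of (cell, weight) pairs."""
    tally = {}
    for cell, weight in votes:
        tally[cell] = tally.get(cell, 0.0) + weight
    return max(tally, key=tally.get)
```

Each matched MFG feature would cast a vote for the map positions consistent with it, with more reliable features (e.g., long vertical planes) given larger weights; the winning cell is the localization estimate.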
3. Clustering Reference Images Based on Covisibility for Visual Localization
Authors: Sangyun Lee, Junekoo Kang, Hyunki Hong. Computers, Materials & Continua (SCIE, EI), 2023, Issue 5, pp. 2705-2725.
Abstract: In feature-based visual localization for small-scale scenes, local descriptors are used to estimate the camera pose of a query image. For large and ambiguous environments, learning-based hierarchical networks that employ local as well as global descriptors to reduce the search space of database images into a smaller set of reference views have been introduced. However, since global descriptors are generated from visual features, reference images containing some of these features may be erroneously selected. To address this limitation, this paper proposes two clustering methods based on how often features appear as well as their covisibility. For both approaches, the scene is represented by voxels whose size and number are computed according to the size of the scene and the number of available 3D points. In the first approach, a voxel-based histogram representing highly reoccurring scene regions is generated from reference images, and mean shift is employed to group the most highly reoccurring voxels into place clusters based on their spatial proximity. In the second approach, a graph representing the covisibility-based relationships of voxels is built. Local matching is performed within the reference image clusters, and perspective-n-point is employed to estimate the camera pose. Experimental results showed that camera pose estimation using the proposed approaches was more accurate than that of previous methods.
Keywords: visual localization; deep learning; voxel representation; clustering; covisibility; mean shift; graph structure
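The mean-shift grouping of voxels by spatial proximity can be sketched as follows. This is a toy implementation: the Gaussian kernel and the merge radius (equal to the bandwidth) are assumptions, not the paper's exact parameterization:

```python
import numpy as np

def mean_shift(points, bandwidth=1.0, iters=50):
    """Minimal mean shift: each point is repeatedly moved to the
    Gaussian-weighted mean of all points; modes closer than the
    bandwidth are then merged into a single cluster centre."""
    shifted = points.astype(float).copy()
    for _ in range(iters):
        for i, p in enumerate(shifted):
            w = np.exp(-np.sum((points - p) ** 2, axis=1) / (2 * bandwidth ** 2))
            shifted[i] = (w[:, None] * points).sum(axis=0) / w.sum()
    centres = []
    for p in shifted:
        if not any(np.linalg.norm(p - c) < bandwidth for c in centres):
            centres.append(p)
    return np.array(centres)
```

Applied to the centres of highly reoccurring voxels, each returned mode corresponds to one place cluster of reference images.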
4. Method for Visual Localization of Oil and Gas Wellhead Based on Distance Function of Projected Features
Authors: Ying Xie, Xiang-Dong Yang, Zhi Liu, Shu-Nan Ren, Ken Chen. International Journal of Automation and Computing (EI, CSCD), 2017, Issue 2, pp. 147-158.
Abstract: A localization method based on a distance function of projected features is presented to address the accuracy reduction or failure caused by occlusion and by blurring due to smog in vision-based localization of a target oil and gas wellhead (OGWH). Firstly, the target OGWH is modeled as a cylinder with a marker, and a vector with a redundant parameter is used to describe its pose. Secondly, the explicit mapping between this pose vector and the projected features is derived. Then, a 2D-point-to-feature distance function is proposed, together with its derivative. Finally, based on this distance function and its derivative, an algorithm is proposed to estimate the pose of the target OGWH directly from the 2D image information, and the validity of the method is verified by both synthetic-data and real-image experiments. The results show that the method accomplishes localization under occlusion and blurring, and that its anti-noise ability is good, especially at noise ratios below 70%.
Keywords: robot vision; visual localization; 3D object localization; model-based pose estimation; distance function of projected features; nonlinear least squares; random sample consensus (RANSAC)
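The pose estimator above minimises a 2D-point-to-feature distance function by nonlinear least squares. A generic Gauss-Newton loop of the kind such a method relies on can be sketched as below; the caller supplies the residual (the distance function) and its Jacobian (the derivative the paper derives):

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=50):
    """Generic Gauss-Newton loop for nonlinear least squares: repeatedly
    linearise the residual and solve the normal equations J^T J dx = -J^T r."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        x = x - np.linalg.solve(J.T @ J, J.T @ r)
    return x
```

Any differentiable residual/Jacobian pair works; for the wellhead case the residual would measure distances from detected 2D points to the projected cylinder-and-marker features as a function of the redundant pose vector.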
5. APM-SLAM: Visual localization for fixed routes with tightly coupled a priori map
Authors: Linsong Xue, Qi Luo, Kai Zhang. Journal of Intelligent and Connected Vehicles, 2025, Issue 2, pp. 55-69.
Abstract: Localization along fixed routes is a fundamental function for transportation applications, including patrol vehicles, shuttles, buses, and even passenger vehicles. To achieve accurate and reliable localization, we propose a tightly coupled A Priori Map Simultaneous Localization and Mapping (APM-SLAM) system. APM-SLAM provides a comprehensive, heterogeneous framework encompassing both mapping and localization. The mapping stage leverages Global Navigation Satellite System (GNSS)-aided Structure from Motion (SfM) to establish reliable a priori maps with coarse- and fine-level components. The localization process integrates coarse-to-fine matching with maximum a posteriori (MAP) probability estimation to refine pose accuracy. By incorporating deep-learning-based features and point descriptors, the system remains robust even under significant visual variation. Unlike traditional map-based approaches, APM-SLAM models the a priori map's point structures as probabilistic distributions and incorporates them into the optimization process. Extensive experiments on public datasets demonstrate the superiority of our method in both mapping precision and localization accuracy, achieving decimeter-level translation precision. Ablation studies further validate the effectiveness of each component. This work contributes to building maps and exploiting a priori information for localization simultaneously.
Keywords: visual localization; state estimation; intelligent vehicle; pre-built map; visual simultaneous localization and mapping (SLAM)
6. ReLoc: Indoor Visual Localization with Hierarchical Sitemap and View Synthesis (cited: 3)
Authors: Hui-Xuan Wang, Jing-Liang Peng, Shi-Yi Lu, Xin Cao, Xue-Ying Qin, Chang-He Tu. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2021, Issue 3, pp. 494-507.
Abstract: Indoor visual localization, i.e., 6-degree-of-freedom camera pose estimation for a query image with respect to a known scene, is gaining increased attention driven by the rapid progress of applications such as robotics and augmented reality. However, drastic visual discrepancies between an onsite query image and prerecorded indoor images pose a significant challenge. In this paper, based on the key observation that planar surfaces such as floors or walls are ubiquitous in indoor scenes, we propose a novel system incorporating geometric information to address the issues of using only pixelated images. Through the system implementation, we contribute a hierarchical structure consisting of pre-scanned images and a point cloud, as well as a distilled representation of the planar-element layout extracted from the original dataset. A view-synthesis procedure is designed to generate synthetic images that complement the sparsely sampled dataset. Moreover, a global image descriptor based on image statistics, called block mean, variance, and color (BMVC), is employed alongside a traditional convolutional neural network (CNN) descriptor to speed up candidate pose identification. Experimental results on a popular benchmark demonstrate that the proposed method outperforms state-of-the-art approaches in visual localization validity and accuracy.
Keywords: visual localization; planar surface; statistical information; view synthesis
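A block mean/variance/colour statistic of the kind the BMVC descriptor is built on can be sketched directly. The 4x4 grid and the per-channel mean-then-variance concatenation below are assumptions about the layout; the paper's exact construction is not given in the abstract:

```python
import numpy as np

def block_stats_descriptor(img, grid=4):
    """Split an H x W x 3 image into grid x grid blocks and concatenate
    each block's per-channel mean and variance into one global vector."""
    h, w, _ = img.shape
    feats = []
    for by in range(grid):
        for bx in range(grid):
            block = img[by * h // grid:(by + 1) * h // grid,
                        bx * w // grid:(bx + 1) * w // grid]
            feats.extend(block.mean(axis=(0, 1)))  # 3 channel means
            feats.extend(block.var(axis=(0, 1)))   # 3 channel variances
    return np.array(feats)
```

Such a statistic is cheap to compute and compare, which is why it can pre-filter candidate poses before the heavier CNN descriptor is applied.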
7. A high precision visual localization sensor and its working methodology for an indoor mobile robot (cited: 2)
Authors: Feng-yu Zhou, Xian-feng Yuan, Yang Yang, Zhi-fei Jiang, Chen-lei Zhou. Frontiers of Information Technology & Electronic Engineering (SCIE, EI, CSCD), 2016, Issue 4, pp. 365-374.
Abstract: To overcome the shortcomings of existing robot localization sensors, such as low accuracy and poor robustness, a high precision visual localization system based on infrared-reflective artificial markers is designed and described in detail in this paper. First, the hardware of the localization sensor is developed. Second, we design a novel kind of infrared-reflective artificial marker whose characteristics can be extracted from the acquired infrared image. In addition, a confidence calculation method for marker identification is proposed to obtain probabilistic localization results. Finally, autonomous localization of the robot is achieved by computing the relative pose between the robot and the artificial marker with the perspective-3-point (P3P) visual localization algorithm. Numerous experiments and practical applications show that the designed sensor system is immune to interference from illumination and observation-angle changes. The precision of the sensor is ±1.94 cm in position and ±1.64° in orientation, which fully satisfies the localization-precision requirements of an indoor mobile robot.
Keywords: mobile robot; localization sensor; visual localization; infrared-reflective marker; embedded system
8. Scene Visual Perception and AR Navigation Applications (cited: 2)
Authors: LU Ping, SHENG Bin, SHI Wenzhe. ZTE Communications, 2023, Issue 1, pp. 81-88.
Abstract: With the rapid popularization of mobile devices and the wide application of various sensors, scene perception methods on mobile devices occupy an important position in location-based services such as navigation and augmented reality (AR). The development of deep learning technologies has greatly improved machines' visual perception of scenes. This paper introduces the basic framework of scene visual perception, the related technologies, and the specific process applied to AR navigation, and proposes directions for future technology development. An application (app) is designed to improve the effectiveness of AR navigation. The app comprises three modules: navigation map generation, a cloud navigation algorithm, and the client design. The navigation map generation tool works offline; the cloud stores the navigation map and provides navigation algorithms to the terminal; and the terminal performs local real-time positioning and AR path rendering.
Keywords: 3D reconstruction; image matching; visual localization; AR navigation; deep learning
9. Research on Visual Autonomous Navigation Indoor for Unmanned Aerial Vehicle
Authors: 张洋, 吕强, 林辉灿, 马建业. Journal of Shanghai Jiaotong University (Science) (EI), 2017, Issue 2, pp. 252-256.
Abstract: The aim of this paper is to study visual autonomous navigation of an unmanned aerial vehicle (UAV) in an indoor, global positioning system (GPS)-denied environment. The UAV platform and its autonomous navigation flight control system are designed and built. The principles of visual localization and mapping algorithms are studied, and the visual localization is designed and improved according to the characteristics of the UAV platform. Experimental results demonstrate that the UAV platform can accomplish vision-based autonomous localization, navigation, and mapping in unknown environments.
Keywords: unmanned aerial vehicle (UAV); visual localization; autonomous navigation; robot operating system
10. Bearing-only Visual SLAM for Small Unmanned Aerial Vehicles in GPS-denied Environments (cited: 7)
Authors: Chao-Lei Wang, Tian-Miao Wang, Jian-Hong Liang, Yi-Cheng Zhang, Yi Zhou. International Journal of Automation and Computing (EI, CSCD), 2013, Issue 5, pp. 387-396.
Abstract: This paper presents a hierarchical simultaneous localization and mapping (SLAM) system for a small unmanned aerial vehicle (UAV) using the output of an inertial measurement unit (IMU) and the bearing-only observations from an onboard monocular camera. A homography-based approach is used to calculate the motion of the vehicle in 6 degrees of freedom from image feature matches. This visual measurement is fused with the inertial outputs by an indirect extended Kalman filter (EKF) for attitude and velocity estimation. Then another EKF is employed to estimate the position of the vehicle and the locations of the features in the map. Both simulations and experiments are carried out to test the performance of the proposed system. Comparison with referential global positioning system / inertial navigation system (GPS/INS) navigation indicates that the proposed SLAM can provide reliable and stable state estimation for small UAVs in GPS-denied environments.
Keywords: visual simultaneous localization and mapping (SLAM); bearing-only observation; inertial measurement unit; small unmanned aerial vehicles (UAVs); GPS-denied environment
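Both filters in the cascade rest on the standard Kalman predict/update cycle. A minimal linear sketch is below; the paper uses indirect EKFs with IMU dynamics, which the toy constant-velocity model in the usage note does not attempt to reproduce:

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Predict step: propagate state x and covariance P through model F."""
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, z, H, R):
    """Update step: correct the prediction with measurement z."""
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

In the paper's arrangement, the first (indirect) EKF would fuse homography-derived motion with IMU outputs for attitude and velocity, and the second EKF would carry the vehicle position and map feature locations in its state.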
11. Real-time Visual Odometry Estimation Based on Principal Direction Detection on Ceiling Vision (cited: 2)
Authors: Han Wang, Wei Mou, Gerald Seet, Mao-Hai Li, M.W.S. Lau, Dan-Wei Wang. International Journal of Automation and Computing (EI, CSCD), 2013, Issue 5, pp. 397-404.
Abstract: In this paper, we present a novel algorithm for odometry estimation based on ceiling vision. The main contribution is the introduction of principal direction detection, which greatly reduces the error accumulation found in most visual odometry estimation approaches. The principal direction is defined based on the fact that ceilings are filled with artificial vertical and horizontal lines, which can serve as a reference for the robot's current heading direction. The proposed approach operates in real time and performs well even under camera disturbance. A moving low-cost RGB-D camera (Kinect), mounted on a robot, is used to continuously acquire point clouds. Iterative closest point (ICP) is the common way to estimate the current camera position by registering the currently captured point cloud to the previous one; however, its performance suffers from the data-association problem or requires pre-alignment information. The proposed principal direction detection does not rely on data-association knowledge: using it, two point clouds are properly pre-aligned, and ICP is then used to fine-tune the transformation parameters and minimize the registration error. Experimental results demonstrate the performance and stability of the proposed system under disturbance in real time. Several indoor tests show that the proposed visual odometry estimation method can significantly improve the accuracy of simultaneous localization and mapping (SLAM).
Keywords: visual odometry; ego-motion; principal direction; ceiling vision; simultaneous localization and mapping (SLAM)
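Principal direction detection can be sketched as a circular average of detected line orientations folded to a 90° period, since the ceiling's vertical and horizontal lines are either parallel or perpendicular to each other. The fold-by-four trick below is one standard way to average orientations with 90° symmetry, not necessarily the paper's exact method:

```python
import numpy as np

def principal_direction(angles_deg):
    """Dominant orientation of a set of line angles, exploiting the
    90-degree symmetry of a rectilinear ceiling grid: fold angles into
    [0, 90), map onto the unit circle (x4), and take the circular mean."""
    folded = np.deg2rad(np.asarray(angles_deg, dtype=float) % 90.0) * 4.0
    mean = np.arctan2(np.sin(folded).mean(), np.cos(folded).mean())
    return (np.rad2deg(mean) / 4.0) % 90.0
```

The change in this dominant orientation between frames gives the robot's heading increment directly, without any data association between the two line sets.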
12. High dimension feature extraction based visualized SOM fault diagnosis method and its application in p-xylene oxidation process (cited: 1)
Authors: 田颖, 杜文莉, 钱锋. Chinese Journal of Chemical Engineering (SCIE, EI, CAS, CSCD), 2015, Issue 9, pp. 1509-1517.
Abstract: Purified terephthalic acid (PTA) is an important chemical raw material. P-xylene (PX) is transformed into terephthalic acid (TA) through an oxidation process, and TA is refined to produce PTA. The PX oxidation reaction is a complex process involving a three-phase reaction of gas, liquid, and solid. To monitor the process, improve product quality, and visualize the fault type clearly, a fault diagnosis method based on the self-organizing map (SOM) and a high-dimensional feature extraction method, local tangent space alignment (LTSA), is proposed. In this method, LTSA reduces the dimensionality while preserving the topology, and the SOM distinguishes the various states on its output map. Monitoring results of the PX oxidation reaction process indicate that LTSA-SOM can detect and visualize the fault type well.
Keywords: self-organizing map; local tangent space alignment; fault diagnosis; visualization; p-xylene oxidation
13. Research on Visual SLAM for Indoor Dynamic Scenes Based on Deep Learning (cited: 2)
Authors: 郑晓华, 耿鑫雷, 邓浩坤. 测绘地理信息 (CSCD), 2024, Issue 2, pp. 51-55.
Abstract: Visual simultaneous localization and mapping (VSLAM) has been a major research direction in robotics and computer vision in recent years, but current mainstream algorithms are designed mainly for static environments: when moving objects are present in the scene, localization accuracy and stability degrade considerably. To address this problem, a VSLAM front-end dynamic feature point rejection algorithm is proposed that combines inertial measurement unit (IMU) integration with YOLOv4 semantic segmentation. The YOLOv4 network semantically segments the image to identify potentially moving objects; IMU integration is then combined with the segmentation result to compute the reprojection error of the feature points inside each detection box, identifying and rejecting the moving feature points in the environment. The algorithm was validated on the TUM Visual-Inertial Dataset. The results show that, in indoor scenes containing moving objects, it effectively rejects the moving objects and significantly improves the localization accuracy and stability of the SLAM system.
Keywords: visual simultaneous localization and mapping (VSLAM); feature points; dynamic objects; deep learning
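The IMU-aided reprojection-error test for rejecting dynamic feature points can be sketched as follows. The camera intrinsics K and the IMU-predicted pose (R, t) are assumed inputs, and the 3-pixel threshold is illustrative, not taken from the paper:

```python
import numpy as np

def dynamic_point_mask(pts_3d, pts_2d, K, R, t, thresh_px=3.0):
    """Flag feature points as dynamic when their reprojection error under
    the IMU-predicted camera pose (R, t) exceeds a pixel threshold.
    pts_3d: N x 3 landmark positions; pts_2d: N x 2 observed pixels."""
    proj = (K @ (R @ pts_3d.T + t[:, None])).T   # project into the image
    proj = proj[:, :2] / proj[:, 2:3]            # perspective division
    err = np.linalg.norm(proj - pts_2d, axis=1)  # pixel-space error
    return err > thresh_px                       # True = likely dynamic
```

Points inside a YOLOv4 detection box that fail this test would be excluded from the front-end pose optimization.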
14. Multi-task learning and joint refinement between camera localization and object detection
Authors: Junyi Wang, Yue Qi. Computational Visual Media (SCIE, EI, CSCD), 2024, Issue 5, pp. 993-1011.
Abstract: Visual localization and object detection both play important roles in various tasks. In many indoor application scenarios where some detected objects have fixed positions, the two techniques work closely together. However, few researchers consider the two tasks simultaneously, because of a lack of datasets and the little attention paid to such environments. In this paper, we explore multi-task network design and joint refinement of detection and localization. To address the dataset problem, we construct a medium-sized indoor scene of an aviation exhibition hall through a semi-automatic process. The dataset provides localization and detection information and is publicly available at https://drive.google.com/drive/folders/1U28zk0N4_I0dbzkqyIAK1A15k9oUKOjI?usp=sharing for benchmarking localization and object detection tasks. Targeting this dataset, we have designed a multi-task network, JLDNet, based on YOLO v3, that outputs a target point cloud and object bounding boxes. For dynamic environments, the detection branch also promotes the perception of dynamics. JLDNet includes image feature learning, point feature learning, feature fusion, detection construction, and point cloud regression. Moreover, object-level bundle adjustment is used to further improve localization and detection accuracy. To test JLDNet and compare it to other methods, we have conducted experiments on 7 static scenes, our constructed dataset, and the dynamic TUM RGB-D and Bonn datasets. Our results show state-of-the-art accuracy for both tasks and demonstrate the benefit of addressing them jointly.
Keywords: visual localization; object detection; joint optimization; multi-task learning
15. Improved vision-only localization method for mobile robots in indoor environments
Authors: Gang Huang, Liangzhu Lu, Yifan Zhang, Gangfu Cao, Zhe Zhou. Autonomous Intelligent Systems, 2024, Issue 1, pp. 153-165.
Abstract: To solve the problem that mobile robots need to adjust their pose for accurate operation after reaching the target point in an indoor environment, a localization method based on scene modeling and recognition has been designed. First, an offline scene model is created from both handcrafted and semantic features. Then, scene recognition and location calculation are performed online against the offline scene model. To improve the accuracy of recognition and location calculation, this paper proposes a method that integrates semantic feature matching with handcrafted feature matching. Based on the scene recognition result, an accurate location is obtained through metric calculation with 3D information. The experimental results show that scene recognition accuracy exceeds 90% and the average localization error is less than 1 m, demonstrating that localization performs better with the proposed improved method.
Keywords: deep learning; mobile robot; scene recognition; visual localization
16. A New Monocular Vision Measurement Method to Estimate 3D Positions of Objects on Floor (cited: 3)
Authors: Ling-Yi Xu, Zhi-Qiang Cao, Peng Zhao, Chao Zhou. International Journal of Automation and Computing (EI, CSCD), 2017, Issue 2, pp. 159-168.
Abstract: A new visual measurement method is proposed to estimate the three-dimensional (3D) position of an object on the floor using a single camera. The camera, fixed on a robot, is inclined with respect to the floor. A measurement model parameterized by the camera's extrinsic parameters, such as its height and pitch angle, is described. A single image of a chessboard pattern placed on the floor suffices to calibrate these extrinsic parameters once the camera's intrinsic parameters have been calibrated. The position of an object on the floor can then be computed with the measurement model. Furthermore, the height of an object can be calculated from paired points on a vertical line that share the same position on the floor. Compared to conventional methods that estimate positions only on the plane, this method obtains full 3D positions. Indoor experiments verify the accuracy and validity of the proposed method.
Keywords: visual measurement; calibration; localization; position estimation; monocular vision
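The core of such a measurement model is a ray-plane intersection: a pixel is back-projected through the camera and intersected with the floor. A minimal sketch follows; the axis conventions (z forward, y down, positive pitch tilting the optical axis toward the floor) and the specific parameterization are assumptions for illustration:

```python
import numpy as np

def floor_point(u, v, fx, fy, cx, cy, height, pitch_rad):
    """Back-project pixel (u, v) onto the floor.  Camera frame: z forward,
    y down; the optical axis is pitched down by pitch_rad and the optical
    centre sits `height` metres above the floor.  Returns (lateral X,
    forward Z) of the floor point in metres."""
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])   # pixel ray
    c, s = np.cos(pitch_rad), np.sin(pitch_rad)
    R = np.array([[1.0, 0.0, 0.0],    # pitch about the x-axis so that
                  [0.0,   c,   s],    # pitch_rad > 0 tilts the optical
                  [0.0,  -s,   c]])   # axis towards the floor
    d = R @ ray
    if d[1] <= 0:
        raise ValueError("ray does not intersect the floor")
    t = height / d[1]                 # floor plane: y = height (y down)
    p = t * d
    return p[0], p[2]
```

As a sanity check, a camera 1 m above the floor pitched down 45° sees the floor along its optical axis at a forward distance of height/tan(45°) = 1 m, which the function reproduces for the principal point.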
17. A monocular visual SLAM system augmented by a lightweight deep local feature extractor using an in-house, low-cost LiDAR-camera integrated device
Authors: Jing Li, Chenhui Shi, Jun Chen, Ruisheng Wang, Zhiyuan Yang, Fan Zhang, Jianhua Gong. International Journal of Digital Earth (SCIE, EI), 2022, Issue 1, pp. 1929-1946.
Abstract: Simultaneous localization and mapping (SLAM) has been widely used in emergency response, self-driving, and city-scale 3D mapping and navigation. Recent deep-learning-based feature point extractors have demonstrated superior performance in dealing with complex environmental challenges (e.g., extreme lighting) where traditional extractors struggle. In this paper, we improve the robustness and accuracy of a monocular visual SLAM system under various complex scenes by adding a deep-learning-based visual localization thread as an augmentation to the visual SLAM framework. In this thread, our feature extractor, an efficient lightweight deep neural network, performs absolute pose and scale estimation in real time against a highly accurate georeferenced prior map database, built at 20 cm geometric accuracy with our in-house, low-cost LiDAR and camera integrated device. The closed-loop error of our SLAM system with and without this enhancement is 1.03 m and 18.28 m, respectively, and the scale estimation of the monocular visual SLAM is significantly improved (0.01 versus 0.98). In addition, a novel camera-LiDAR calibration workflow is provided for large-scale 3D mapping. This paper demonstrates the application and research potential of deep-learning-based visual SLAM with image and LiDAR sensors.
Keywords: deep local features; lightweight network; visual localization; SLAM; LiDAR
18. Evaluation of a Pointwise Local Visual Pattern Exploration Method
Authors: Matthew O. Ward, Elke A. Rundensteiner, Carolina Ruiz. Tsinghua Science and Technology (SCIE, EI, CAS), 2012, Issue 4, pp. 429-439.
Abstract: Sensitivity analysis is a powerful method for discovering the significant factors that contribute to understanding the interaction between variables in multivariate datasets. A number of sensitivity analysis methods fall into the class of local analysis, in which the sensitivity is defined as the partial derivatives of a target variable with respect to a group of independent variables. In a recent paper, we presented a novel pointwise local pattern exploration system for visual sensitivity analysis. Using this system, analysts are able to explore local patterns and the sensitivity at individual data points, revealing the relationships between a focal point and its neighbors. In this paper we present several evaluations of the system, including case studies with real datasets, user studies on the effectiveness of the visualizations and interactions, and a detailed description of one user's experience.
Keywords: knowledge discovery; sensitivity analysis; local pattern; visualization; evaluation
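The local sensitivity the system visualizes, partial derivatives of a target variable at an individual data point, can be sketched with central differences (the numerical scheme here is illustrative; the paper's system may compute derivatives differently, e.g., from a fitted local model):

```python
def local_sensitivity(f, x, eps=1e-6):
    """Pointwise sensitivity: central-difference partial derivatives of
    target function f with respect to each input variable at point x."""
    grads = []
    for i in range(len(x)):
        hi, lo = list(x), list(x)
        hi[i] += eps
        lo[i] -= eps
        grads.append((f(hi) - f(lo)) / (2 * eps))
    return grads
```

Plotting these per-variable derivatives at a focal point, and comparing them with those of its neighbors, is exactly the kind of local pattern the evaluated system lets analysts explore.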
19. RB-SLAM: Visual SLAM based on rotated BEBLID feature point description
Authors: Fan Xinyue, Wu Kai, Chen Shuai. The Journal of China Universities of Posts and Telecommunications (EI, CSCD), 2023, Issue 3, pp. 1-13.
Abstract: The extraction and description of image features are very important for visual simultaneous localization and mapping (V-SLAM). A rotated boosted efficient binary local image descriptor (BEBLID) SLAM (RB-SLAM) algorithm based on improved oriented FAST and rotated BRIEF (ORB) feature description is proposed in this paper, addressing the low localization accuracy and time efficiency of the current ORB-SLAM3 algorithm. First, it replaces the feature point descriptor of the original ORB with the BEBLID to enhance the expressiveness and description efficiency of the image. Second, it adds rotational invariance to the BEBLID using the orientation information of the feature points, and selects the rotationally stable bits in the BEBLID to further strengthen that invariance. Finally, it retrains the binary visual dictionary based on the BEBLID to reduce the cumulative error of V-SLAM and improve the dictionary's loading speed. Experiments show that dictionary loading efficiency is improved by more than 10 times, and that RB-SLAM improves trajectory accuracy by 24.75% on the TUM dataset and 26.25% on the EuRoC dataset compared with ORB-SLAM3.
Keywords: visual simultaneous localization and mapping (V-SLAM); oriented FAST and rotated BRIEF (ORB); feature extraction; boosted efficient binary local image descriptor (BEBLID); rotational invariance
20. FilterGNN: Image feature matching with cascaded outlier filters and linear attention
Authors: Jun-Xiong Cai, Tai-Jiang Mu, Yu-Kun Lai. Computational Visual Media (SCIE, EI, CSCD), 2024, Issue 5, pp. 873-884.
Abstract: The cross-view matching of local image features is a fundamental task in visual localization and 3D reconstruction. This study proposes FilterGNN, a transformer-based graph neural network (GNN), aiming to improve the matching efficiency and accuracy of visual descriptors. Based on high matching sparseness and coarse-to-fine covisible-area detection, FilterGNN utilizes cascaded optimal graph-matching filter modules to dynamically reject outlier matches. Moreover, we successfully adapted linear attention in FilterGNN with post-instance-normalization support, which reduces the complexity of complete graph learning from O(N²) to O(N). Experiments show that FilterGNN requires only 6% of the time cost and 33.3% of the memory cost of SuperGlue at a large-scale input size, and achieves competitive performance in various tasks such as pose estimation, visual localization, and sparse 3D reconstruction.
Keywords: image matching; transformer; linear attention; visual localization; sparse reconstruction
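The O(N²)-to-O(N) reduction works by replacing the softmax with a positive kernel feature map, so the key-value summary can be aggregated once and reused for every query instead of materialising an N x N score matrix. A numpy sketch is below; the `relu+1` feature map is an assumption for illustration, since FilterGNN's exact kernel is not given in the abstract:

```python
import numpy as np

def linear_attention(Q, K, V):
    """Kernelised attention: with a positive feature map phi, the output
    phi(Q) (phi(K)^T V) / (phi(Q) sum_j phi(K_j)) never builds the
    N x N score matrix, so cost is linear in sequence length N."""
    phi = lambda X: np.maximum(X, 0.0) + 1.0  # assumed positive feature map
    Qf, Kf = phi(Q), phi(K)
    KV = Kf.T @ V                  # (d, d_v) summary, built in O(N)
    Z = Qf @ Kf.sum(axis=0)        # per-query normaliser, also O(N)
    return (Qf @ KV) / Z[:, None]
```

Because the feature map is positive, each output row is still a convex combination of the value rows, preserving the attention-as-weighted-average interpretation while cutting both time and memory, which is consistent with the reported savings over SuperGlue.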