Journal Articles
377 articles found.
1. Imaging simulation and analysis of attitude jitter effect on topographic mapping for lunar orbiter stereo optical cameras (Cited: 1)
Authors: CHEN Chen, TONG Xiao-Hua, LIU Shi-Jie, YE Zhen, HUANG Chao-Wei, WU Hao, ZHANG Han. 《红外与毫米波学报》 (Journal of Infrared and Millimeter Waves), SCIE, EI, CAS, CSCD, PKU Core. 2024, Issue 5, pp. 722-730 (9 pages).
The geometric accuracy of topographic mapping with high-resolution remote sensing images is inevitably affected by orbiter attitude jitter. Therefore, it is necessary to conduct preliminary research on the stereo mapping camera carried by a lunar orbiter before launch. In this work, an imaging simulation method considering attitude jitter is presented. The impact of attitude jitter on terrain undulation is analyzed by simulating jitter in each of the three attitude angles. The proposed simulation method is based on the rigorous sensor model, using a lunar digital elevation model (DEM) and orthoimage as reference data. The orbit and attitude of the lunar stereo mapping camera are simulated while considering attitude jitter, and two-dimensional simulated stereo images are generated according to the position and attitude of the orbiter in a given orbit. Experimental analyses were conducted on DEMs generated from the simulated stereo images. The simulation results demonstrate that the proposed method ensures imaging efficiency without losing topographic mapping accuracy, and the effect of attitude jitter on the stereo mapping accuracy of the simulated images is analyzed through a DEM comparison.
Keywords: topographic mapping; lunar orbiter; stereo camera; attitude jitter; imaging simulation; digital elevation model
2. Bio-inspired Vision Mapping and Localization Method Based on Reprojection Error Optimization and Asynchronous Kalman Fusion
Authors: Shijie Zhang, Tao Tang, Taogang Hou, Yuxuan Huang, Xuan Pei, Tianmiao Wang. Chinese Journal of Mechanical Engineering, 2025, Issue 4, pp. 266-281 (16 pages).
Bio-inspired visual systems have garnered significant attention in robotics owing to their energy efficiency, rapid dynamic response, and environmental adaptability. Among these, event cameras, bio-inspired sensors that asynchronously report pixel-level brightness changes called 'events', stand out because of their ability to capture dynamic changes with minimal energy consumption, making them suitable for challenging conditions such as low light or high-speed motion. However, current mapping and localization methods for event cameras depend primarily on point and line features, which struggle in sparse or low-feature environments and are unsuitable for static or slow-motion scenarios. We address these challenges by proposing a bio-inspired vision mapping and localization method using active LED markers (ALMs) combined with reprojection error optimization and asynchronous Kalman fusion. Our approach replaces traditional features with ALMs, enabling accurate tracking under dynamic and low-feature conditions. Global mapping accuracy improves significantly by minimizing the reprojection error, with corner errors reduced from 16.8 cm to 3.1 cm after 400 iterations. The asynchronous Kalman fusion of multiple camera pose estimates from ALMs ensures precise localization with high temporal efficiency. The method achieves a mean translation error of 0.078 m and a rotational error of 5.411° when evaluating dynamic motion, and supports an output rate of 4.5 kHz while maintaining high localization accuracy in UAV spiral-flight experiments. These results demonstrate the potential of the proposed approach for real-time robot localization in challenging environments.
Keywords: bio-inspired vision; event camera; mapping; localization
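The asynchronous Kalman fusion the abstract describes (pose measurements arriving from several sources at irregular times) can be illustrated with a minimal 1-D constant-velocity sketch; the noise values, class name, and measurement schedule below are invented for illustration, not the paper's implementation.

```python
# Minimal 1-D constant-velocity Kalman filter that fuses position
# measurements arriving asynchronously from several sources.
# All noise parameters are illustrative assumptions.

class AsyncKalman1D:
    def __init__(self, q=0.01):
        self.x = [0.0, 0.0]                 # state: [position, velocity]
        self.P = [[1.0, 0.0], [0.0, 1.0]]   # state covariance
        self.q = q                          # process-noise intensity
        self.t = 0.0                        # time of last update

    def _predict(self, t):
        dt = t - self.t
        self.t = t
        x, v = self.x
        self.x = [x + v * dt, v]
        p00, p01 = self.P[0]
        p10, p11 = self.P[1]
        # P = F P F^T + Q for F = [[1, dt], [0, 1]]
        self.P = [[p00 + dt * (p10 + p01) + dt * dt * p11 + self.q * dt,
                   p01 + dt * p11],
                  [p10 + dt * p11,
                   p11 + self.q * dt]]

    def update(self, t, z, r):
        """Fuse one position measurement z (variance r) taken at time t."""
        self._predict(t)
        s = self.P[0][0] + r        # innovation variance (H = [1, 0])
        k0 = self.P[0][0] / s       # Kalman gains
        k1 = self.P[1][0] / s
        y = z - self.x[0]           # innovation
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        p00, p01 = self.P[0]
        self.P = [[(1 - k0) * p00, (1 - k0) * p01],
                  [self.P[1][0] - k1 * p00, self.P[1][1] - k1 * p01]]

kf = AsyncKalman1D()
# Two sources report at different, interleaved times (asynchronous fusion).
for t, z, r in [(0.1, 1.02, 0.04), (0.13, 0.98, 0.09), (0.2, 1.05, 0.04)]:
    kf.update(t, z, r)
```

Each measurement is absorbed at its own timestamp via a predict-then-update step, which is what lets sources with different rates be fused without synchronization.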
3. Image Motion Compensation of Off-Axis TMA Three-Line Array Aerospace Mapping Cameras
Authors: Yongchang Li, Pengluo Lu, Longxu Jin, Guoning Li, Yinan Wu. Journal of Harbin Institute of Technology (New Series), EI, CAS, 2016, Issue 6, pp. 80-89 (10 pages).
To enhance the image motion compensation accuracy of off-axis three-mirror anastigmatic (TMA) three-line array aerospace mapping cameras, a new method of image motion velocity field modeling is proposed. First, based on the imaging principle of mapping cameras, an analytical expression for the image motion velocity of off-axis TMA three-line array aerospace mapping cameras is derived from the established coordinate systems and the principles of attitude dynamics. Then, the case of a three-line array mapping camera is studied: the focal-plane image motion velocity fields of the forward-view, nadir-view, and backward-view cameras are simulated, and optimization schemes for image motion velocity matching and drift angle matching are formulated according to the simulation results. Finally, the method is verified with a dynamic imaging experimental system. The results indicate that when image motion compensation for the nadir-view camera is conducted using the proposed velocity field model, the line pairs of target images at the Nyquist frequency are clear and distinguishable. Under the constraint that the modulation transfer function (MTF) drops by no more than 5%, when the horizontal frequencies of the forward-view and backward-view cameras are adjusted uniformly according to the proposed velocity matching scheme, the time delay integration (TDI) stages reach at most 6; when more than 6 TDI stages are used, the three cameras undergo horizontal frequency adjustment independently. When the proposed drift angle matching scheme is adopted for uniform drift angle adjustment, the number of TDI stages does not exceed 81. The experimental results demonstrate the validity and accuracy of the proposed image motion velocity field model and matching optimization schemes, providing a reliable basis for on-orbit image motion compensation of aerospace mapping cameras.
Keywords: three-line array mapping camera; off-axis TMA; image motion compensation; image motion velocity; drift angle
4. Rapid Texture Mapping from Image Sequences for Building Geometry Models (Cited: 1)
Authors: ZHANG Zuxun, WU Jun, ZHANG Jianqing. Geo-Spatial Information Science, 2003, Issue 3, pp. 8-15, 31 (9 pages).
An effective approach to texture mapping for building models, based on digital photogrammetric theory, is proposed. Easily acquired image sequences from a digital video camera on a helicopter are used as the texture resource, and the correspondence between a space edge in the building geometry model and its line feature in the image sequences is determined semi-automatically. Experimental results in the production of three-dimensional data for car navigation show an attractive future in both efficiency and effect.
Keywords: image sequences; video camera on helicopter; 3D reconstruction; texture mapping; digital photogrammetry
5. Composite-mask GAN based on refined optical flow and disparity map for SLAM visual odometry
Authors: JI Yuehui, JIANG Jingwei, LIU Junjie, SONG Yu, GAO Qiang. Optoelectronics Letters, 2025, Issue 12, pp. 730-736 (7 pages).
Although deep learning methods have been widely applied in SLAM visual odometry (VO) over the past decade with impressive improvements, accuracy remains limited in complex dynamic environments. In this paper, a composite-mask-based generative adversarial network (CMGAN) is introduced to predict camera motion and binocular depth maps. Specifically, a perceptual generator is constructed to obtain the corresponding disparity map and optical flow between two neighboring frames. Then, an iterative pose improvement strategy is proposed to improve the accuracy of pose estimation. Finally, a composite mask is embedded in the discriminator to sense structural deformation in the synthesized virtual image, thereby strengthening the overall structural constraints of the network model, improving the accuracy of camera pose estimation, and reducing drift in the VO. Detailed quantitative and qualitative evaluations on the KITTI dataset show that the proposed framework outperforms existing conventional, supervised-learning, and unsupervised depth VO methods, providing better results in both pose estimation and depth estimation.
Keywords: SLAM visual odometry (VO); generative adversarial network; composite mask; optical flow; disparity map; camera motion estimation; binocular depth map; perceptual generator; iterative pose improvement strategy
6. A Multi-Scene Real-Time Dense Mapping Method Based on the Azure Kinect DK Depth Camera
Authors: 宣婧婧, 刘波, 刘华, 陈乾, 刘媛媛. 《江西科学》 (Jiangxi Science), 2026, Issue 1, pp. 118-125 (8 pages).
To improve the real-time dense mapping capability of an ORB-SLAM3 system equipped with a depth camera, a multi-scene real-time dense mapping method based on the Azure Kinect DK depth camera is proposed, with the Azure Kinect DK serving as the front-end sensor of the ORB-SLAM3 system. The method designs and implements a ROS interface file for the Azure Kinect DK depth camera and adds a dense point-cloud construction module, enabling the construction and intuitive display of real-time dense maps across multiple scenes. Experimental results show that, in scenes with loop closure, compared with the RGB-D mapping mode, the average root-mean-square error of length measurements in the dense maps built in three different scenes decreases from 0.189 to 0.084, and that of angle measurements decreases from 2.40 to 0.68. In scenes without loop closure, compared with the RGB-D mapping mode without loop closure, the average root-mean-square error of length measurements decreases from 0.309 to 0.187.
Keywords: Azure Kinect DK depth camera; multi-scene; dense map; map construction method
7. A Point-Cloud Color Texture Mapping Method Based on Joint Camera/Line-Laser Calibration
Authors: 蒲怀安, 季彦均, 唐进元, 陈龙庭, 宋碧芸. 《仪器仪表学报》 (Chinese Journal of Scientific Instrument), PKU Core, 2026, Issue 1, pp. 287-299 (13 pages).
To address the problem that high-precision joint camera/line-laser calibration in industrial 3D color reconstruction relies on complex, high-precision targets, a high-precision calibration scheme based on a multi-feature, weakly constrained calibration block is proposed, consisting of a multi-modal feature extraction and registration framework plus a two-stage optimization of the calibration model. First, circular-hole centers are introduced on the calibration block and detected jointly with corner points: corners are located at sub-pixel accuracy via geometric constraints, and circle centers are located precisely via two-stage ellipse fitting. Next, a 3D feature-point reconstruction method under pose-adaptive projection is proposed, which converts 3D localization into 2D detection and back-projects to reconstruct the 3D point cloud following a dimension-reduction, detection, dimension-raising pipeline, improving robustness to noise and pose variation. Finally, geometric priors enable unambiguous 2D-3D feature-point registration. Parameter solving adopts a two-stage linear-decomposition/nonlinear-reconstruction optimization: under single-frame conditions, an initial mapping matrix is estimated linearly from feature-point pairs; after the intrinsic and extrinsic parameters are separated by triangular-orthogonal decomposition, lens distortion parameters are introduced for global nonlinear refinement, improving the global optimality and generalization of the solution. Experimental results show a normalized mean reprojection error of 0.84 pixels, corresponding to a physical distance error of 0.0194 mm; compared with the baseline method, the two error metrics drop by roughly 65% and 61%, respectively. Calibration results are consistent under three lighting conditions with small error fluctuation, indicating strong robustness. Ablation experiments also confirm that circle-center features are markedly more stable than corners under perspective transformation. In a gear tooth-surface color reconstruction task, point-cloud color texture mapping based on the obtained mapping matrix reproduces microscopic impressions and scratches on the tooth surface with high fidelity, verifying engineering applicability.
Keywords: camera/line laser; joint calibration; weakly constrained calibration block; pose-adaptive projection; two-stage optimization; color texture mapping
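The abstract quotes a normalized mean reprojection error of 0.84 pixels; the metric itself is simple to state. A minimal sketch, assuming an ideal pinhole model without the paper's distortion terms (the intrinsics and points below are made-up illustration values, not the paper's calibration data):

```python
import math

def project(point, fx, fy, cx, cy):
    """Project a 3-D point in camera coordinates to pixel coordinates."""
    x, y, z = point
    return (fx * x / z + cx, fy * y / z + cy)

def mean_reprojection_error(points_3d, observed_px, fx, fy, cx, cy):
    """Mean Euclidean distance (pixels) between projected and observed points."""
    total = 0.0
    for p, (u_obs, v_obs) in zip(points_3d, observed_px):
        u, v = project(p, fx, fy, cx, cy)
        total += math.hypot(u - u_obs, v - v_obs)
    return total / len(points_3d)

fx = fy = 800.0
cx, cy = 320.0, 240.0
pts = [(0.1, 0.0, 1.0), (0.0, 0.2, 2.0), (-0.1, -0.1, 1.5)]
obs = [project(p, fx, fy, cx, cy) for p in pts]          # noise-free detections
err = mean_reprojection_error(pts, obs, fx, fy, cx, cy)  # → 0.0
```

With real detections, `err` is the quantity the calibration's nonlinear refinement stage minimizes over the camera parameters.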
8. Geometric Calibration and Optimization of a Tunnel Multi-Camera System Based on AprilTag
Authors: 姜莹, 傅梦麒, 闻平, 胡翰. 《测绘工程》 (Engineering of Surveying and Mapping), 2026, Issue 2, pp. 23-33, 39 (12 pages).
In tunnel lining defect inspection, multi-camera systems are commonly used to image tunnel cross-sections in sections, achieving complete coverage of the lining surface with large-field-of-view, high-resolution imagery. However, the cement lining surface has monotonous texture, and the overlap between adjacent camera fields of view is small, so traditional calibration methods are unstable in low-overlap, weak-feature scenes and cannot meet high-precision calibration requirements. To address this, a high-precision joint intrinsic/extrinsic calibration method for multi-camera systems based on visual markers is proposed. A calibration environment containing AprilTag control points is constructed to establish geometric correspondences between images and the spatial control coordinate system; based on redundant multi-camera observations, a joint optimization model with spatial constraints is built to solve the extrinsic parameters of all cameras globally, uniformly recovering the spatial relationships of the cameras in a common perception coordinate system. Compared with traditional pairwise calibration, this method effectively overcomes the accumulation of unstable calibration errors in low-overlap regions and improves the overall consistency and robustness of extrinsic-parameter estimation. Comparative experiments and validation on data from real tunnel scenes show that the method accurately recovers multi-camera extrinsic parameters, providing a high-precision geometric basis for complete image stitching and spatial reconstruction of tunnel linings.
Keywords: tunnel inspection system; multi-camera calibration; spatial constraints; parameter optimization
9. A Mapping Correction Method for Digital Twin Scenes Based on a Camera Standard Band
Authors: 吴志铭, 方静雯, 李欣伟. 《厦门理工学院学报》 (Journal of Xiamen University of Technology), 2026, Issue 1, pp. 49-58 (10 pages).
Construction sites are highly complex and dynamic scenes. To reduce the mapping errors caused by camera distortion in digital twin scenes, a mapping correction method based on a camera standard band is proposed, which corrects the target spatial scene captured by the camera to provide high-precision conditions for the digital twin scene. The method first uses the camera imaging principle to compute the specific range of the target scene least affected by distortion and dynamically adjusts the camera pitch angle to form a standard-band region; it then converts the 2D pixel coordinates of the target scene into 3D coordinates via perspective projection; finally, an adjacent-point deviation-correction algorithm based on Procrustes analysis corrects the converted scene coordinates. Experimental results show that, after correction by this method, the error improvement rate of the scene's point coordinates exceeds 99.17% along the longitudinal (X) axis and 94.79% along the transverse (Z) axis, effectively reducing the impact of camera distortion on spatial mapping accuracy and improving the accuracy of scene coordinate computation.
Keywords: digital twin; camera standard band; mapping correction; perspective projection; Procrustes analysis; dynamic monitoring of construction scenes
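The Procrustes-based deviation correction mentioned above aligns measured point positions to reference positions by an optimal rotation and translation. A minimal 2-D sketch (no scaling; the closed-form rotation angle avoids an SVD; the point sets are invented, not the paper's data):

```python
import math

# Minimal 2-D orthogonal Procrustes alignment: find the rotation and
# translation mapping point set A onto reference set B. Data invented.

def procrustes_2d(A, B):
    n = len(A)
    # Center both sets on their centroids.
    ax = sum(p[0] for p in A) / n; ay = sum(p[1] for p in A) / n
    bx = sum(p[0] for p in B) / n; by = sum(p[1] for p in B) / n
    # Closed-form optimal rotation angle in 2-D.
    num = den = 0.0
    for (x, y), (u, v) in zip(A, B):
        x, y, u, v = x - ax, y - ay, u - bx, v - by
        num += x * v - y * u
        den += x * u + y * v
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    # Rotate about A's centroid, then translate onto B's centroid.
    return [(c * (x - ax) - s * (y - ay) + bx,
             s * (x - ax) + c * (y - ay) + by) for x, y in A]

B = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
# A is B rotated by 30 degrees and shifted -- exactly recoverable.
t = math.radians(30)
A = [(math.cos(t) * x - math.sin(t) * y + 2.0,
      math.sin(t) * x + math.cos(t) * y - 1.0) for x, y in B]
corrected = procrustes_2d(A, B)   # ≈ B up to floating-point error
```

Because A was constructed as a rigid transform of B, the recovered alignment reproduces B exactly up to floating-point error; with noisy measurements it returns the least-squares best rigid fit.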
10. Non-line-of-sight imaging via scalable scattering mapping using TOF cameras
Authors: YUJIE FANG, JUNMING WU, SHENGMING ZHONG, XIAOFENG ZHANG, YULEI AN, XIA WANG, BINGHUA SU, KEJUN WANG. Photonics Research, 2025, Issue 8, pp. 2172-2183 (12 pages).
The technique of imaging or tracking objects outside the field of view (FOV) through a reflective relay surface, usually called non-line-of-sight (NLOS) imaging, has been a popular research topic in recent years. Although NLOS imaging can be achieved through methods such as detector design, optical-path inverse-operation algorithm design, or deep learning, challenges such as high cost, complex algorithms, and poor results remain. This study introduces a simple algorithm-based rapid depth imaging device, the continuous-wave time-of-flight range imaging camera (CW-TOF camera), to address the decoupled imaging challenge of differential scattering characteristics in an object-relay surface by quantifying the differential scattering signatures through statistical analysis of light propagation paths. A scalable scattering mapping (SSM) theory is proposed to explain the degradation process of clear images, and high-quality 3D imaging of NLOS objects is achieved through a data-driven approach. To verify the effectiveness of the proposed algorithm, experiments were conducted on an optical platform and in real-world scenarios. The objects on the optical platform include plaster sculptures and plastic letters, while relay surfaces consist of polypropylene (PP) plastic boards, acrylic boards, and standard Lambertian diffusers. In the real-world scenarios, the object is clothing, with relay surfaces including painted doors and white plaster walls. Imaging data were collected for different combinations of objects and relay surfaces for training and testing, totaling 210,000 depth images. By subjective evaluation, the reconstruction of NLOS images in both the laboratory and the real world is excellent; thus, the approach can realize NLOS imaging in harsh natural scenes and advances the practical application of NLOS imaging.
Keywords: scalable scattering mapping; continuous-wave time-of-flight camera; non-line-of-sight imaging; deep learning; object tracking; statistical analysis
11. A Sensor-based SLAM Algorithm for Camera Tracking in Virtual Studio (Cited: 1)
Authors: Mansour Moniri, Claude C. Chibelushi. International Journal of Automation and Computing, EI, 2008, Issue 2, pp. 152-162 (11 pages).
This paper addresses a sensor-based simultaneous localization and mapping (SLAM) algorithm for camera tracking in a virtual studio environment. Traditional camera tracking methods in virtual studios are vision-based or sensor-based. However, the chroma keying process in virtual studios requires color cues, such as a blue background, to segment foreground objects to be inserted into images and videos. Chroma keying limits the application of vision-based tracking methods in virtual studios, since the background cannot provide enough feature information. Furthermore, conventional sensor-based tracking approaches suffer from jitter, drift, or expensive computation due to the characteristics of the individual sensor systems. Therefore, SLAM techniques from the mobile robot area are first investigated and adapted to camera tracking. Then, a sensor-based SLAM extension algorithm for two-dimensional (2D) camera tracking in a virtual studio is described, and a technique called map adjustment is proposed to increase the accuracy and efficiency of the algorithm. The feasibility and robustness of the algorithm are shown by experiments. The simulation results demonstrate that the sensor-based SLAM algorithm can satisfy the fundamental 2D camera tracking requirements in a virtual studio environment.
Keywords: simultaneous localization and mapping (SLAM); particle filter; chroma key; camera tracking
12. A Visual Navigation Method for Indoor Inspection Robots Based on Optimized RTAB-Map (Cited: 3)
Authors: 周加超, 葛动元, 丛佩超, 吕昆峰. 《广西科技大学学报》 (Journal of Guangxi University of Science and Technology), 2023, Issue 1, pp. 79-84 (6 pages).
When inspection robots perform autonomous navigation and monitoring in indoor scenes, the 3D depth maps built by visual simultaneous localization and mapping (SLAM) suffer from low real-time performance and degraded localization accuracy. To address this, a visual navigation method for inspection robots based on an RGB-D camera and an optimized RTAB-Map (real-time appearance-based mapping) algorithm is proposed. First, the algorithm is optimized by reconfiguring the point-cloud update frequency of RTAB-Map, and a dense point-cloud map is built. Then, the heuristic A* algorithm and the dynamic window approach (DWA) are used to plan global and local inspection paths, respectively, and adaptive Monte Carlo localization (AMCL) updates the robot's real-time pose. Finally, visual navigation tests are completed on the software and hardware platforms of a physical inspection robot. The results show that the optimized RTAB-Map algorithm uses slightly more memory at runtime but obtains a 3D depth map more consistent with the real environment, improving the accuracy and practicality of visual navigation to a certain extent.
Keywords: inspection robot; autonomous navigation; RGB-D camera; visual SLAM; RTAB-Map algorithm
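The global planner named in the abstract is heuristic A*; a compact version on a 4-connected occupancy grid with a Manhattan-distance heuristic sketches the idea (the grid below is an invented example, not the paper's map):

```python
import heapq

# Compact A* on a 4-connected occupancy grid with a Manhattan-distance
# heuristic -- the kind of global planner the abstract pairs with DWA.
# Grid layout is an invented example (1 = obstacle).

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, None)]   # (f, g, cell, parent)
    came_from, g_best = {}, {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:                  # already expanded
            continue
        came_from[cur] = parent
        if cur == goal:                       # reconstruct path
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_best.get((nr, nc), float("inf")):
                    g_best[(nr, nc)] = ng
                    heapq.heappush(open_set,
                                   (ng + h((nr, nc)), ng, (nr, nc), cur))
    return None                               # no path exists

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
path = astar(grid, (0, 0), (3, 3))   # shortest route around the walls
```

In a stack like the one described, a plan of this form becomes the global reference path, while DWA handles local velocity commands and obstacle avoidance along it.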
13. A New Method of Mosaicking Context Camera (CTX) Images for the Geomorphological Study of Martian Landscapes
Authors: Anil Chavan, Subham Sarkar, Adarsh Thakkar, Subhash Bhandari. Open Journal of Geology, 2021, Issue 8, pp. 373-380 (8 pages).
Various spacecraft and satellites from the world's leading space agencies have been exploring Mars constantly since 1970, capturing ever larger datasets for a better understanding of the red planet. In this paper, we propose a new method for making a mosaic of images from the Context Camera (CTX), a payload of the Mars Reconnaissance Orbiter (MRO) spacecraft. In this procedure, we used ERDAS Imagine as the image-processing tool for rectification and mosaicking, a new method of generating a mosaic from thousands of CTX images to visualize large-scale areas. The output product is applicable to the mapping of Martian geomorphological features, high-resolution 2D mapping of linear features, crater counting, and, to a certain extent, morphometric analysis.
Keywords: mosaicking; ERDAS Imagine; Context Camera (CTX) images; mapping
14. Analyzing the Impact of Scene Transitions on Indoor Camera Localization through Scene Change Detection in Real-Time
Authors: Muhammad S. Alam, Farhan B. Mohamed, Ali Selamat, Faruk Ahmed, AKM B. Hossain. Intelligent Automation & Soft Computing, 2024, Issue 3, pp. 417-436 (20 pages).
Real-time indoor camera localization is a significant problem in indoor robot navigation and surveillance systems. The scene can change during the image sequence, and this plays a vital role in the localization performance of robotic applications in terms of accuracy and speed. This research proposes a real-time indoor camera localization system based on a recurrent neural network that detects scene changes during the image sequence. An annotated image dataset trains the proposed system, which predicts the camera pose in real time. The system improves the localization performance of indoor cameras mainly by predicting the camera pose more accurately; it also recognizes scene changes during the sequence and evaluates their effects. The system achieves high accuracy and real-time performance. Scene change detection is performed using visual rhythm together with the proposed recurrent deep architecture, which carries out camera pose prediction and scene-change impact evaluation. Overall, this study proposes a novel real-time localization system for indoor cameras that detects scene changes and shows how they affect localization performance.
Keywords: camera pose estimation; indoor camera localization; real-time localization; scene change detection; simultaneous localization and mapping (SLAM)
15. Semantic-Map Relocalization with Majority Semantic Objects as Primary Features
Authors: 蒋林, 明祥宇, 汤勃, 万乐, 向贤宝, 雷斌, 郭宇飞. 《哈尔滨工程大学学报》 (Journal of Harbin Engineering University), PKU Core, 2025, Issue 2, pp. 363-373 (11 pages).
To address the inaccurate localization of the adaptive Monte Carlo localization (AMCL) algorithm in similar environments, long corridors, and environments that have changed, this paper proposes a semantic-map relocalization algorithm that takes majority semantic objects as primary features for global localization. The algorithm extracts the primary features of semantic objects from a pre-built 2D grid semantic map, and achieves global pre-localization by combining the camera observation model with an information table of primary semantic objects and their surrounding secondary semantic objects. The pre-localization result is then used to improve the particle weight update, raising the real-time performance of AMCL. The results show that the localization speed of the proposed algorithm improves over AMCL by 68.75% and 52.78% in similar indoor environments before and after environmental change, respectively, and by 65.96% and 53.13% in long-corridor environments before and after environmental change. Experiments verify that the algorithm improves particle convergence speed, robustness, and real-time performance.
Keywords: semantic map; primary features; camera; information table; global pre-localization; particles; adaptive Monte Carlo localization; localization speed
16. A Depth Optimization Method for SLAM Reconstruction Based on Feature Point-Cloud Registration (Cited: 1)
Authors: 曹学伟, 袁杰, 梁荣光. 《计算机工程与设计》 (Computer Engineering and Design), PKU Core, 2025, Issue 3, pp. 657-664 (8 pages).
To address the low accuracy and reliability of depth estimation in feature-based visual SLAM, a depth optimization method for SLAM reconstruction based on feature point-cloud registration is proposed. The SHOT feature descriptor extracts features from the 3D point cloud, and feature point-cloud registration replaces the traditional visual feature method for camera pose estimation. By defining a node error function, the point cloud built by the visual feature method and its ambiguous points are optimized, and a dense point-cloud model with texture is established. Simulation experiments on the TUM and ICL-NUIM datasets show that the method improves camera pose trajectory accuracy by 10% over traditional SLAM methods. The effectiveness of the method is further verified with a Kinect v2 RGB-D camera, producing indoor scene models with a degree of texture detail.
Keywords: point-cloud registration; camera pose estimation; depth optimization; dense mapping; simultaneous localization and mapping; point-cloud map; point-cloud features
17. Research on a Multi-Sensor Fusion SLAM System for Unmanned Vehicles
Authors: 吴文昊, 谷玉海. 《重庆理工大学学报(自然科学)》 (Journal of Chongqing University of Technology (Natural Science)), PKU Core, 2025, Issue 1, pp. 229-235 (7 pages).
To improve the obstacle-avoidance capability of unmanned vehicles and enable efficient automatic localization and path planning in the constructed map environment, a multi-sensor fusion SLAM system for unmanned vehicles is proposed. For obstacle monitoring, maps are built by fusing LiDAR and depth-camera information to obtain a more accurate occupancy grid. A kinematic model of a tracked differential chassis is built, and IMU data are fused to improve pose-estimation accuracy. Bayesian inference is analyzed and used at the decision level to effectively fuse the LiDAR and depth-camera data; a Kalman-filter-based algorithm dynamically adjusts the weights to fuse the posterior probabilities of the LiDAR and the camera, yielding the final occupancy-grid map information. Finally, the map is built from the fused data and autonomous navigation is realized. Comparative experiments show that the improved multi-sensor fusion mapping algorithm improves overall localization accuracy by 91.67%, real-time overall performance by 54.46%, and grid-map completeness by 6.59%.
Keywords: Bayesian algorithm; fusion mapping; LiDAR; depth camera; ROS2
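The decision-level fusion the abstract outlines, combining LiDAR and depth-camera posterior occupancy probabilities with adjustable weights, is often written in log-odds form. A minimal per-cell sketch, with fixed weights standing in as a simplified substitute for the paper's Kalman-filter-based dynamic weighting:

```python
import math

# Per-cell fusion of LiDAR and depth-camera occupancy probabilities in
# log-odds form. The fixed weights are an illustrative simplification.

def logit(p):
    """Probability -> log-odds."""
    return math.log(p / (1.0 - p))

def fuse_cell(p_lidar, p_camera, w_lidar=0.7, w_camera=0.3):
    """Weighted log-odds fusion of two posterior occupancy probabilities."""
    l = w_lidar * logit(p_lidar) + w_camera * logit(p_camera)
    return 1.0 / (1.0 + math.exp(-l))   # back to probability

# Both sensors fairly sure the cell is occupied -> fused estimate stays high.
p = fuse_cell(0.9, 0.8)
# Sensors disagree -> the higher-weighted LiDAR dominates.
q = fuse_cell(0.9, 0.2)
```

Working in log-odds keeps repeated updates additive and numerically stable, which is why occupancy-grid frameworks typically store cells in that form.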
18. A Streaming-Perception Visual Localization Method for Dynamic Scenes
Authors: 王佳慧, 来林静, 张磊. 《计算机辅助设计与图形学学报》 (Journal of Computer-Aided Design & Computer Graphics), PKU Core, 2025, Issue 6, pp. 961-972 (12 pages).
In dynamic scenes, visual simultaneous localization and mapping (SLAM) is usually combined with deep learning to improve localization accuracy. To address the latency introduced by deep learning methods at runtime, which prevents systems from meeting streaming-processing requirements, a streaming-perception localization method for visual SLAM in dynamic scenes is proposed. First, since traditional evaluation metrics consider only localization accuracy, a streaming evaluation metric is proposed that accounts for both localization accuracy and latency, accurately reflecting a system's streaming performance. Second, since traditional visual SLAM methods cannot achieve streaming processing, a streaming-perception visual localization method is proposed that combines multi-thread parallelism with camera pose prediction to produce continuous, stable camera pose output. Experiments on the BONN dataset and in real scenes show that the method effectively improves the streaming performance of deep-learning-based visual localization in dynamic scenes. Under the BONN dataset and the streaming evaluation, compared with DynaSLAM, the absolute trajectory error (APE), relative translation error (RPE_trans), and relative rotation error (RPE_angle) of the proposed method decrease by 80.438%, 56.180%, and 54.676%, respectively. Real-scene experiments show that the method yields camera trajectories consistent with reality.
Keywords: simultaneous localization and mapping; streaming processing; streaming performance; camera pose prediction
19. Research on Mobile Robot SLAM Based on Laser/RGB-D Camera Fusion
Authors: 安赫, 崔敏, 张鹏, 刘鹏. 《舰船电子工程》 (Ship Electronic Engineering), 2025, Issue 12, pp. 200-205 (6 pages).
To address the insufficient environmental description and low mapping accuracy that result from using a 2D LiDAR or an RGB-D camera alone during map construction, this study proposes a SLAM algorithm based on the fusion of 2D LiDAR and RGB-D camera information. First, joint calibration obtains the camera's intrinsic and extrinsic parameters and establishes the transformation between the laser data and the camera data. Second, temporal interpolation completes the laser frames under the corresponding visual frames to synchronize the two data streams. Finally, a weighted-error joint loss function is introduced to optimize mapping and localization. Experimental results in a feature-sparse long corridor and a complex indoor environment show significant improvements in mapping and localization over single-sensor SLAM: localization accuracy improves by 17% over laser SLAM, and fusion accuracy improves by 24% over visual SLAM.
Keywords: 2D LiDAR; RGB-D camera; information fusion; simultaneous localization and mapping (SLAM)
20. Visual Simultaneous Localization and Mapping Methods for Mobile Robots
Authors: 朱沛尧, 周海波, 张浩宇, 卢率, 魏仁哲. 《天津理工大学学报》 (Journal of Tianjin University of Technology), 2025, Issue 5, pp. 62-69 (8 pages).
Simultaneous localization and mapping (SLAM) provides mobile robots with a map and their own position without prior information, and has become the mainstream solution for autonomous navigation of mobile robots. Among SLAM variants, visual SLAM, which uses cameras as sensors, offers compact size, low cost, and high resolution. With deepening research into the SLAM problem, the field has produced rich results, but surveys of SLAM in visual settings remain insufficiently systematic. This paper first introduces the basic principles of visual SLAM, then reviews research methods from two perspectives, traditional visual SLAM and deep-learning-based visual SLAM, and compares them in terms of map types and characteristics, providing a reference for research on visual SLAM technology for mobile robots.
Keywords: mobile robot; simultaneous localization and mapping; survey; camera; deep learning