Journal Articles
234 articles found
1. Rotation Estimation for Mobile Robot Based on Single-axis Gyroscope and Monocular Camera (cited by 2)
Authors: Yang, Ke-Hu; Yu, Wen-Sheng; Ji, Xiao-Qiang. International Journal of Automation and Computing (EI), 2012, No. 3, pp. 292-298 (7 pages).
The rotation matrix estimation problem is a key issue for mobile robot localization, navigation, and control. Based on quaternion theory and epipolar geometry, an extended Kalman filter (EKF) algorithm is proposed to estimate the rotation matrix using a single-axis gyroscope and image point correspondences from a monocular camera. Experimental results show that the precision of the mobile robot's yaw angle estimated by the proposed EKF algorithm is much better than the results given by the image-only and gyroscope-only methods, demonstrating that the method is a preferable way to estimate rotation for autonomous mobile robot applications.
Keywords: rotation matrix estimation; quaternion; extended Kalman filter (EKF); monocular camera; gyroscope
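The abstract above fuses a single-axis gyroscope with monocular epipolar constraints in an EKF. As a minimal illustration of the prediction half of such a filter (not the paper's code; the function names and the Hamilton [w, x, y, z] convention are our assumptions), the following sketch integrates a yaw-rate reading into a quaternion:

```python
import numpy as np

def quat_mult(q, r):
    # Hamilton product of quaternions stored as [w, x, y, z]
    w0, x0, y0, z0 = q
    w1, x1, y1, z1 = r
    return np.array([w0*w1 - x0*x1 - y0*y1 - z0*z1,
                     w0*x1 + x0*w1 + y0*z1 - z0*y1,
                     w0*y1 - x0*z1 + y0*w1 + z0*x1,
                     w0*z1 + x0*y1 - y0*x1 + z0*w1])

def propagate(q, omega_z, dt):
    # first-order propagation for a single-axis (yaw) gyro rate [rad/s]
    dq = np.array([1.0, 0.0, 0.0, 0.5 * omega_z * dt])
    q = quat_mult(q, dq)
    return q / np.linalg.norm(q)   # renormalize, as an EKF would

q = np.array([1.0, 0.0, 0.0, 0.0])   # identity orientation
for _ in range(100):                  # 1 s of a 0.5 rad/s yaw rate
    q = propagate(q, 0.5, 0.01)
yaw = 2.0 * np.arctan2(q[3], q[0])    # recover yaw from the quaternion
```

In the paper's filter this prediction would then be corrected by the epipolar-geometry measurement model; that update step is omitted here.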
2. Monocular Camera and 3D Lidar Joint Calibration (cited by 1)
Authors: Zheng Xin; Wu Xiaojun. The Journal of China Universities of Posts and Telecommunications (EI, CSCD), 2020, No. 4, pp. 91-98 (8 pages).
Joint calibration of sensors is an important prerequisite in intelligent driving scene retrieval and recognition. A simple and efficient solution is proposed for automatic joint calibration and registration between a monocular camera and a 16-line lidar. The study is divided into two parts, single-sensor independent calibration and multi-sensor joint registration, in which a selected object in the world is used. The system associates the lidar coordinates with the camera coordinates. The lidar and the camera obtain, by appropriate algorithms, the normal vectors of the calibration plate and the point cloud data representing it. The iterative closest point (ICP) method is used for iterative refinement of the registration.
Keywords: calibration; registration; monocular camera; 16-line lidar; multi-sensor; ICP
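The registration step above relies on ICP. A minimal point-to-point ICP sketch (brute-force nearest neighbours plus an SVD/Kabsch fit; all names are illustrative, and a real lidar-camera pipeline would match against the calibration-plate cloud rather than a synthetic one) might look like:

```python
import numpy as np

def best_rigid_transform(P, Q):
    # Kabsch/SVD fit of R, t minimizing ||R @ p + t - q|| over paired rows
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                  # reflection-safe rotation
    return R, cq - R @ cp

def icp(P, Q, iters=20):
    # point-to-point ICP: brute-force nearest neighbours, then refit
    R_tot, t_tot, src = np.eye(3), np.zeros(3), P.copy()
    for _ in range(iters):
        d2 = ((src[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
        R, t = best_rigid_transform(src, Q[d2.argmin(axis=1)])
        src = src @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot

# synthetic check: recover a known small rotation and translation
rng = np.random.default_rng(0)
P = rng.normal(size=(200, 3))
ang = 0.05
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.05, -0.02, 0.03])
R_est, t_est = icp(P, P @ R_true.T + t_true)
```

ICP of this plain point-to-point form only converges from a reasonable initial alignment, which is why the paper pairs it with an independent single-sensor calibration first.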
3. AstroPose: Astronaut Pose Estimation Using a Monocular Camera during Extravehicular Activities (cited by 1)
Authors: LIU ZiBin; LI You; WANG ChunHui; LIU Liang; GUAN BangLei; SHANG Yang; YU QiFeng. Science China (Technological Sciences) (SCIE, EI, CAS, CSCD), 2024, No. 6, pp. 1933-1945 (13 pages).
With the completion of the Chinese space station, an increasing number of extravehicular activities will be executed by astronauts; such activities are regarded as among the most dangerous in human space exploration. To guarantee the safety of astronauts and the successful accomplishment of missions, it is vital to determine the pose of astronauts during extravehicular activities. This article presents a monocular vision-based pose estimation method for astronauts during extravehicular activities, making full use of the available observation resources. First, the camera is calibrated using objects of known structure, such as the spacesuit backpack or the circular handrail outside the space station. Subsequently, pose estimation is performed using the feature points on the spacesuit. The proposed methods are validated in both synthetic and semi-physical simulation experiments, demonstrating the high precision of the camera calibration and pose estimation. To further evaluate performance in real-world scenarios, we use image sequences of the Shenzhou-13 astronauts during extravehicular activities. The experiments validate that camera calibration and pose estimation can be accomplished solely with the existing observation resources, without additional complicated equipment. The motion parameters of astronauts lay the technological foundation for subsequent applications such as mechanical analysis, task planning, and ground training of astronauts.
Keywords: monocular camera; astronaut pose estimation; camera calibration
4. Monocular Visual Estimation for Autonomous Aircraft Landing Guidance in Unknown Structured Scenes
Authors: Zhuo ZHANG; Quanrui CHEN; Qiufu WANG; Xiaoliang SUN; Qifeng YU. Chinese Journal of Aeronautics, 2025, No. 9, pp. 365-382 (18 pages).
The autonomous landing guidance of fixed-wing aircraft in unknown structured scenes presents a substantial technological challenge, particularly regarding effective solutions for monocular visual relative pose estimation. This study proposes a novel airborne monocular visual estimation method based on structured scene features to address this challenge. First, a multitask neural network model is established for segmentation, depth estimation, and slope estimation on monocular images, and a comprehensive monocular-image three-dimensional information metric is designed, encompassing length, span, flatness, and slope. Subsequently, structured edge features are leveraged to filter candidate landing regions adaptively; using the three-dimensional information metric, the optimal landing region is accurately and efficiently identified. Finally, sparse two-dimensional key points are used to parameterize the optimal landing region for the first time, and high-precision relative pose estimation is achieved. Additional measurement information is introduced to provide autonomous landing guidance between the aircraft and the optimal landing region. Experimental results on both synthetic and real data demonstrate the effectiveness of the proposed method for monocular pose estimation in autonomous aircraft landing guidance in unknown structured scenes.
Keywords: automatic landing; image processing; monocular camera; pose measurement; unknown structured scene
5. A Calibration Method for 3D Point Clouds and 2D Images Based on the LM Algorithm
Authors: 吴龙; 陶奕帆; 杨旭; 徐璐; 陈淑玉. 《现代电子技术》 (Peking University Core), 2026, No. 1, pp. 59-65 (7 pages).
Insufficient calibration accuracy between lidar and camera introduces errors into the spatial alignment of lidar point clouds with camera images and degrades subsequent feature matching, object detection, and 3D reconstruction. To address this, a calibration method based on lidar 3D point clouds and monocular-camera 2D images is proposed, aiming at accurate detection of large-scale objects and 3D environment reconstruction. The method first accumulates multiple point-cloud frames to obtain a relatively dense measurement and detects feature corners in the image with a corner-detection algorithm; it then solves for the parameters with partial least squares (PLS); finally, the LM iterative algorithm minimizes the reprojection error to improve calibration accuracy. Calibration results show that the SPAAM algorithm reduces reprojection error by 8.6% relative to the classical method, while the proposed method reduces it by nearly 38.2%, verifying the accuracy and effectiveness of the proposed method.
Keywords: lidar; monocular camera; calibration method; point cloud data; partial least squares (PLS); LM iterative algorithm
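The final step of the method above minimizes reprojection error with the LM algorithm. A hedged sketch of that idea using SciPy's Levenberg-Marquardt solver on a synthetic pinhole camera (the intrinsics, point cloud, and pose below are invented for illustration, not taken from the paper):

```python
import numpy as np
from scipy.optimize import least_squares

K = np.array([[800.0, 0.0, 320.0],      # assumed pinhole intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

def rodrigues(r):
    # axis-angle vector -> rotation matrix
    th = np.linalg.norm(r)
    if th < 1e-12:
        return np.eye(3)
    k = r / th
    Kx = np.array([[0.0, -k[2], k[1]],
                   [k[2], 0.0, -k[0]],
                   [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(th) * Kx + (1.0 - np.cos(th)) * Kx @ Kx

def project(pose, X):
    # pose = [rx, ry, rz, tx, ty, tz]; project 3-D points to pixels
    R, t = rodrigues(pose[:3]), pose[3:]
    x = (X @ R.T + t) @ K.T
    return x[:, :2] / x[:, 2:3]

def residuals(pose, X, uv):
    return (project(pose, X) - uv).ravel()

rng = np.random.default_rng(1)
X = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 6.0], size=(30, 3))
true_pose = np.array([0.10, -0.05, 0.02, 0.1, 0.2, 0.3])
uv = project(true_pose, X)                         # ideal observations
fit = least_squares(residuals, true_pose + 0.05,   # perturbed initial guess
                    args=(X, uv), method='lm')     # Levenberg-Marquardt
rms = np.sqrt(np.mean(fit.fun ** 2))               # final reprojection RMS [px]
```

In the paper, the PLS solution plays the role of the initial guess that LM then refines; here a simple perturbation of the true pose stands in for it.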
6. A Multi-scale 3D Object Detection Algorithm for Autonomous Driving
Authors: 刘嫚; 陈晓楠. 《现代电子技术》 (Peking University Core), 2026, No. 1, pp. 141-147 (7 pages).
Monocular 3D object detection in autonomous driving scenes is a challenging task; in complex road environments, scale differences between targets and occlusion easily cause false or missed detections. To address this, a monocular 3D object detection algorithm based on feature fusion and enhancement is proposed. First, FasterNet+ is constructed as the backbone network; optimized embedding layers and block structures strengthen the extraction of fine detail and improve overall network performance. Second, a multi-dimensional adaptive feature-fusion module adaptively selects and fuses high- and low-level features, addressing the loss of small-object information in high-level features and the lack of context in low-level features. Finally, a feature-enhancement attention module highlights specific target regions, further improving localization and classification accuracy. Experiments on the nuScenes dataset show mAP and NDS gains of 0.038 and 0.035 over the baseline; the method effectively detects objects of different types and scales, exhibits stronger robustness, and offers a new approach to multi-scale object detection in autonomous driving.
Keywords: autonomous driving; monocular camera; 3D object detection; multi-scale perception; feature fusion; attention mechanism; machine vision
7. High-precision Calibration of Camera and IMU on Manipulator for Bio-inspired Robotic System (cited by 1)
Authors: Yinlong Zhang; Wei Liang; Sichao Zhang; Xudong Yuan; Xiaofang Xia; Jindong Tan; Zhibo Pang. Journal of Bionic Engineering (SCIE, EI, CSCD), 2022, No. 2, pp. 299-313 (15 pages).
Inspired by the box jellyfish, which has a distributed and complementary perceptive system, we equip a manipulator with a camera and an inertial measurement unit (IMU) to perceive ego motion and the surrounding unstructured environment. Before robot perception, reliable and high-precision calibration between the camera, IMU, and manipulator is a critical prerequisite. This paper introduces a novel calibration system. First, we correlate the spatial relationship between the sensing units and the manipulator in a joint framework. Second, the manipulator's moving trajectory is elaborately designed in a spiral pattern that enables full excitation of yaw-pitch-roll rotations and x-y-z translations in a repeatable and consistent manner. The calibration has been evaluated on our collected visual-inertial-manipulator dataset. Systematic comparisons and analysis indicate the consistency, precision, and effectiveness of the proposed calibration method.
Keywords: bio-inspired robotic system; monocular camera; IMU; manipulator; calibration
8. A Pose Measurement Method for the Docking of Large Rotary Components
Authors: 辛成龙; 周江; 宫久路; 王泽鹏. 《探测与控制学报》 (Peking University Core), 2026, No. 1, pp. 107-114 (8 pages).
To address the difficulty of measuring pose parameters during the automatic docking of large rotary components, a pose measurement method for such docking is proposed. The method designs a vision measurement system constrained by the geometry of the measured object, the measurement requirements, and the measurement scene. Based on the geometric properties of the imaged features, a two-stage feature extraction algorithm is designed, overcoming slow and low-accuracy feature detection. On this basis, a pose estimation algorithm based on feature completion is proposed, built on multi-camera pose constraints and the imaging properties of spatial circles, achieving accurate measurement of the pose parameters. Experimental results show a mean position measurement error below 2 mm and a mean attitude measurement error below 0.05°; the algorithm offers good accuracy and robustness and meets the requirements of automatic docking.
Keywords: large rotary components; multi-camera constraints; monocular vision; pose measurement
9. Monocular Visual-Inertial and Robotic-Arm Calibration in a Unifying Framework
Authors: Yinlong Zhang; Wei Liang; Mingze Yuan; Hongsheng He; Jindong Tan; Zhibo Pang. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2022, No. 1, pp. 146-159 (14 pages).
Reliable and accurate calibration of the camera, inertial measurement unit (IMU), and robot is a critical prerequisite for visual-inertial robot pose estimation and surrounding environment perception. However, traditional calibrations suffer from inaccuracy and inconsistency. To address these problems, this paper proposes a monocular visual-inertial and robotic-arm calibration in a unifying framework. In our method, the spatial relationship between the sensing units and the robotic arm is geometrically correlated. Decoupled estimation of rotation and translation reduces coupled errors during the optimization. Additionally, the robotic calibration moving trajectory is designed in a spiral pattern that enables full excitation of 6-DOF motions repeatably and consistently. The calibration has been evaluated on our developed platform. In the experiments, the calibration achieves rotation and translation RMSEs of less than 0.7° and 0.01 m, respectively. Comparisons with state-of-the-art results prove the calibration's consistency, accuracy, and effectiveness.
Keywords: calibration; inertial measurement unit (IMU); monocular camera; robotic arm; spiral moving trajectory
10. Robust and Accurate Monocular Visual Navigation Combining IMU for a Quadrotor (cited by 9)
Authors: Wei Zheng; Fan Zhou; Zengfu Wang. IEEE/CAA Journal of Automatica Sinica (SCIE, EI), 2015, No. 1, pp. 33-44 (12 pages).
In this paper, we present a multi-sensor-fusion-based monocular visual navigation system for a quadrotor with limited payload, power, and computational resources. Our system is equipped with an inertial measurement unit (IMU), a sonar, and a monocular down-looking camera, and works well in GPS-denied and markerless environments. Different from most keyframe-based visual navigation systems, our system uses information from both keyframes and keypoints in each frame. The GPU-based speeded-up robust feature (SURF) is employed for feature detection and matching. Based on the flight characteristics of the quadrotor, we propose a refined preliminary motion estimation algorithm combining IMU data. A multi-level judgment rule is then presented, which is beneficial in hovering conditions and reduces error accumulation effectively. By using the sonar sensor, the metric scale estimation problem is solved. We also present the novel IMU+3P (IMU with three point correspondences) algorithm for accurate pose estimation. This algorithm transforms the 6-DOF pose estimation problem into a 4-DOF problem and obtains more accurate results with less computation time. We performed experiments with the monocular visual navigation system in real indoor and outdoor environments. The results demonstrate that the system, running in real time, yields robust and accurate navigation of the quadrotor. © 2014 Chinese Association of Automation.
Keywords: aircraft; cameras; motion estimation; navigation systems; sonar; units of measurement
11. Velocity Calculation by Automatic Camera Calibration Based on Homogenous Fog Weather Condition (cited by 4)
Authors: Hong-Jun Song; Yang-Zhou Chen; Yuan-Yuan Gao. International Journal of Automation and Computing (EI, CSCD), 2013, No. 2, pp. 143-156 (14 pages).
A novel algorithm is presented for vehicle average velocity detection through automatic and dynamic camera calibration based on the dark channel in homogeneous fog weather conditions. The camera, fixed in the middle of the road, need only be calibrated in homogeneous fog and can then be used in any weather condition. Unlike other research on velocity calculation, our traffic model includes only the road plane and vehicles in motion; painted lines in the scene image are neglected because traffic lanes are sometimes absent, especially in unstructured traffic scenes. Once calibrated, scene distance is obtained and used to calculate average vehicle velocity. The algorithm has three major steps. First, the current video frame is analyzed to discriminate the current weather condition using an area search method (ASM); in homogeneous fog, the average pixel value from top to bottom in the selected area changes in the form of an edge spread function (ESF). Second, the traffic road surface plane is found from an activity map created by computing the expected absolute intensity difference between two adjacent frames. Finally, the scene transmission image is obtained by the dark channel prior, and the camera's intrinsic and extrinsic parameters are calculated from a calibration formula deduced from the monocular model and the transmission image; several key points with particular transmission values on the road surface are selected to generate the necessary calibration equations. Vehicle pixel coordinates are transformed to camera coordinates, distances between vehicles and the camera are computed, and the average velocity of each vehicle is obtained. Calibration results and velocity data for nine vehicles in different weather conditions are given; comparison with other algorithms verifies the effectiveness of the approach.
Keywords: vehicle velocity calculation; homogenous fog weather condition; dark channel prior; monocular camera calibration
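The transmission image in the abstract above comes from the dark channel prior. A minimal sketch of that computation (a scalar atmospheric light `A` and the naive patch-minimum loop are simplifying assumptions; the standard formulation normalizes per colour channel and uses a fast minimum filter):

```python
import numpy as np

def dark_channel(img, patch=15):
    # per-pixel min over the colour channels, then a patch-minimum filter
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    out = np.empty_like(mins)
    for i in range(mins.shape[0]):
        for j in range(mins.shape[1]):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def transmission(img, A, omega=0.95, patch=15):
    # dark channel prior transmission estimate: t = 1 - omega * dark(I / A)
    return 1.0 - omega * dark_channel(img / A, patch)

# sanity check on a uniform "fog" frame with airlight A = 1.0
fog = np.full((8, 8, 3), 0.5)
t = transmission(fog, 1.0)
```

On a uniform fog frame the estimate is constant, which matches the intuition that denser fog (higher dark-channel values) means lower transmission.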
12. Research on Visual Detection of Warehouse Pallets Based on a Monocular Camera
Authors: 张彦; 曹磊; 肖献强; 王家恩. 《机械设计与制造》 (Peking University Core), 2025, No. 12, pp. 352-356 (5 pages).
To address erroneous perception and excessive localization error of warehouse pallets by pallet-handling automated guided vehicles (AGVs) in complex environments, a monocular-camera-based visual detection method for warehouse pallets is proposed. A camera captures scene images in real time; a lightweight SSD neural network performs global image recognition and segments the region of interest containing the target pallet in the camera's field of view; within that region, image processing and line-segment fitting are carried out, on which basis a pallet feature-extraction algorithm is designed. Through intrinsic/extrinsic camera calibration and coordinate transformation, the pallet's 3D pose in the world frame is obtained. Test results show that the visual detection achieves a high recognition rate and accuracy, with an effective detection rate of 90% and localization accuracy of 9 mm laterally and 10 mm longitudinally, providing technical support for AGV pallet detection in intelligent warehouses and unmanned factories.
Keywords: monocular camera; neural network; visual detection; feature extraction
13. ELDE-Net: Efficient Light-Weight Depth Estimation Network for Deep Reinforcement Learning-Based Mobile Robot Path Planning
Authors: Thai-Viet Dang; Dinh-Manh-Cuong Tran; Nhu-Nghia Bui; Phan Xuan Tan. Computers, Materials & Continua, 2025, No. 11, pp. 2651-2680 (30 pages).
Precise and robust three-dimensional object detection (3DOD) presents a promising opportunity in mobile robot (MR) navigation. Monocular 3DOD techniques typically extend existing two-dimensional object detection (2DOD) frameworks to predict the three-dimensional bounding box (3DBB) of objects captured in 2D RGB images. However, these methods often require multiple images, making them less feasible for many real-time scenarios. To address these challenges, agile convolutional neural networks (CNNs) capable of inferring depth from a single image open a new avenue for investigation. The paper proposes a novel ELDE-Net designed to produce cost-effective 3D bounding box estimation (3D-BBE) from a single image. The framework comprises PP-LCNet as the encoder and a fast convolutional decoder, and integrates a Squeeze-Exploit (SE) module using the Math Kernel Library for Deep Neural Networks (MKLDNN) optimizer to enhance convolutional efficiency and streamline model size during training. Meanwhile, the proposed multi-scale sub-pixel decoder generates high-quality depth maps while maintaining a compact structure. The generated depth maps provide a clear perspective with distance details of objects in the environment; these depth insights are combined with 2DOD for precise evaluation of 3D bounding boxes, facilitating scene understanding and optimal route planning for mobile robots. Based on the estimated object center of the 3DBB, a deep reinforcement learning (DRL)-based obstacle avoidance strategy for MRs is developed. Experimental results demonstrate state-of-the-art performance across three datasets: NYU-V2, KITTI, and Cityscapes. Overall, the framework shows significant potential for adaptation in intelligent mechatronic systems, particularly in knowledge-driven systems for mobile robot navigation.
Keywords: 3D bounding box estimation; depth estimation; mobile robot navigation; monocular camera; object detection
14. Research on an Object Detection System for Unmanned Vehicles Based on Multi-sensor Fusion (cited by 1)
Authors: 陈晓锋; 李郁峰; 王传松; 郭荣; 樊宏丽; 朱堉伦. 《激光杂志》 (Peking University Core), 2025, No. 5, pp. 94-100 (7 pages).
Single sensors are strongly affected by environmental factors and prone to missed and false detections, and differing data formats across sensors make fusion complex. A decision-level fusion method based on lidar and camera is therefore proposed. The lidar and camera are first aligned in time and space; then PointPillars and YOLOv5 are used for transfer training and object detection on the preprocessed point-cloud and image data, respectively, yielding detection boxes; finally, intersection-over-union matching, D-S evidence theory, and weighted box fusion combine the detection results. Real-vehicle tests show that the proposed method effectively combines the strengths of lidar and camera in decision-level fusion scenarios, achieving more complete environment perception, higher detection accuracy, and lower probabilities of false and missed detections.
Keywords: multi-sensor fusion; lidar; monocular camera; D-S evidence theory; weighted box fusion
15. Research on a Fast Monocular Depth Estimation Algorithm for Edge Devices (cited by 1)
Authors: 王文帅; 韩军; 邹小燕; 倪源松; 胡广怡. 《计算机测量与控制》, 2025, No. 4, pp. 262-269 (8 pages).
Monocular depth estimation uses a single camera, is easy to install, and is widely applied in robotics and UAVs. Because monocular depth estimation algorithms built on complex encoder-decoder deep networks infer inefficiently in real time on edge devices, a network architecture for real-time depth estimation on edge devices is proposed. The encoder is built from inverted residual blocks; the decoder is redesigned with residual depthwise-separable convolutions and nearest-neighbor interpolation, greatly reducing parameters and computation, while skip connections fuse encoder and decoder features to enhance object edge detail in the depth maps. Experimental results show an 82% reduction in parameters and a 92% reduction in computation, state-of-the-art performance on the KITTI dataset, and an inference speed of 50 FPS on a Jetson TX2.
Keywords: depth perception; monocular camera; edge devices; inverted residual; neural network
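The large parameter reduction reported above comes in part from depthwise-separable convolutions. The arithmetic behind that kind of saving can be checked directly (a generic count for one 3×3 layer with 128 channels, chosen for illustration, not the paper's exact architecture):

```python
# weight counts for one convolution layer (bias terms ignored)
def conv_params(c_in, c_out, k):
    # standard k x k convolution
    return c_in * c_out * k * k

def dw_separable_params(c_in, c_out, k):
    # depthwise k x k (one filter per channel) + 1x1 pointwise projection
    return c_in * k * k + c_in * c_out

std = conv_params(128, 128, 3)           # 147,456 weights
sep = dw_separable_params(128, 128, 3)   # 17,536 weights
saving = 1.0 - sep / std                 # ~88% fewer parameters
```

A single layer saves roughly the same fraction the paper reports for the whole model, which is why stacking such blocks shrinks the network so sharply.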
16. A Visual-Inertial SLAM Algorithm for Field Environments
Authors: 李静博; 伊克萨尼·普尔凯提; 朱斌; 朱纪洪; 艾斯卡尔·艾木都拉. 《计算机工程与设计》 (Peking University Core), 2025, No. 4, pp. 1005-1012 (8 pages).
Most past localization research has targeted structured environments; in field environments, large illumination changes, difficult feature extraction, and bumpy roads make localization hard. A visual-inertial odometry system composed of a monocular camera and an inertial measurement unit is proposed. The front end fuses Poisson-equation-preprocessed visual information with inertial measurement unit data; the back end uses nonlinear optimization combined with robust kernel functions, allowing the system to run robustly in field environments with large illumination changes and bumpy roads. Experimental results show higher accuracy and stronger robustness than state-of-the-art visual-inertial methods on the ROOAD field-environment dataset, and the method also surpasses current state-of-the-art algorithms on the structured EuRoC dataset.
Keywords: simultaneous localization and mapping (SLAM); localization; monocular camera; inertial measurement unit; field environments; autonomous driving; environment perception
17. A Single-camera Relative Pose Measurement System for Space Targets
Authors: 支帅; 丁国鹏; 韩世豪; 张永合; 朱振才. 《中国光学(中英文)》 (Peking University Core), 2025, No. 5, pp. 1111-1123 (13 pages).
To improve the stability and accuracy of the measurement system and achieve high-precision ultra-close-range spacecraft docking, a relative pose measurement system based on a single camera and cooperative targets is proposed for high-precision measurement of the relative position and attitude between two satellites. With a vision camera on the chaser satellite and LED cooperative targets on the target satellite, high-precision relative pose measurement is achieved over ranges from 50 m down to 0.4 m. Far- and near-field LED targets let the camera and targets work together so that imaging remains clear across the whole 50 m to 0.4 m range. Based on the designed target characteristics, a multi-scale centroid extraction algorithm is proposed that uses slope-consistency constraints and spacing-ratio screening to stably extract the feature targets under complex illumination. Finally, combining an initial estimate from the targets' geometric constraints, the pose of the target satellite relative to the chaser is solved; to further improve accuracy, a nonlinear optimization iteratively refines the pose, effectively reducing measurement error. Test results show accuracy improving as range decreases: at 0.4 m, position accuracy is better than 1 mm and attitude accuracy better than 0.2°, meeting the requirements of ultra-close-range docking. The scheme provides high-precision, high-stability technical support for on-orbit relative pose measurement of space targets and has significant engineering value.
Keywords: single camera; LED cooperative target; multi-scale centroid extraction; nonlinear optimization; relative pose measurement
18. Research on a Localization Algorithm Fusing Depth-estimation SVO with an IMU
Authors: 李德航; 袁宇鹏; 张楠; 廖崧琳; 王露; 向路; 陈凤; 喻芳菲. 《压电与声光》 (Peking University Core), 2025, No. 4, pp. 776-782 (7 pages).
To address the limitations of single sensors and the computational complexity of visual-inertial fusion algorithms, a localization algorithm fusing depth-estimation-augmented semi-direct visual odometry (SVO) with an inertial measurement unit (IMU) is proposed. A depth estimation module is integrated into SVO, and an extended Kalman filter builds a loosely coupled fusion framework that combines visual pose information with IMU acceleration and angular-rate data for state estimation and pose correction. Validation on the KITTI dataset and in real environments shows that the algorithm's absolute trajectory error is lower than that of the vision-only algorithm, with predicted trajectories closely matching ground truth in complex scenes.
Keywords: monocular camera; inertial navigation; multi-sensor fusion localization; semi-direct visual odometry (SVO); extended Kalman filter (EKF); visual-inertial fusion
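The abstract above describes a loosely coupled EKF fusing SVO poses with IMU data. As a toy 1-D illustration of loose coupling (a linear Kalman filter, not the paper's implementation; all noise values and rates are invented), IMU acceleration drives the prediction while a lower-rate "visual" position measurement corrects the drift:

```python
import numpy as np

dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])    # state [position, velocity]
B = np.array([0.5 * dt**2, dt])          # IMU acceleration input model
H = np.array([[1.0, 0.0]])               # vision observes position only
Q = np.eye(2) * 1e-6                     # process noise (invented)
R = np.array([[1e-4]])                   # visual position variance (invented)

rng = np.random.default_rng(0)
x, P = np.zeros(2), np.eye(2)
true_p, true_v, a = 0.0, 0.0, 0.2        # constant true acceleration

for k in range(1000):
    # simulate ground truth and a noisy IMU sample
    true_p += true_v * dt + 0.5 * a * dt**2
    true_v += a * dt
    a_meas = a + rng.normal(0.0, 0.05)
    # predict at the IMU rate (100 Hz)
    x = F @ x + B * a_meas
    P = F @ P @ F.T + Q
    # correct with a 'visual' position fix at 20 Hz
    if k % 5 == 0:
        z = true_p + rng.normal(0.0, 0.01)
        S = H @ P @ H.T + R
        Kg = P @ H.T @ np.linalg.inv(S)  # Kalman gain
        x = x + Kg @ (np.array([z]) - H @ x)
        P = (np.eye(2) - Kg @ H) @ P

pos_err = abs(x[0] - true_p)
```

The loose coupling shows up in the structure: the filter consumes the odometry's pose output as a plain measurement rather than jointly optimizing raw image features with the inertial data, which keeps the computational cost low.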
19. Recognition and Ranging Applications of Monocular Vision on a Bionic Spider (cited by 1)
Authors: 朱旭; 黄明. 《机械研究与应用》, 2025, No. 5, pp. 85-88, 93 (5 pages).
With the rapid development and application of bionic robots, measurement methods for target information feedback have drawn wide attention. Most current target recognition and ranging methods rely on auxiliary devices such as radar or stereo cameras mounted on the bionic robot; given such robots' limited resources, lighter, lower-cost, and easier-to-deploy solutions are needed. A target detection and ranging scheme is therefore proposed that combines the pose-adjustment capability of a bionic spider with a monocular camera. The YOLOv5 model first recognizes target objects in the image; the camera is then calibrated in MATLAB to obtain its intrinsic and extrinsic parameters, and the images are preprocessed, stereo-matched, and used to compute target depth, finally yielding the distance of the target relative to the camera in the bionic spider's left pose. Experimental results verify the effectiveness of the scheme for target detection and ranging.
Keywords: bionic robot; information feedback; YOLO; monocular camera
20. A Survey of 3D Vehicle Object Detection Methods in Monocular Scenes
Authors: 唐心瑶; 王伟. 《现代信息科技》, 2025, No. 9, pp. 16-24, 31 (10 pages).
In recent years, 3D vehicle object detection has attracted much attention in autonomous driving and other intelligent transportation fields. Compared with 2D detection, 3D detection can precisely estimate a target's position, size, and attitude in 3D space, and monocular cameras dominate practical applications thanks to their low cost and efficient data processing. This article focuses on 3D object detection methods in monocular scenes and systematically traces their development. First, by source of prior information, detection methods are grouped into three classes: geometric information, 2D object detection with geometric constraints, and 3D feature estimation; the core ideas, strengths, and weaknesses of representative algorithms in each class are analyzed. Next, common public datasets and evaluation metrics are introduced, and experimental results of typical algorithms on the KITTI dataset are quantitatively compared. Finally, in light of the current state of research, the main open problems in the field are analyzed and future development trends discussed.
Keywords: intelligent transportation; monocular camera; 3D object detection; deep learning