Funding: supported by the National Natural Science Foundation of China (Nos. 60874010 and 61070048), the Innovation Program of Shanghai Municipal Education Commission (No. 11ZZ37), the Fundamental Research Funds for the Central Universities (No. 009QJ12), and the Collaborative Construction Project of Beijing Municipal Commission of Education.
Abstract: The rotation matrix estimation problem is a key problem for mobile robot localization, navigation, and control. Based on quaternion theory and epipolar geometry, an extended Kalman filter (EKF) algorithm is proposed to estimate the rotation matrix using a single-axis gyroscope and image point correspondences from a monocular camera. The experimental results show that the precision of the mobile robot's yaw angle estimated by the proposed EKF algorithm is much better than the results given by the image-only and gyroscope-only methods, which demonstrates that our method is a preferable way to estimate rotation in autonomous mobile robot applications.
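The gyroscope-plus-vision fusion described above can be pictured with a minimal one-state Kalman filter on the yaw angle: the gyro rate drives the prediction and a vision-derived yaw measurement corrects the drift. This is a hypothetical sketch of the fusion idea, not the paper's full quaternion EKF; the noise variances and function name are assumptions.

```python
import numpy as np

def fuse_yaw(gyro_rates, vision_yaws, dt=0.01, q=1e-4, r=1e-2):
    """1-state Kalman filter: the gyro integrates yaw (prediction step),
    a vision-derived yaw corrects drift (update step).
    q and r are assumed process and measurement noise variances."""
    yaw, p = 0.0, 1.0
    out = []
    for w, z in zip(gyro_rates, vision_yaws):
        # predict: integrate the gyro rate; process noise grows the covariance
        yaw += w * dt
        p += q
        if z is not None:                 # vision measurement available
            k = p / (p + r)               # Kalman gain
            yaw += k * (z - yaw)          # correct with vision yaw
            p *= (1.0 - k)
        out.append(yaw)
    return out
```

Even with a biased gyro the vision updates keep the estimate bounded, which matches the abstract's claim that fusion beats either sensor alone.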
Funding: supported by the Hunan Provincial Natural Science Foundation for Excellent Young Scholars (Grant No. 2023JJ20045), the Science Foundation (Grant No. KY0505072204), the Foundation of the National Key Laboratory of Human Factors Engineering (Grant Nos. GJSD22006 and 6142222210401), and the Foundation of the China Astronaut Research and Training Center (Grant No. 2022SY54B0605).
Abstract: With the completion of the Chinese space station, an increasing number of extravehicular activities will be executed by astronauts; extravehicular activity is regarded as one of the most dangerous activities in human space exploration. To guarantee the safety of astronauts and the successful accomplishment of missions, it is vital to determine the pose of astronauts during extravehicular activities. This article presents a monocular vision-based pose estimation method for astronauts during extravehicular activities, making full use of the available observation resources. First, the camera is calibrated using objects of known structure, such as the spacesuit backpack or the circular handrail outside the space station. Subsequently, pose estimation is performed utilizing the feature points on the spacesuit. The proposed methods are validated in both synthetic and semi-physical simulation experiments, demonstrating the high precision of the camera calibration and pose estimation. To further evaluate the performance of the methods in real-world scenarios, we utilize image sequences of Shenzhou-13 astronauts during extravehicular activities. The experiments validate that camera calibration and pose estimation can be accomplished solely with the existing observation resources, without requiring additional complicated equipment. The estimated motion parameters of astronauts lay the technological foundation for subsequent applications such as mechanical analysis, task planning, and ground training of astronauts.
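The core of calibrating from an object of known structure is the pinhole relation between real size, distance, and image size. The sketch below is only an illustration of that relation with made-up numbers and function names; the article's actual procedure uses the backpack and handrail geometry.

```python
import numpy as np

def focal_from_known_object(real_width_m, image_width_px, distance_m):
    """Pinhole model: image_width_px = f * real_width_m / distance_m,
    so the focal length in pixels is recovered from a known-size object
    observed at a known distance (all inputs illustrative)."""
    return image_width_px * distance_m / real_width_m

def project(K, X):
    """Project 3D camera-frame points (N, 3) through intrinsics K (3, 3)."""
    x = (K @ X.T).T
    return x[:, :2] / x[:, 2:3]
```

With the focal length recovered this way, feature points of known spacesuit geometry can be related to their image projections for pose estimation.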
Funding: co-supported by the Science and Technology Innovation Program of Hunan Province, China (No. 2023RC3023) and the National Natural Science Foundation of China (No. 12272404).
Abstract: The autonomous landing guidance of fixed-wing aircraft in unknown structured scenes presents a substantial technological challenge, particularly regarding the effectiveness of solutions for monocular visual relative pose estimation. This study proposes a novel airborne monocular visual estimation method based on structured scene features to address this challenge. First, a multitask neural network model is established for segmentation, depth estimation, and slope estimation on monocular images, and a comprehensive three-dimensional information metric for monocular images is designed, encompassing length, span, flatness, and slope information. Subsequently, structured edge features are leveraged to filter candidate landing regions adaptively. By leveraging the three-dimensional information metric, the optimal landing region is accurately and efficiently identified. Finally, sparse two-dimensional key points are used to parameterize the optimal landing region for the first time, and a high-precision relative pose estimation is achieved. Additional measurement information is introduced to provide autonomous landing guidance information between the aircraft and the optimal landing region. Experimental results obtained from both synthetic and real data demonstrate the effectiveness of the proposed method in monocular pose estimation for autonomous aircraft landing guidance in unknown structured scenes.
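The flatness and slope components of such a landing-region metric can be sketched with a simple least-squares plane fit over 3D points. This is an assumed illustration of the metric idea only; the paper obtains its estimates from a learned multitask network, not from this fit.

```python
import numpy as np

def plane_slope_flatness(points):
    """Fit the plane z = a*x + b*y + c to (N, 3) points by least squares.
    Slope: angle between the plane normal and vertical, in degrees.
    Flatness: RMS residual of the points about the fitted plane."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    a, b, c = coeffs
    normal = np.array([-a, -b, 1.0])
    normal /= np.linalg.norm(normal)
    slope_deg = np.degrees(np.arccos(abs(normal[2])))
    flatness = np.sqrt(np.mean((A @ coeffs - points[:, 2]) ** 2))
    return slope_deg, flatness
```

A candidate region with low slope and low flatness residual would score well under a metric of this kind.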
Abstract: Joint calibration of sensors is an important prerequisite in intelligent driving scene retrieval and recognition. A simple and efficient solution is proposed for the problem of automatic joint calibration and registration between a monocular camera and a 16-line lidar. The study is divided into two parts, single-sensor independent calibration and multi-sensor joint registration, in which a selected real-world object is used as the calibration target. The system associates the lidar coordinates with the camera coordinates. The lidar and the camera are used to obtain, by appropriate algorithms, the normal vectors of the calibration plate and the point cloud data representing the calibration plate. The iterative closest point (ICP) algorithm is used for iterative refinement of the registration.
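The core refinement step inside ICP, computing the rigid transform that best aligns matched point sets, has a closed-form SVD solution (the Kabsch method). The sketch below mirrors that textbook step under known correspondences, not the authors' exact pipeline, which also handles correspondence search.

```python
import numpy as np

def best_fit_transform(P, Q):
    """One ICP refinement step with known correspondences: find R, t
    minimizing ||R @ P + t - Q|| via the SVD (Kabsch) method.
    P and Q are (N, 3) arrays of matched points."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)            # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cq - R @ cp
    return R, t
```

Full ICP alternates this solve with nearest-neighbor matching until the alignment error stops decreasing.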
Abstract: Precise and robust three-dimensional object detection (3DOD) presents a promising opportunity in the field of mobile robot (MR) navigation. Monocular 3DOD techniques typically involve extending existing two-dimensional object detection (2DOD) frameworks to predict the three-dimensional bounding box (3DBB) of objects captured in 2D RGB images. However, these methods often require multiple images, making them less feasible for various real-time scenarios. To address these challenges, the emergence of agile convolutional neural networks (CNNs) capable of inferring depth from a single image opens a new avenue for investigation. This paper proposes a novel network, ELDENet, designed to produce cost-effective 3D bounding box estimation (3D-BBE) from a single image. This framework comprises PP-LCNet as the encoder and a fast convolutional decoder. Additionally, the integration includes a Squeeze-and-Excitation (SE) module and uses the Math Kernel Library for Deep Neural Networks (MKL-DNN) optimizer to enhance convolutional efficiency and streamline model size during training. Meanwhile, the proposed multi-scale sub-pixel decoder generates high-quality depth maps while maintaining a compact structure. Furthermore, the generated depth maps provide a clear perspective with distance details of objects in the environment. These depth insights are combined with 2DOD for precise estimation of 3D bounding boxes, facilitating scene understanding and optimal route planning for mobile robots. Based on the estimated object center of the 3DBB, a deep reinforcement learning (DRL)-based obstacle avoidance strategy for MRs is developed. Experimental results demonstrate that our model achieves state-of-the-art performance across three datasets: NYU-V2, KITTI, and Cityscapes. Overall, this framework shows significant potential for adaptation in intelligent mechatronic systems, particularly in developing knowledge-driven systems for mobile robot navigation.
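Combining a 2D detection with a predicted depth map to get a 3D object center reduces to back-projection through the camera intrinsics. The sketch below uses a common median-depth heuristic over the box; it is an assumed illustration of the depth-plus-2DOD combination, not ELDENet's actual 3D-BBE head.

```python
import numpy as np

def box_center_3d(box2d, depth, K):
    """Back-project the 2D box center using the median depth inside the box
    (a common heuristic) to get an approximate 3D object center in the
    camera frame. box2d = (u1, v1, u2, v2); depth is an HxW map; K is 3x3."""
    u1, v1, u2, v2 = box2d
    z = float(np.median(depth[v1:v2, u1:u2]))   # robust to depth outliers
    u, v = (u1 + u2) / 2.0, (v1 + v2) / 2.0
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    return np.array([x, y, z])
```

A center estimated this way is exactly the kind of quantity the DRL obstacle-avoidance policy in the abstract can consume.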
Funding: supported by the National Natural Science Foundation of China (61903357, 61902299, 62022088), the International Partnership Program of the Chinese Academy of Sciences (173321KYSB20200002), the Liaoning Provincial Natural Science Foundation of China (2020-MS-032, 2021JH6/10500114, 2020JH2/10500002), the Guangzhou Science and Technology Planning Project (202102021300), and the China Postdoctoral Science Foundation (2019TQ0239, 2019M663636).
Abstract: Inspired by the box jellyfish, which has a distributed and complementary perceptive system, we seek to equip a manipulator with a camera and an inertial measurement unit (IMU) to perceive ego motion and the surrounding unstructured environment. Before robot perception, a reliable and high-precision calibration between the camera, IMU, and manipulator is a critical prerequisite. This paper introduces a novel calibration system. First, we correlate the spatial relationship between the sensing units and the manipulator in a joint framework. Second, the manipulator's moving trajectory is elaborately designed in a spiral pattern that enables full excitation of yaw-pitch-roll rotations and x-y-z translations in a repeatable and consistent manner. The calibration has been evaluated on our collected visual-inertial-manipulator dataset. The systematic comparisons and analysis indicate the consistency, precision, and effectiveness of our proposed calibration method.
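A spiral excitation trajectory of the kind described can be pictured with a simple parametric sketch: the x-y position traces a growing circle, z rises linearly, and the orientation angles oscillate along the path so every DOF varies. The amplitudes, turn count, and function name below are arbitrary placeholders, not the authors' actual design.

```python
import numpy as np

def spiral_trajectory(n=200, turns=3, radius=0.2, height=0.3):
    """Parametric spiral path: an x-y circle of growing radius with z rising
    linearly; roll, pitch, and yaw are varied along the path so that all six
    DOF receive excitation. Returns an (n, 6) array [x, y, z, roll, pitch, yaw]."""
    s = np.linspace(0.0, 1.0, n)
    theta = 2 * np.pi * turns * s
    x = radius * s * np.cos(theta)
    y = radius * s * np.sin(theta)
    z = height * s
    yaw = theta % (2 * np.pi)
    pitch = 0.3 * np.sin(theta)          # placeholder amplitude (rad)
    roll = 0.3 * np.cos(theta)
    return np.stack([x, y, z, roll, pitch, yaw], axis=1)
```

The point of such a path is repeatability: the same rich excitation can be replayed for every calibration run.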
Abstract: Replacing traditional CNC machine tools with robots, especially parallel robots, for machining is a current mainstream trend, which places higher demands on robot positioning accuracy; error compensation and kinematic calibration can effectively improve the end-effector positioning accuracy of parallel robots. Taking the novel 2-R(Ps)&P(Ps) three-translational-degree-of-freedom parallel robot as the research object, a monocular-vision-based kinematic calibration method is proposed to improve the end-effector positioning accuracy of this type of robot. A geometric error model of the mechanism is constructed based on the closed-loop error vector chain method, yielding 34 geometric error sources that affect the end pose of the moving platform, and the Sobol algorithm is used for error sensitivity analysis to identify the error sources with the greatest influence on the end-effector error. A monocular camera in an eye-in-hand configuration is used to obtain the end pose: vision algorithms extract the positions of target points on a calibration board for error measurement. An error identification equation is then constructed and solved by least squares, and error compensation is completed by correcting the control system input. In kinematic calibration experiments, the mean of the error value Δr′ decreased by 77.16% on average and its maximum decreased by 69.46% on average after calibration. The results demonstrate the effectiveness of the proposed kinematic calibration method, which is applicable to error calibration of similar parallel robots.
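The identification step described above, stacking an error model over many measured poses and solving for the geometric error parameters by least squares, reduces to one linear solve. The sketch assumes the error Jacobian J is already available; the paper derives it from the closed-loop error vector chain model.

```python
import numpy as np

def identify_errors(J, dx):
    """Least-squares error identification: measured end-effector deviations
    dx are modeled as dx = J @ dp, where J stacks the error Jacobian rows
    over many measurement poses. Returns the geometric error parameters dp."""
    dp, *_ = np.linalg.lstsq(J, dx, rcond=None)
    return dp
```

Compensation then amounts to predicting the deviation from the identified parameters and correcting the commanded input accordingly.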
Funding: This work was supported by the International Partnership Program of the Chinese Academy of Sciences (173321KYSB20180020, 173321KYSB20200002), the National Natural Science Foundation of China (61903357, 62022088), the Liaoning Provincial Natural Science Foundation of China (2020-MS-032, 2019-YQ-09, 2020JH2/10500002, 2021JH6/10500114), the LiaoNing Revitalization Talents Program (XLYC1902110), the China Postdoctoral Science Foundation (2020M672600), and the Swedish Foundation for Strategic Research (APR20-0023).
Abstract: Reliable and accurate calibration of the camera, inertial measurement unit (IMU), and robot is a critical prerequisite for visual-inertial robot pose estimation and surrounding environment perception. However, traditional calibrations suffer from inaccuracy and inconsistency. To address these problems, this paper proposes a monocular visual-inertial and robotic-arm calibration in a unifying framework. In our method, the spatial relationship between the sensing units and the robotic arm is geometrically correlated. The decoupled estimation of rotation and translation reduces coupled errors during the optimization. Additionally, the robotic calibration trajectory has been designed in a spiral pattern that enables full excitation of 6-DOF motions repeatably and consistently. The calibration has been evaluated on our developed platform. In the experiments, the calibration achieves rotation and translation RMSEs of less than 0.7° and 0.01 m, respectively. The comparisons with state-of-the-art results prove our calibration's consistency, accuracy, and effectiveness.
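The decoupling of rotation and translation mentioned above can be illustrated on the classic AX = XB hand-eye formulation: once the rotation R_X has been estimated, each motion pair (A, B) yields the linear constraint (R_A - I) t_X = R_X t_B - t_A, and the translation follows by least squares over all pairs. This is a generic sketch of the decoupled idea under that standard formulation, not the authors' exact solver.

```python
import numpy as np

def handeye_translation(As, Bs, Rx):
    """Decoupled hand-eye translation (AX = XB): with the rotation Rx already
    estimated, each motion pair (Ra, ta) from the arm and (Rb, tb) from the
    camera gives (Ra - I) @ tx = Rx @ tb - ta. Stack all pairs and solve
    for tx by least squares."""
    M, v = [], []
    for (Ra, ta), (Rb, tb) in zip(As, Bs):
        M.append(Ra - np.eye(3))
        v.append(Rx @ tb - ta)
    tx, *_ = np.linalg.lstsq(np.vstack(M), np.hstack(v), rcond=None)
    return tx
```

Because rotation errors never enter the translation residual directly, the two estimates do not contaminate each other, which is the benefit the abstract claims for decoupling.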