Journal Articles
3 articles found
1. Exploring 2D projection and 3D spatial information for aircraft 6D pose (Cited: 1)
Authors: Daoyong FU, Songchen HAN, BinBin LIANG, Xinyang YUAN, Wei LI
Source: Chinese Journal of Aeronautics (SCIE, EI, CAS, CSCD), 2023, Issue 8, pp. 258-268 (11 pages)
Abstract: 6D pose estimation from a single RGB image is important for the safe take-off and landing of aircraft. Due to the large scene and large depth range, existing pose estimation methods show unsatisfactory accuracy. To achieve precise 6D pose estimation of the aircraft, an end-to-end method using a single RGB image is proposed. In this method, the 2D and 3D information of the aircraft's keypoints serves as intermediate supervision, and the 6D pose of the aircraft is recovered from this intermediate information. Specifically, an off-the-shelf object detector is used to detect the Region of Interest (RoI) of the aircraft and eliminate background distractions. The 2D projection and 3D spatial information of the pre-designed keypoints of the aircraft are predicted by the keypoint coordinate estimator (KpNet). The whole method is trained in an end-to-end fashion. In addition, to address the lack of related datasets, this paper builds the Aircraft 6D Pose dataset for training and testing; it captures the take-off and landing process of three types of aircraft from 11 views. Compared with the latest Wide-Depth-Range method on this dataset, the proposed method improves the average 3D distance of model points (ADD) metric and the 5° and 5 m metric by 86.8% and 30.1%, respectively. Furthermore, the proposed method runs in 9.30 ms, 61.0% faster than YOLO6D at 23.86 ms.
Keywords: 2D and 3D information; 6D pose regression; aircraft 6D pose estimation; end-to-end network; RGB image
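The ADD and 5°/5 m criteria cited in the abstract above are standard 6D-pose error measures. The sketch below is an illustrative implementation of both, not code from the paper; the function and variable names are assumptions.

```python
import numpy as np

def add_metric(R_gt, t_gt, R_pred, t_pred, model_points):
    """Mean 3D distance between model points under the ground-truth
    and the predicted pose (the ADD metric)."""
    pts_gt = model_points @ R_gt.T + t_gt        # (N, 3)
    pts_pred = model_points @ R_pred.T + t_pred  # (N, 3)
    return np.mean(np.linalg.norm(pts_gt - pts_pred, axis=1))

def within_5deg_5m(R_gt, t_gt, R_pred, t_pred):
    """True if rotation error < 5 degrees and translation error < 5 metres
    (the large translation threshold reflects the aircraft-scale scenes)."""
    cos_theta = (np.trace(R_pred @ R_gt.T) - 1.0) / 2.0
    rot_err_deg = np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
    trans_err = np.linalg.norm(t_pred - t_gt)
    return rot_err_deg < 5.0 and trans_err < 5.0
```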
2. 6D Object Pose Estimation in Cluttered Scenes from RGB Images (Cited: 1)
Authors: Xiao-Long Yang, Xiao-Hong Jia, Yuan Liang, Lu-Bin Fan
Source: Journal of Computer Science & Technology (SCIE, EI, CSCD), 2022, Issue 3, pp. 719-730 (12 pages)
Abstract: We propose a feature-fusion network for pose estimation directly from RGB images without any depth information. First, we introduce a two-stream architecture consisting of segmentation and regression streams. The segmentation stream processes the spatial embedding features and obtains the corresponding image crop; these features are further coupled with the image crop in the fusion network. Second, we use an efficient perspective-n-point (E-PnP) algorithm in the regression stream to extract robust spatial features between 3D and 2D keypoints. Finally, we perform iterative refinement with an end-to-end mechanism to improve the estimation performance. We conduct experiments on two public datasets, YCB-Video and the challenging Occluded-LineMOD. The results show that our method outperforms state-of-the-art approaches in both speed and accuracy.
Keywords: two-stream network; 6D pose estimation; fusion feature
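The PnP step mentioned in the abstract recovers a pose from matched 3D model keypoints and their 2D image projections. The sketch below illustrates that step with OpenCV's EPnP solver on synthetic data; the cube keypoints, intrinsics, and pose are placeholder assumptions, not values from the paper, and the fusion network itself is not reproduced here.

```python
import numpy as np
import cv2

# Eight 3D keypoints in the object frame (corners of a 0.2 m cube).
object_points = 0.1 * np.array(
    [[-1, -1, -1], [1, -1, -1], [1, 1, -1], [-1, 1, -1],
     [-1, -1, 1], [1, -1, 1], [1, 1, 1], [-1, 1, 1]], dtype=np.float64)

K = np.array([[600.0, 0.0, 320.0],   # assumed pinhole camera intrinsics
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])

# Simulate a ground-truth pose and project the keypoints into the image.
rvec_gt = np.array([[0.1], [0.4], [0.2]])
tvec_gt = np.array([[0.05], [-0.02], [1.5]])
image_points, _ = cv2.projectPoints(object_points, rvec_gt, tvec_gt, K, None)

# Recover the pose from the 2D-3D correspondences with the EPnP solver.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None,
                              flags=cv2.SOLVEPNP_EPNP)
R, _ = cv2.Rodrigues(rvec)            # 3x3 rotation matrix
print(ok, tvec.ravel())               # should be close to tvec_gt
```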
3. 6D pose annotation and pose estimation method for weak-corner objects under low-light conditions (Cited: 1)
Authors: JIANG ZhiHong, CHEN JinHong, JING YaMan, HUANG Xiao, LI Hui
Source: Science China (Technological Sciences) (SCIE, EI, CAS, CSCD), 2023, Issue 3, pp. 630-640 (11 pages)
Abstract: In unstructured environments such as disaster sites and mine tunnels, it is challenging for robots to estimate the poses of objects under complex lighting, which limits their operation. Owing to the shadows produced by a point light source, the brightness of the operation scene is severely unbalanced, and it is difficult to accurately extract object features. It is especially difficult to accurately label the poses of objects with weak corners and textures. This study proposes an automatic pose annotation method for such objects, which combines 3D-2D matching projection and rendering to improve the efficiency of dataset annotation. A 6D object pose estimation method for low-light conditions (LP_TGC) is then proposed, including (1) a light preprocessing neural network based on a low-light preprocessing module (LPM) to balance the brightness of an image and improve its quality, and (2) a 6D pose estimation model (TGC) based on keypoint matching. Four typical datasets are constructed to verify the method, and the experimental results demonstrate the effectiveness of LP_TGC. The estimation model operating on the preprocessed images can accurately estimate object poses in the aforementioned unstructured environments and improves accuracy by an average of ~3% on the ADD metric.
Keywords: 6D object pose estimation; 6D pose annotation; low-light conditions
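The LPM described in the abstract is a learned network; as a minimal classical stand-in for the same brightness-balancing idea (not the authors' module), the sketch below applies CLAHE on the luminance channel followed by gamma correction. The parameter values and file names are assumptions for illustration.

```python
import numpy as np
import cv2

def balance_brightness(bgr, gamma=0.6, clip_limit=2.0):
    """Equalise local contrast on the luminance channel, then apply
    gamma correction (< 1) to lift dark regions."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=(8, 8))
    lab = cv2.merge((clahe.apply(l), a, b))
    out = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
    lut = ((np.arange(256) / 255.0) ** gamma * 255.0).astype(np.uint8)
    return cv2.LUT(out, lut)

# Hypothetical usage on a low-light frame before running a pose estimator:
# frame = cv2.imread("low_light_frame.png")
# balanced = balance_brightness(frame)
```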