The autonomous exploration and mapping of an unknown environment is useful in a wide range of applications and thus holds great significance. Existing methods mostly use range sensors to generate two-dimensional (2D) grid maps. Red/green/blue-depth (RGB-D) sensors provide both color and depth information about the environment, thereby enabling the generation of a three-dimensional (3D) point cloud map that is intuitive for human perception. In this paper, we present a systematic approach with dual RGB-D sensors to achieve the autonomous exploration and mapping of an unknown indoor environment. With the synchronized and processed RGB-D data, location points were generated, and a 3D point cloud map and a 2D grid map were incrementally built. Next, the exploration was modeled as a partially observable Markov decision process. Partial map simulation and global frontier search methods were combined for autonomous exploration, and dynamic action constraints were utilized in motion control. In this way, the local optimum can be avoided and the exploration efficacy can be ensured. Experiments with single-connected and multi-branched regions demonstrated the high robustness, efficiency, and superiority of the developed system and methods.
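The abstract above mentions a global frontier search over the incrementally built 2D grid map. The paper's code is not given here; the following is a minimal sketch of classic frontier detection, where the cell-value convention (0 = free, 1 = occupied, -1 = unknown) is our assumption, not the paper's:

```python
import numpy as np

FREE, OCCUPIED, UNKNOWN = 0, 1, -1  # assumed occupancy-grid convention

def find_frontiers(grid: np.ndarray) -> list:
    """Return free cells that border at least one unknown cell (frontier cells)."""
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FREE:
                continue
            # 4-connected neighbourhood check for unknown space
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == UNKNOWN:
                    frontiers.append((r, c))
                    break
    return frontiers

# Toy map: the right column is still unexplored
grid = np.array([
    [0, 0, -1],
    [0, 1, -1],
    [0, 0,  0],
])
print(find_frontiers(grid))  # → [(0, 1), (2, 2)]
```

An exploration planner would then pick one of these frontier cells (e.g. the nearest reachable one) as the next navigation goal.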
Light detection and ranging (LiDAR) has contributed immensely to forest mapping and 3D tree modelling. From the perspective of data acquisition, the integration of LiDAR data from different platforms would enrich forest information at the tree and plot levels. This research develops a general framework to integrate ground-based and UAV LiDAR (ULS) data to better estimate tree parameters based on quantitative structure modelling (QSM). This is accomplished in three sequential steps. First, the ground-based/ULS LiDAR data were co-registered based on the local density peaks of the clustered canopy. Next, redundancy and noise were removed for the ground-based/ULS LiDAR data fusion. Finally, tree modelling and biophysical parameter retrieval were based on QSM. Experiments were performed on backpack-, handheld-, and UAV-based multi-platform mobile LiDAR data of a subtropical forest, including poplar and dawn redwood species. Generally, ground-based/ULS LiDAR data fusion outperforms ground-based LiDAR alone with respect to tree parameter estimation when compared against field data. The fusion-derived tree height, tree volume, and crown volume improved significantly, by up to 9.01%, 5.28%, and 18.61%, respectively, in terms of relative root-mean-square error (rRMSE). By contrast, the diameter at breast height (DBH) benefits the least from fusion, and its rRMSE remains approximately the same, because stems are already well sampled from ground data. Additionally, particularly for dense forests, the fusion-derived tree parameters improved compared with those derived from ground-based LiDAR alone. Ground-based LiDAR can potentially be used on its own to estimate tree parameters in low-stand-density forests, where the improvement owing to fusion is not significant.
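The improvements above are reported in terms of rRMSE. As a reference for the metric, here is the common definition (RMSE normalized by the mean of the field-measured reference values, in percent); the paper's exact normalization choice is assumed, and the sample heights are hypothetical:

```python
import numpy as np

def rrmse(estimated, reference) -> float:
    """Relative RMSE in percent: RMSE divided by the mean reference value."""
    estimated = np.asarray(estimated, dtype=float)
    reference = np.asarray(reference, dtype=float)
    rmse = np.sqrt(np.mean((estimated - reference) ** 2))
    return 100.0 * rmse / reference.mean()

# Hypothetical fusion-derived vs. field-measured tree heights (m)
print(rrmse([10.0, 12.0], [11.0, 11.0]))  # ≈ 9.09 (%)
```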
In this paper, we propose a Structure-Aware Fusion Network (SAFNet) for 3D scene understanding. As 2D images present more detailed information while 3D point clouds convey more geometric information, fusing these two complementary kinds of data can improve the discriminative ability of the model. Fusion is a very challenging task, since 2D and 3D data are essentially different and come in different formats. Existing methods first extract 2D multi-view image features and then aggregate them into sparse 3D point clouds, achieving superior performance. However, they ignore the structural relations between pixels and points and directly fuse the two modalities of data without adaptation. To address this, we propose a structural deep metric learning method on pixels and points to explore these relations, and further utilize them to adaptively map the images and point clouds into a common canonical space for prediction. Extensive experiments on the widely used ScanNetV2 and S3DIS datasets verify the performance of the proposed SAFNet.
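Deep metric learning on pixel and point features, as described above, is typically driven by a margin-based loss in the shared embedding space. The sketch below is a generic cross-modal triplet hinge loss for illustration only; it is not SAFNet's actual loss, and the function name and margin value are our assumptions:

```python
import numpy as np

def cross_modal_triplet_loss(pixel_f, point_pos, point_neg, margin=0.2):
    """Hinge loss that pulls a pixel embedding toward its matching point
    embedding and pushes it away from a non-matching one."""
    d_pos = np.linalg.norm(pixel_f - point_pos)  # distance to the true match
    d_neg = np.linalg.norm(pixel_f - point_neg)  # distance to a mismatch
    return max(0.0, d_pos - d_neg + margin)

# Already well separated: zero loss
print(cross_modal_triplet_loss(np.zeros(4), np.zeros(4), np.ones(4)))   # → 0.0
# Mismatch closer than match: positive loss drives the embedding apart
print(cross_modal_triplet_loss(np.zeros(4), np.ones(4), np.zeros(4)))   # → 2.2
```

During training, such a loss would be summed over sampled pixel–point pairs while the projection networks for both modalities are optimized jointly.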
To address the problem in point cloud registration that key points are easily lost during downsampling, degrading registration accuracy, this paper proposes a registration method based on feature fusion and network-based sampling that improves both the accuracy and speed of registration. Building on the PointNet classification network, a small attention mechanism is introduced to design a deep-learning-based key point extraction method, in which local and global features are fused to obtain a hybrid feature matrix. The relevant parameters in solving the correspondence matrix are optimized automatically through deep learning, and finally a weighted singular value decomposition (SVD) is used to obtain the transformation matrix and complete the registration. Experiments on the ModelNet40 dataset show that, compared with farthest point sampling, the proposed algorithm reduces runtime by 45.36%; compared with robust point matching using learned features (RPM-Net), the mean squared error of the translation matrix is reduced by 5.67% and that of the rotation matrix by 13.1%. Experiments on self-collected point cloud data confirm the effectiveness of the algorithm for registering real objects.
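The weighted SVD step mentioned above corresponds to the classic weighted Kabsch/Procrustes solution for a rigid transform from weighted correspondences. The following is a generic sketch of that standard solution (function name and weighting convention are ours), not the paper's implementation:

```python
import numpy as np

def weighted_svd_transform(src, dst, w):
    """Rotation R and translation t minimizing sum_i w_i * ||R @ src_i + t - dst_i||^2."""
    w = w / w.sum()
    src_c = (w[:, None] * src).sum(axis=0)      # weighted centroids
    dst_c = (w[:, None] * dst).sum(axis=0)
    src0, dst0 = src - src_c, dst - dst_c
    H = (w[:, None] * src0).T @ dst0            # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Recover a known 90-degree rotation about z plus a translation
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
src = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0], [1.0, 1.0, 1.0]])
dst = src @ R_true.T + t_true
R, t = weighted_svd_transform(src, dst, np.ones(4))
```

In the paper's pipeline, the weights would come from the learned correspondence matrix rather than being uniform as in this toy example.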
Funding: the National Natural Science Foundation of China (61720106012 and 61403215), the Foundation of State Key Laboratory of Robotics (2006-003), and the Fundamental Research Funds for the Central Universities for the financial support of this work.
Funding: supported by the National Natural Science Foundation of China (Project No. 42171361), the Research Grants Council of the Hong Kong Special Administrative Region, China, under Project PolyU 25211819, and the Hong Kong Polytechnic University under Projects 1-ZE8E and 1-ZVN6.
Funding: supported by the National Natural Science Foundation of China (No. 61976023).