Airborne LiDAR (Light Detection and Ranging) is an evolving active remote sensing technology that can acquire large-area topographic data and quickly generate DEM (Digital Elevation Model) products. Combined with image data, this technology can further enrich and extract spatial geographic information. In practice, however, owing to the limited operating range of airborne LiDAR and the large area of a survey task, the point clouds of adjacent flight strips must be registered and stitched, and the systematic errors in the data need to be effectively reduced by eliminating gross errors. This paper therefore investigates point cloud registration methods in urban building areas, aiming to improve the accuracy and processing efficiency of airborne LiDAR data. An improved post-ICP (Iterative Closest Point) point cloud registration method is proposed to achieve accurate registration and efficient stitching of point clouds, providing potential technical support for applications in related fields.
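As a point of reference for the registration step, the following Python/NumPy sketch implements one plain point-to-point ICP loop (nearest-neighbour matching followed by an SVD-based rigid update). It is not the paper's improved post-ICP method; the iteration count and convergence tolerance are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, max_iters=30, tol=1e-6):
    """Align source (N,3) to target (M,3); returns cumulative rotation R and translation t."""
    src = source.copy()
    tree = cKDTree(target)
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(max_iters):
        dist, idx = tree.query(src)                     # closest-point correspondences
        matched = target[idx]
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)           # cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                              # optimal rotation (Kabsch/SVD)
        t = mu_t - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = dist.mean()
        if abs(prev_err - err) < tol:                   # stop when the mean residual stabilises
            break
        prev_err = err
    return R_total, t_total
```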
Large-scale point cloud datasets form the basis for training various deep learning networks and achieving high-quality network processing tasks. Because of the diversity and robustness constraints of the data, data augmentation (DA) methods are utilised to expand dataset diversity and scale. However, owing to the complex and distinct characteristics of LiDAR point cloud data from different platforms (such as missile-borne and vehicular LiDAR data), directly applying traditional 2D visual-domain DA methods to 3D data can lead to networks trained in this way failing to robustly achieve the corresponding tasks. To address this issue, the present study explores DA for missile-borne LiDAR point clouds using a Monte Carlo (MC) simulation method that closely resembles practical application. Firstly, a model of the multi-sensor imaging system is established, taking into account the joint errors arising from the platform itself and the relative motion during the imaging process. A distortion simulation method based on MC simulation for augmenting missile-borne LiDAR point cloud data is then proposed, underpinned by an analysis of the combined errors between different modal sensors, achieving high-quality augmentation of point cloud data. The effectiveness of the proposed method in addressing imaging system errors and distortion simulation is validated using the imaging scene dataset constructed in this paper. Comparative experiments between the proposed point cloud DA algorithm and current state-of-the-art algorithms on point cloud detection and single-object tracking tasks demonstrate that the proposed method improves the network performance obtained from unaugmented datasets by over 17.3% and 17.9% respectively, surpassing the SOTA performance of current point cloud DA algorithms.
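The sketch below illustrates the general idea of Monte Carlo distortion augmentation under a deliberately simplified error model: each draw samples a small attitude error and a translational offset and applies them to the cloud. The error structure and the magnitudes sigma_rot_deg and sigma_trans are assumptions, not the paper's joint multi-sensor error model.

```python
import numpy as np

def mc_augment(points, n_draws=8, sigma_rot_deg=0.3, sigma_trans=0.05, seed=None):
    """Return n_draws distorted copies of an (N,3) cloud from random pose-error samples."""
    rng = np.random.default_rng(seed)
    copies = []
    for _ in range(n_draws):
        ang = np.deg2rad(rng.normal(0.0, sigma_rot_deg, size=3))   # roll/pitch/yaw error draw
        cx, cy, cz = np.cos(ang)
        sx, sy, sz = np.sin(ang)
        Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
        Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
        t = rng.normal(0.0, sigma_trans, size=3)                   # lever-arm / offset error draw
        copies.append(points @ (Rz @ Ry @ Rx).T + t)
    return copies
```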
For the first time, this article introduces a LiDAR Point Clouds Dataset of Ships composed of both collected and simulated data to address the scarcity of LiDAR data in maritime applications. The collected data are acquired using specialized maritime LiDAR sensors in both inland waterways and wide-open ocean environments. The simulated data is generated by placing a ship in the LiDAR coordinate system and scanning it with a redeveloped Blensor that emulates the operation of a LiDAR sensor equipped with various laser beams. Furthermore, we also render point clouds for foggy and rainy weather conditions. To describe a realistic shipping environment, a dynamic tail wave is modeled by iterating the wave elevation of each point in a time series. Finally, networks serving small objects are migrated to ship applications by feeding our dataset. The positive effect of simulated data is described in object detection experiments, and the negative impact of tail waves as noise is verified in single-object tracking experiments. The dataset is available at https://github.com/zqy411470859/ship_dataset.
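A minimal sketch of a wave-elevation time series of the kind that could drive such a dynamic tail-wave model is shown below; the sinusoidal superposition and the component amplitudes, wavenumbers and directions are illustrative assumptions rather than the paper's formulation.

```python
import numpy as np

def wave_elevation(xy, t, components=((0.15, 0.8, 0.0), (0.05, 2.5, 0.7))):
    """xy: (N,2) horizontal coordinates [m]; t: time [s];
    components: (amplitude [m], wavenumber [rad/m], direction [rad]) triples."""
    z = np.zeros(len(xy))
    for a, k, theta in components:
        omega = np.sqrt(9.81 * k)                                  # deep-water dispersion relation
        phase = k * (xy[:, 0] * np.cos(theta) + xy[:, 1] * np.sin(theta)) - omega * t
        z += a * np.cos(phase)
    return z
```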
Building outline extraction from segmented point clouds is a critical step of building footprint generation. Existing methods for this task are often based on the convex hull and α-shape algorithms, and there are also methods using grids and Delaunay triangulation. The common challenge of these methods is the determination of proper parameters. While deep learning-based methods have shown promise in reducing the impact of and dependence on parameter selection, their reliance on datasets with ground truth information limits their generalization. In this study, a novel unsupervised approach, called PH-shape, is proposed to address this challenge. The methods of persistent homology (PH) and the Fourier descriptor are introduced into the task of building outline extraction. PH, from the theory of topological data analysis, supports the automatic and adaptive determination of a proper buffer radius, thus enabling the parameter-adaptive extraction of building outlines through buffering and "inverse" buffering. The quantitative and qualitative experimental results on two datasets with different point densities demonstrate the effectiveness of the proposed approach in the face of various building types, interior boundaries, and density variation in the point cloud data of a single building. The PH-supported parameter adaptivity helps the proposed approach overcome the challenges of parameter determination and data variation and achieve reliable extraction of building outlines.
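The buffering and "inverse" buffering step can be sketched with Shapely as below; here the buffer radius r is supplied manually, whereas PH-shape selects it automatically via persistent homology, which is not reproduced in this sketch.

```python
from shapely.geometry import MultiPoint, Polygon

def outline_by_buffering(points_2d, r):
    """points_2d: iterable of (x, y); r: buffer radius chosen by the user."""
    closed = MultiPoint([tuple(p) for p in points_2d]).buffer(r).buffer(-r)   # buffer, then "inverse" buffer
    polys = [closed] if isinstance(closed, Polygon) else list(closed.geoms)
    largest = max(polys, key=lambda p: p.area)          # keep the dominant footprint
    return list(largest.exterior.coords)                # ordered outline vertices
```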
Semantic segmentation of 3D point clouds in the railway environment holds significant economic value, but its development is severely hindered by the lack of suitable and specific datasets. Additionally, models trained on existing urban road point cloud datasets generalise poorly to railway data because of a large domain gap caused by non-overlapping special or rare categories, for example, rail track and track bed. To harness the potential of supervised learning methods in the domain of 3D railway semantic segmentation, we introduce RailPC, a new point cloud benchmark. RailPC provides a large-scale dataset with rich annotations for semantic segmentation in the railway environment. Notably, RailPC contains twice the number of annotated points of the largest available mobile laser scanning (MLS) point cloud dataset and is the first railway-specific 3D dataset for semantic segmentation. It covers nearly 25 km of railway in two different scenes (urban and mountain), with 3 billion points finely labelled into the 16 most typical railway-related classes, and the data were acquired in China by MLS systems. Through extensive experimentation, we evaluate the performance of advanced scene understanding methods on the annotated dataset and present a comprehensive analysis of the semantic segmentation results. Based on our findings, we identify some critical challenges towards railway-scale point cloud semantic segmentation. The dataset is available at https://github.com/NNU-GISA/GISA-RailPC, and we will continuously update it based on community feedback.
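For readers benchmarking on such a dataset, a generic per-class IoU / mIoU evaluation sketch is given below; the 16-class setting follows the abstract, while the label conventions are assumptions.

```python
import numpy as np

def miou(pred, gt, num_classes=16):
    """pred, gt: integer label arrays of equal length; returns per-class IoU and mIoU."""
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(conf, (gt, pred), 1)                     # accumulate the confusion matrix
    tp = np.diag(conf)
    union = conf.sum(axis=0) + conf.sum(axis=1) - tp
    iou = tp / np.maximum(union, 1)
    return iou, iou[union > 0].mean()
```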
The increasing development of accurate and efficient road three-dimensional (3D) modeling presents great opportunities to improve the data exchange and integration of building information modeling (BIM) models. 3D modeling of road scenes is crucial for reference in asset management, construction, and maintenance. Light detection and ranging (LiDAR) technology is increasingly employed to generate high-quality point clouds for road inventory. In this paper, we specifically investigate the use of LiDAR data for road 3D modeling. The purpose of this review is to provide references to the existing work on road 3D modeling based on LiDAR point clouds, critically discuss it, and identify challenges for further study. In addition, we introduce modeling standards for roads and discuss the components, types, and distinctions of various LiDAR measurement systems. We then review state-of-the-art methods and provide a detailed examination of road segmentation and feature extraction. Furthermore, we systematically introduce point cloud-based 3D modeling methods, namely parametric modeling and surface reconstruction. Parameters and rules are used to define model components based on geometric and non-geometric information, whereas surface modeling is conducted through the individual faces of the geometry. Finally, we discuss and summarize future research directions in this field. This review can assist researchers in enhancing existing approaches and developing new techniques for road modeling based on LiDAR point clouds.
A new object-oriented method has been developed for the extraction of Mars rocks from Mars rover data. It is based on a combination of Mars rover imagery and 3D point cloud data. First, Navcam or Pancam images taken by the Mars rovers are segmented into homogeneous objects with a mean-shift algorithm. Then, the objects in the segmented images are classified into small rock candidates, rock shadows, and large objects. Rock shadows and large objects are considered as the regions within which large rocks may exist. In these regions, large rock candidates are extracted through ground-plane fitting with the 3D point cloud data. Small and large rock candidates are combined and postprocessed to obtain the final rock extraction results. The shape properties of the rocks (angularity, circularity, width, height, and width-height ratio) have been calculated for subsequent geological studies.
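A simple sketch of how the listed shape properties might be computed from an extracted rock outline is given below; the circularity definition 4πA/P² and the width convention are assumptions about the paper's exact formulas.

```python
import numpy as np

def rock_shape_properties(boundary_xy, height):
    """boundary_xy: (K,2) ordered outline of a rock region; height taken from the 3D points."""
    closed = np.vstack([boundary_xy, boundary_xy[:1]])
    perimeter = np.linalg.norm(np.diff(closed, axis=0), axis=1).sum()
    x, y = boundary_xy[:, 0], boundary_xy[:, 1]
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))   # shoelace formula
    width = x.max() - x.min()
    return {
        "circularity": 4.0 * np.pi * area / perimeter ** 2,
        "width": width,
        "height": height,
        "width_height_ratio": width / height,
    }
```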
This paper focuses on the effective utilization of data augmentation techniques for 3D lidar point clouds to enhance the performance of neural network models. These point clouds, which represent spatial information through a collection of 3D coordinates, have found wide-ranging applications. Data augmentation has emerged as a potent solution to the challenges posed by limited labeled data and the need to enhance model generalization capabilities. Much of the existing research is devoted to crafting novel data augmentation methods specifically for 3D lidar point clouds, but there has been a lack of focus on making the most of the numerous existing augmentation techniques. Addressing this deficiency, this research investigates the possibility of combining two fundamental data augmentation strategies. The paper introduces PolarMix and Mix3D, two commonly employed augmentation techniques, and presents a new approach named RandomFusion. Instead of using a fixed or predetermined combination of augmentation methods, RandomFusion randomly chooses one method from a pool of options for each instance or sample, augmenting each point in the point cloud with either PolarMix or Mix3D. The results of the experiments conducted validate the efficacy of the RandomFusion strategy in enhancing the performance of neural network models for 3D lidar point cloud semantic segmentation tasks, without compromising computational efficiency. By examining the potential of merging different augmentation techniques, the research contributes to a more comprehensive understanding of how to utilize existing augmentation methods for 3D lidar point clouds. The RandomFusion data augmentation technique offers a simple yet effective way to leverage the diversity of augmentation techniques and boost the robustness of models. The insights gained from this research can pave the way for future work aimed at developing more advanced and efficient data augmentation strategies for 3D lidar point cloud analysis.
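A minimal sketch of the per-sample reading of RandomFusion follows (the abstract describes the choice both per sample and per point); polarmix_fn and mix3d_fn are placeholder callables standing in for the actual PolarMix and Mix3D implementations.

```python
import random

def random_fusion(sample, polarmix_fn, mix3d_fn, p=0.5):
    """Apply exactly one of the two augmentations to the sample, chosen at random."""
    return polarmix_fn(sample) if random.random() < p else mix3d_fn(sample)
```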
Swarm robot systems are an important application of autonomous unmanned surface vehicles on water surfaces. For monitoring natural environments and conducting security activities within a certain range using a surface vehicle, a swarm robot system is more efficient than the operation of a single vehicle, as the former can reduce cost and save time. It is necessary to detect adjacent surface obstacles robustly to operate a cluster of unmanned surface vehicles. For this purpose, a LiDAR (light detection and ranging) sensor is used, as it can simultaneously obtain 3D information in all directions, relatively robustly and accurately, irrespective of the surrounding environmental conditions. Although GPS (global positioning system) measurements have an error range, the measured surface-vessel positions can still ensure stability during platoon maneuvering. In this study, a three-layer convolutional neural network is applied to classify types of surface vehicles. The aim of this approach is to redefine the sparse 3D point cloud data as 2D image data with a connotative meaning and subsequently utilize the transformed data for object classification. Hence, we have proposed a descriptor that converts 3D point cloud data into 2D image data. To use this descriptor effectively, it is necessary to perform a clustering operation that separates the point clouds for each object, for which we developed voxel-based clustering. Using the descriptor, 3D point cloud data can be converted into a 2D feature image, and the converted 2D image is provided as an input to the network. We verify the validity of the proposed 3D point cloud feature descriptor using experimental data in the simulator and explore the feasibility of real-time object classification within this framework.
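The 3D-to-2D descriptor idea can be sketched as a fixed-size bird's-eye-view projection of one clustered object, as below; the grid size, metric extent and max-height encoding are assumptions rather than the paper's descriptor.

```python
import numpy as np

def cloud_to_image(points, img_size=64, extent=10.0):
    """Project an (N,3) object cluster to an img_size x img_size max-height image."""
    img = np.zeros((img_size, img_size), dtype=np.float32)
    ij = ((points[:, :2] + extent) / (2.0 * extent) * img_size).astype(int)   # metres -> pixels
    keep = (ij >= 0).all(axis=1) & (ij < img_size).all(axis=1)
    for (i, j), h in zip(ij[keep], points[keep, 2]):
        img[j, i] = max(img[j, i], h)                  # encode the highest return per cell
    return img
```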
Tunnel deformation monitoring is a crucial task for evaluating tunnel stability during the metro operation period. Terrestrial Laser Scanning (TLS), as an innovative technique, can collect high-density and high-accuracy point cloud data in a few minutes, which makes it a promising tool for tunnel deformation monitoring. Here, an efficient method for extracting tunnel cross-sections and performing convergence analysis using dense TLS point cloud data is proposed. First, the tunnel orientation is determined using principal component analysis (PCA) in the Euclidean plane. Two control points are introduced to detect and remove unsuitable points by using point cloud division, and the ground points are then removed by defining an elevation band of 0.5 m. Next, a z-score method is introduced to detect and remove outliers. Because the standard shape of a tunnel cross-section is round, circle fitting is implemented using the least-squares method. Afterward, the convergence analysis is performed at angles of 0°, 30° and 150°. The feasibility of the proposed approach is tested on a TLS point cloud of a Nanjing subway tunnel acquired with a FARO X330 laser scanner. The results indicate that the proposed methodology achieves an overall accuracy of 1.34 mm, which is in agreement with the measurements acquired by a total station instrument. The proposed methodology provides new insights and references for the application of TLS in tunnel deformation monitoring and can also be extended to other engineering applications.
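Two of the named steps, z-score outlier removal and least-squares circle fitting, can be sketched as follows; the circle fit uses the algebraic (Kasa) formulation, which may differ from the paper's exact least-squares variant.

```python
import numpy as np

def zscore_filter(values, thresh=2.5):
    """Boolean mask of inliers under a simple z-score test."""
    z = (values - values.mean()) / values.std()
    return np.abs(z) < thresh

def fit_circle(xy):
    """Algebraic (Kasa) least-squares circle fit; xy: (N,2) cross-section points."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return cx, cy, np.sqrt(c + cx ** 2 + cy ** 2)      # centre and radius
```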
Data augmentation plays an important role in boosting the performance of 3D models, yet very few studies handle 3D point cloud data with this technique. Global augmentation and cut-paste are commonly used augmentation techniques for point clouds: global augmentation is applied to the entire point cloud of the scene, and cut-paste samples objects from other frames into the current frame. Both types of data augmentation can improve performance, but the cut-paste technique cannot effectively handle the occlusion relationship between foreground objects and the background scene or the rationality of object sampling, which may be counterproductive and hurt overall performance. In addition, LiDAR is susceptible to signal loss, external occlusion, extreme weather and other factors that can easily cause object shape changes, and global augmentation and cut-paste cannot effectively enhance the robustness of the model against them. To this end, we propose Syn-Aug, a synchronous data augmentation framework for LiDAR-based 3D object detection. Specifically, we first propose a novel rendering-based object augmentation technique (Ren-Aug) to enrich training data while enhancing scene realism. Second, we propose a local augmentation technique (Local-Aug) to generate local noise by rotating and scaling objects in the scene while avoiding collisions, which improves generalisation performance. Finally, we make full use of the structural information of 3D labels to make the model more robust by randomly changing the geometry of objects in the training frames. We verify the proposed framework with four different types of 3D object detectors. Experimental results show that Syn-Aug significantly improves the performance of various 3D object detectors on the KITTI and nuScenes datasets, proving its effectiveness and generality. On KITTI, four different types of baseline models using Syn-Aug improved mAP by 0.89%, 1.35%, 1.61% and 1.14% respectively; on nuScenes, they improved mAP by 14.93%, 10.42%, 8.47% and 6.81% respectively. The code is available at https://github.com/liuhuaijjin/Syn-Aug.
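A hedged sketch in the spirit of Local-Aug is shown below: one object's points and box are jittered by a small rotation and scale, and the move is rejected if it collides with another box. Collision is approximated by axis-aligned bounding-box overlap, and the jitter ranges are illustrative assumptions.

```python
import numpy as np

def aabb_overlap(a, b):
    """a, b: [cx, cy, cz, dx, dy, dz]; True if the axis-aligned boxes intersect."""
    return bool(np.all(a[:3] - a[3:] / 2 < b[:3] + b[3:] / 2) and
                np.all(b[:3] - b[3:] / 2 < a[:3] + a[3:] / 2))

def local_aug(points, mask, box, other_boxes, rng, max_rot=np.pi / 18, scale=(0.95, 1.05)):
    """points: (N,3); mask: this object's points; box: [cx,cy,cz,dx,dy,dz,yaw] as an array."""
    ang, s = rng.uniform(-max_rot, max_rot), rng.uniform(*scale)
    c, sn = np.cos(ang), np.sin(ang)
    R = np.array([[c, -sn, 0], [sn, c, 0], [0, 0, 1]])         # rotation about the vertical axis
    new_box = box.copy()
    new_box[3:6] *= s
    new_box[6] += ang
    if any(aabb_overlap(new_box[:6], o[:6]) for o in other_boxes):
        return points, box                                     # reject moves that collide
    out = points.copy()
    out[mask] = (points[mask] - box[:3]) @ R.T * s + box[:3]   # rotate/scale about the box centre
    return out, new_box
```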
In order to enhance modeling efficiency and accuracy, we utilized 3D laser point cloud data for indoor space modeling. Point cloud data was obtained with a 3D laser scanner and optimized with Autodesk Recap and Revit software to extract geometric information about the indoor environment. Furthermore, we proposed a method for constructing indoor elements based on parametric components. The research outcomes of this paper will offer new methods and tools for indoor space modeling and design. The approach of indoor space modeling based on 3D laser point cloud data and parametric component construction can enhance modeling efficiency and accuracy, providing architects, interior designers, and decorators with a better working platform and design reference.
The key components of a transmission line include the tower body, conductors, insulators, overhead ground wires, and jumper wires; the primary task for fine-grained UAV navigation is to build a point cloud map of the transmission line and segment these components from it. To address the low accuracy of existing algorithms when segmenting fine structures such as insulators and jumper wires, a point cloud segmentation method for the fine structures of transmission lines is proposed by improving the PointNet++ algorithm. First, a transmission line point cloud segmentation dataset is constructed from point cloud data collected in the field by a UAV-borne LiDAR system. Second, data augmentation methods suitable for the transmission line scenario are selected through comparative experiments and applied to the dataset. Finally, a semantic segmentation algorithm for the key components of transmission lines is designed by combining a self-attention mechanism and an inverted residual structure with PointNet++. Experimental results show that, with full-scene field point cloud data of transmission lines as input, the improved PointNet++ algorithm achieves, for the first time, simultaneous segmentation of fine structures such as jumper wires and insulators together with conductors, tower bodies, and background points unrelated to the transmission line, reaching a mean intersection over union (mIoU) of 80.79% and an average F1 score of 88.99% over all classes.
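A hedged PyTorch sketch of the two named ingredients, an inverted residual point-wise MLP and single-head self-attention over per-point features, is given below; the channel handling and placement within PointNet++ are assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class InvertedResidualMLP(nn.Module):
    """Expand-then-project point-wise MLP with a residual skip, applied to (B, C, N) features."""
    def __init__(self, channels, expansion=4):
        super().__init__()
        hidden = channels * expansion
        self.net = nn.Sequential(
            nn.Conv1d(channels, hidden, 1), nn.BatchNorm1d(hidden), nn.ReLU(inplace=True),
            nn.Conv1d(hidden, channels, 1), nn.BatchNorm1d(channels),
        )

    def forward(self, x):
        return torch.relu(x + self.net(x))

class PointSelfAttention(nn.Module):
    """Single-head self-attention over the points of one sample, one token per point."""
    def __init__(self, channels):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads=1, batch_first=True)

    def forward(self, x):                     # x: (B, C, N)
        f = x.transpose(1, 2)                 # (B, N, C): points as tokens
        out, _ = self.attn(f, f, f)
        return (f + out).transpose(1, 2)
```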
As the point cloud of a whole vehicle body has the traits of large geometric dimensions, huge data volume and rigorous reverse-engineering precision, a pretreatment algorithm for automobile body point clouds is put forward. The basic idea of the registration algorithm based on skeleton points is to construct the skeleton points of the whole vehicle model and the mark points of the separate point clouds, to search the mapping relationship between skeleton points and mark points using a congruent triangle method, and to match the whole vehicle point cloud using an improved iterative closest point (ICP) algorithm. The data reduction algorithm, based on the average square root of distance, condenses data in three steps: computing the average square root of distance of the dataset within a sampling cube grid, sorting according to the value computed in the first step, and choosing a sampling percentage. The accuracy of the two algorithms above is proved by a registration and reduction example on the whole vehicle point cloud of a certain light truck.
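A rough sketch in the spirit of the three-step reduction is given below: points are bucketed into cubic cells, each cell is scored by the mean distance of its points to the cell centroid, and a chosen percentage is kept after ranking. The scoring and the ranking direction are assumptions about the paper's criterion.

```python
import numpy as np

def grid_reduce(points, cell=0.05, keep_ratio=0.3):
    """Rank points by a per-cell dispersion score and keep the chosen percentage."""
    keys = np.floor(points / cell).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    scores = np.empty(len(points))
    for c in np.unique(inv):
        idx = np.where(inv == c)[0]
        centroid = points[idx].mean(axis=0)
        scores[idx] = np.linalg.norm(points[idx] - centroid, axis=1).mean()
    order = np.argsort(scores)[::-1]          # keep geometrically varied regions first (an assumption)
    return points[order[: int(keep_ratio * len(points))]]
```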
With the rapid development of reality capture methods, such as laser scanning and oblique photogrammetry, point cloud data have become the third most important data source, after vector maps and imagery. Point cloud data also play an increasingly important role in scientific research and engineering in the fields of Earth science, spatial cognition, and smart cities. However, how to acquire high-quality three-dimensional (3D) geospatial information from point clouds has become a scientific frontier, for which there is an urgent demand in the fields of surveying and mapping, as well as geoscience applications. To address the challenges mentioned above, point cloud intelligence came into being. This paper summarizes the state-of-the-art of point cloud intelligence, with regard to acquisition equipment, intelligent processing, scientific research, and engineering applications. For this purpose, we refer to a recent project on the hybrid georeferencing of images and LiDAR data for high-quality point cloud collection, as well as a current benchmark for the semantic segmentation of high-resolution 3D point clouds. These projects were conducted at the Institute for Photogrammetry, the University of Stuttgart, which was initially headed by the late Prof. Ackermann. Finally, the development prospects of point cloud intelligence are summarized.
Data augmentation is a widely used regularization strategy in deep neural networks to mitigate overfitting and enhance generalization. In the context of point cloud data, mixing two samples to generate new training examples has proven to be effective. In this paper, we propose a novel and effective approach called Farthest Point Sampling Mix (FPSMix) for augmenting point cloud data. Our method leverages farthest point sampling, a technique used in point cloud processing, to generate new samples by mixing points from two original point clouds. Another key innovation of our approach is the introduction of a significance-based loss function, which assigns weights to the soft labels of the mixed samples based on the classification loss of each part of the new sample that is separated from the two original point clouds. This way, our method takes into account the importance of different parts of the mixed sample during the training process, allowing the model to learn better global features. Experimental results demonstrate that our FPSMix, combined with the significance-based loss function, improves the classification accuracy of point cloud models and achieves comparable performance with state-of-the-art data augmentation methods. Moreover, our approach is complementary to techniques that focus on local features, and their combined use further enhances the classification accuracy of the baseline model.
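Standard farthest point sampling, the building block FPSMix relies on, can be written as below; the mixing itself and the significance-based loss are not reproduced.

```python
import numpy as np

def farthest_point_sampling(points, k, seed=0):
    """Return indices of k points that greedily maximise mutual distance."""
    rng = np.random.default_rng(seed)
    chosen = np.empty(k, dtype=np.int64)
    chosen[0] = rng.integers(len(points))
    dist = np.linalg.norm(points - points[chosen[0]], axis=1)
    for i in range(1, k):
        chosen[i] = dist.argmax()             # farthest point from the current sample set
        dist = np.minimum(dist, np.linalg.norm(points - points[chosen[i]], axis=1))
    return chosen
```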
Recent applications of digital photogrammetry in forestry have highlighted its utility as a viable mensuration technique. However, in tropical regions little research has been done on the accuracy of this approach for stem volume calculation. In this study, the performance of Structure from Motion photogrammetry for estimating individual tree stem volume was evaluated in relation to traditional approaches. We selected 30 trees from five savanna species growing at the periphery of the W National Park in northern Benin and measured their circumferences at different heights using a traditional tape and clinometer. Stem volumes of the sample trees were estimated from the measured circumferences using nine volumetric formulae for solids of revolution, including the cylinder, cone, paraboloid, neiloid and their respective frustums. Each tree was photographed and its stem volume determined using a taper function derived from three-dimensional stem models. This reference volume was compared with the results of the formulaic estimations. Tree stem profiles were further decomposed into different portions, approximately corresponding to the stump, butt logs and logs, and the suitability of each solid of revolution was assessed for simulating the resulting shapes. Stem volumes calculated using the frustum of paraboloid and frustum of neiloid formulae were the closest to the reference volumes, with a bias and root mean square error of 8.0% and 24.4%, respectively. Stems closely resembled frustums of a paraboloid and a neiloid. Individual stem portions assumed different solids as follows: frustums of a paraboloid and a neiloid were more prevalent from the stump to breast height, while a paraboloid closely matched stem shapes beyond this point. Therefore, a more accurate stem volume estimate was attained when stems were considered as a composite of at least three geometric solids.
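The classical frustum volume formulas referenced above can be written out as follows, for lower and upper cross-sectional areas a1 and a2 and section length L; these are textbook forest-mensuration forms (e.g., Smalian's formula for the frustum of a paraboloid), and the paper's exact variants may differ.

```python
def frustum_volume(a1, a2, length, solid="paraboloid"):
    """a1, a2: lower and upper cross-sectional areas; length: section length."""
    if solid == "paraboloid":                 # Smalian's formula
        return length * (a1 + a2) / 2.0
    if solid == "cone":
        return length * (a1 + a2 + (a1 * a2) ** 0.5) / 3.0
    if solid == "neiloid":
        return length * (a1 + (a1 * a1 * a2) ** (1 / 3) + (a1 * a2 * a2) ** (1 / 3) + a2) / 4.0
    raise ValueError(f"unknown solid: {solid}")
```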