Rock discontinuities control rock mechanical behaviors and significantly influence the stability of rock masses. However, existing discontinuity mapping algorithms are susceptible to noise, and the calculation results cannot be fed back to users in a timely manner. To address this issue, we proposed a human-machine interaction (HMI) method for discontinuity mapping. Users can help the algorithm identify the noise and make real-time result judgments and parameter adjustments. A regular cube was selected to illustrate the workflow: (1) the point cloud was acquired using remote sensing; (2) the HMI method was employed to select reference points and angle thresholds to detect group discontinuities; (3) individual discontinuities were extracted from the group discontinuities using a density-based clustering algorithm; and (4) the orientation of each discontinuity was measured based on a plane-fitting algorithm. The method was applied to a well-studied highway road cut and a complex natural slope. The consistency of the computational results with field measurements demonstrates its good accuracy, and the average error in the dip direction and dip angle for both cases was less than 3°. Finally, the computational time of the proposed method was compared with that of two other popular algorithms, and a reduction in computational time by tens of times proves its high computational efficiency. This method provides geologists and geological engineers with a new way to rapidly and accurately map rock structures in the presence of heavy noise or unclear features.
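The final plane-fitting step yields a normal vector per discontinuity, from which dip direction and dip angle follow. A minimal sketch of that conversion, assuming the common geological convention (x = east, y = north, z = up; azimuth clockwise from north) rather than the paper's exact implementation:

```python
import math

def orientation_from_normal(nx, ny, nz):
    """Dip direction (azimuth, deg) and dip angle (deg) from a plane normal.

    Convention assumed here: x = east, y = north, z = up; the normal is
    flipped upward if needed so its horizontal projection points toward
    the dip direction.
    """
    if nz < 0:  # use the upward-pointing normal
        nx, ny, nz = -nx, -ny, -nz
    dip = math.degrees(math.atan2(math.hypot(nx, ny), nz))
    dip_direction = math.degrees(math.atan2(nx, ny)) % 360.0
    return dip_direction, dip

# A plane dipping 30 degrees toward the east (dip direction 90 degrees)
dd, dip = orientation_from_normal(math.tan(math.radians(30)), 0.0, 1.0)
```

With this convention, comparing two orientations reduces to comparing two (azimuth, dip) pairs, which is how errors "less than 3°" are typically reported.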
Large-scale point cloud datasets form the basis for training various deep learning networks and achieving high-quality network processing tasks. To satisfy the diversity and robustness requirements of the data, data augmentation (DA) methods are used to expand dataset diversity and scale. However, because LiDAR point cloud data from different platforms (such as missile-borne and vehicular LiDAR) have complex and distinct characteristics, directly applying traditional 2D visual-domain DA methods to 3D data can yield networks that do not robustly accomplish the corresponding tasks. To address this issue, the present study explores DA for missile-borne LiDAR point clouds using a Monte Carlo (MC) simulation method that closely resembles practical application. First, a model of the multi-sensor imaging system is established, taking into account the joint errors arising from the platform itself and the relative motion during the imaging process. A distortion simulation method based on MC simulation for augmenting missile-borne LiDAR point cloud data is then proposed, underpinned by an analysis of the combined errors between sensors of different modalities, achieving high-quality augmentation of point cloud data. The effectiveness of the proposed method in addressing imaging system errors and distortion simulation is validated using the imaging scene dataset constructed in this paper. Comparative experiments between the proposed point cloud DA algorithm and current state-of-the-art algorithms on point cloud detection and single-object tracking tasks demonstrate that the proposed method improves the performance of networks trained on unaugmented datasets by over 17.3% and 17.9%, respectively, surpassing the SOTA performance of current point cloud DA algorithms.
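The core MC idea of drawing pose errors from a sensor error model and applying them as rigid perturbations can be sketched generically. The function below is a toy stand-in, not the paper's distortion model; the yaw/translation standard deviations are made-up illustrative parameters:

```python
import math
import random

def mc_pose_jitter(points, sigma_yaw_deg=0.5, sigma_t=0.05, rng=None):
    """Monte Carlo-style augmentation: apply one random rigid perturbation
    (yaw rotation + translation) drawn from a Gaussian error model to a
    whole point cloud, mimicking platform pose error. All parameters are
    illustrative assumptions, not taken from the paper."""
    rng = rng or random.Random()
    yaw = math.radians(rng.gauss(0.0, sigma_yaw_deg))
    tx, ty, tz = (rng.gauss(0.0, sigma_t) for _ in range(3))
    c, s = math.cos(yaw), math.sin(yaw)
    return [(c * x - s * y + tx, s * x + c * y + ty, z + tz)
            for x, y, z in points]

cloud = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
augmented = mc_pose_jitter(cloud, rng=random.Random(42))
```

Repeating the draw many times with a seeded generator produces a family of plausibly distorted copies of one scan, which is the essence of MC-based augmentation.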
Airborne LiDAR (Light Detection and Ranging) is an evolving active remote sensing technology that can acquire large-area topographic data and quickly generate DEM (Digital Elevation Model) products. Combined with image data, this technology can further enrich extracted spatial geographic information. In practice, however, owing to the limited operating range of airborne LiDAR and the large area of a typical task, registration and stitching of the point clouds of adjacent flight strips are necessary. By eliminating gross errors, the systematic errors in the data can be effectively reduced. This paper therefore investigates point cloud registration methods in urban building areas, aiming to improve the accuracy and processing efficiency of airborne LiDAR data. An improved post-ICP (Iterative Closest Point) registration method is proposed to achieve accurate registration and efficient stitching of point clouds, providing potential technical support for practitioners in related fields.
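The inner step of any ICP variant is a closed-form least-squares rigid alignment of matched point pairs. A minimal 2D sketch of that step (centroids plus an `atan2` of summed cross/dot products), shown as a generic illustration rather than the paper's improved post-ICP method:

```python
import math

def best_rigid_2d(src, dst):
    """One ICP-style step: least-squares rotation + translation aligning
    paired 2D points src -> dst, solved in closed form."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    sdot = scross = 0.0
    for (x, y), (u, v) in zip(src, dst):
        ax, ay = x - csx, y - csy
        bx, by = u - cdx, v - cdy
        sdot += ax * bx + ay * by     # sum of dot products
        scross += ax * by - ay * bx   # sum of cross products
    theta = math.atan2(scross, sdot)  # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)    # translation after rotating src centroid
    ty = cdy - (s * csx + c * csy)
    return theta, (tx, ty)

src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
# dst is src rotated by 90 degrees about the origin
dst = [(0.0, 0.0), (0.0, 1.0), (-1.0, 0.0)]
theta, t = best_rigid_2d(src, dst)
```

Full ICP iterates this step with nearest-neighbour correspondence updates; strip-to-strip registration adds outlier rejection on top.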
The spatial distribution of discontinuities and the size of rock blocks are key indicators for rock mass quality evaluation and rockfall risk assessment. Traditional manual measurement is often dangerous or infeasible on high and steep rock slopes. In contrast, unmanned aerial vehicle (UAV) photogrammetry is not limited by terrain conditions and can efficiently collect high-precision three-dimensional (3D) point clouds of rock masses through all-round, multi-angle photography for rock mass characterization. In this paper, a new method based on a 3D point cloud is proposed for discontinuity identification and refined rock block modeling. The method consists of four steps: (1) establish a point cloud spatial topology, and calculate the point cloud normal vectors and average point spacing based on several machine learning algorithms; (2) extract discontinuities using the density-based spatial clustering of applications with noise (DBSCAN) algorithm and fit the discontinuity planes by combining principal component analysis (PCA) with the natural breaks (NB) method; (3) generate an embedded discontinuity point cloud by inserting points along line segments; and (4) adopt a Poisson reconstruction method for refined rock block modeling. The proposed method was applied to an outcrop of an ultra-high steep rock slope and compared with the results of previous studies and manual surveys. The results show that the method can eliminate the influence of discontinuity undulations on orientation measurement and describe local concave-convex characteristics in the modeling of rock blocks. The calculation results are accurate and reliable and can meet practical engineering requirements.
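Step (2) relies on DBSCAN, which groups points by density and marks sparse points as noise. A minimal brute-force version (a generic textbook implementation, not the paper's tuned pipeline; real systems replace the O(n²) neighbour query with a spatial index):

```python
def dbscan(points, eps, min_pts):
    """Minimal brute-force DBSCAN over 3D points: returns a label per
    point (-1 = noise, 0..k-1 = cluster ids). Illustrative only."""
    def neighbours(i):
        px, py, pz = points[i]
        return [j for j, (x, y, z) in enumerate(points)
                if (x - px) ** 2 + (y - py) ** 2 + (z - pz) ** 2 <= eps * eps]

    labels = [None] * len(points)
    cid = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbours(i)
        if len(nbrs) < min_pts:
            labels[i] = -1           # provisionally noise
            continue
        labels[i] = cid
        seeds = list(nbrs)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cid      # noise point reachable from a core point
            if labels[j] is not None:
                continue
            labels[j] = cid
            jn = neighbours(j)
            if len(jn) >= min_pts:   # j is itself a core point: expand
                seeds.extend(jn)
        cid += 1
    return labels

pts = [(0, 0, 0), (0.1, 0, 0), (0, 0.1, 0),
       (5, 5, 5), (5.1, 5, 5), (5, 5.1, 5), (20, 20, 20)]
labels = dbscan(pts, eps=0.5, min_pts=3)
```

Here the two dense triplets form clusters 0 and 1 and the isolated point is labelled noise, mirroring how co-planar discontinuity points separate from scatter.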
This paper presents an automated method for discontinuity trace mapping using three-dimensional point clouds of rock mass surfaces. Specifically, the method consists of five steps: (1) detection of trace feature points by normal tensor voting theory, (2) contraction of trace feature points, (3) connection of trace feature points, (4) linearization of trace segments, and (5) connection of trace segments. A sensitivity analysis was then conducted to identify the optimal parameters of the proposed method. Three field cases, a natural rock mass outcrop and two excavated rock tunnel surfaces, were analyzed using the proposed method to evaluate its validity and efficiency. The results show that the proposed method is more efficient and accurate than the traditional trace mapping method, and the efficiency enhancement becomes more pronounced as the number of feature points increases.
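Step (4), linearizing a chain of trace feature points, is commonly done with polyline simplification. The Ramer-Douglas-Peucker algorithm below is a classic stand-in for such a linearization step, not the paper's specific procedure:

```python
import math

def douglas_peucker(pts, tol):
    """Ramer-Douglas-Peucker simplification of a 2D polyline: recursively
    keep the point farthest from the end-to-end chord while it exceeds
    the tolerance."""
    if len(pts) < 3:
        return list(pts)
    (x1, y1), (x2, y2) = pts[0], pts[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy) or 1.0
    imax, dmax = 0, -1.0
    for i in range(1, len(pts) - 1):
        # perpendicular distance from pts[i] to the chord
        d = abs(dy * (pts[i][0] - x1) - dx * (pts[i][1] - y1)) / norm
        if d > dmax:
            imax, dmax = i, d
    if dmax <= tol:
        return [pts[0], pts[-1]]
    left = douglas_peucker(pts[:imax + 1], tol)
    right = douglas_peucker(pts[imax:], tol)
    return left[:-1] + right

line = [(0, 0), (1, 0.05), (2, -0.04), (3, 2.0), (4, 2.02), (5, 2.0)]
simplified = douglas_peucker(line, tol=0.1)
```

The wiggly six-point chain collapses to four vertices: the small jitter is absorbed, while the genuine bend near x = 2-3 is kept.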
Gobi spans a large area of China, surpassing the combined expanse of mobile dunes and semi-fixed dunes. Its presence significantly influences the movement of sand and dust. However, the complex origins and diverse materials constituting the Gobi result in notable differences in saltation processes across various Gobi surfaces, and it is challenging to describe these processes with a uniform morphology. It therefore becomes imperative to articulate surface characteristics through parameters such as the three-dimensional (3D) size and shape of gravel. Collecting morphological information on Gobi gravels is essential for studying their genesis and sand saltation. To enhance the efficiency and information yield of gravel parameter measurements, this study conducted field experiments in March 2023 in the Gobi region across Dunhuang City, Guazhou County, and Yumen City (administered by Jiuquan City), Gansu Province, China. A research framework and methodology for measuring the 3D parameters of gravel using point clouds were developed, alongside improved calculation formulas for 3D parameters including gravel grain size, volume, flatness, roundness, sphericity, and equivalent grain size. Leveraging multi-view geometry for 3D reconstruction allowed an optimal data acquisition scheme to be established, characterized by high point cloud reconstruction efficiency and clear quality. Additionally, the proposed methodology incorporated point cloud clustering, segmentation, and filtering techniques to isolate individual gravel point clouds. Advanced point cloud algorithms, including the Oriented Bounding Box (OBB), the point cloud slicing method, and point cloud triangulation, were then deployed to calculate the 3D parameters of individual gravels. These systematic processes allow precise and detailed characterization of individual gravels. For gravel grain size and volume, the correlation coefficients between point cloud and manual measurements all exceeded 0.9000, confirming the feasibility of the proposed methodology for measuring the 3D parameters of individual gravels. The proposed workflow yields accurate calculations of relevant parameters for Gobi gravels, providing essential data support for subsequent studies of Gobi environments.
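Once an OBB gives the long, intermediate, and short axes a ≥ b ≥ c, standard shape descriptors follow directly. The sketch below uses textbook definitions (Krumbein intercept sphericity, an ellipsoid-equivalent volume, c/b flatness); the paper's improved formulas may differ:

```python
import math

def gravel_shape(a, b, c):
    """Common 3D shape descriptors from the long (a), intermediate (b)
    and short (c) bounding-box axes of a gravel. Textbook definitions,
    not the paper's improved formulas."""
    assert a >= b >= c > 0
    sphericity = (b * c / a ** 2) ** (1.0 / 3.0)   # Krumbein intercept sphericity
    flatness = c / b                               # one common flatness index
    eq_size = (a * b * c) ** (1.0 / 3.0)           # equivalent grain size
    volume = math.pi * a * b * c / 6.0             # fitted-ellipsoid volume
    return {"sphericity": sphericity, "flatness": flatness,
            "equivalent_size": eq_size, "volume": volume}

shape = gravel_shape(4.0, 2.0, 1.0)
```

For a strongly elongated 4 x 2 x 1 clast, sphericity is 0.5 and the equivalent grain size is 2.0, matching intuition that the clast behaves like a 2-cm sphere in saltation terms.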
A new object-oriented method has been developed for the extraction of Mars rocks from Mars rover data. It is based on a combination of Mars rover imagery and 3D point cloud data. First, Navcam or Pancam images taken by the Mars rovers are segmented into homogeneous objects with a mean-shift algorithm. Then, the objects in the segmented images are classified into small rock candidates, rock shadows, and large objects. Rock shadows and large objects are considered as the regions within which large rocks may exist. In these regions, large rock candidates are extracted through ground-plane fitting with the 3D point cloud data. Small and large rock candidates are combined and postprocessed to obtain the final rock extraction results. The shape properties of the rocks (angularity, circularity, width, height, and width-height ratio) have been calculated for subsequent geological studies.
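Ground-plane fitting in cluttered 3D data is usually robustified with RANSAC: sample three points, build a candidate plane, and keep the plane with the most inliers. A toy version under illustrative thresholds (not the paper's exact fitting procedure):

```python
import random

def ransac_plane(points, n_iter=200, tol=0.05, rng=None):
    """Toy RANSAC plane fit for ground estimation from 3D points.
    Returns ((nx, ny, nz, d), inlier_count) for plane n.x + d = 0."""
    rng = rng or random.Random(0)
    best, best_inliers = None, -1
    for _ in range(n_iter):
        p, q, r = rng.sample(points, 3)
        ux, uy, uz = (q[i] - p[i] for i in range(3))
        vx, vy, vz = (r[i] - p[i] for i in range(3))
        # plane normal = cross product of two in-plane edges
        nx, ny, nz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
        norm = (nx * nx + ny * ny + nz * nz) ** 0.5
        if norm < 1e-12:
            continue                 # degenerate (collinear) sample
        nx, ny, nz = nx / norm, ny / norm, nz / norm
        d = -(nx * p[0] + ny * p[1] + nz * p[2])
        inliers = sum(abs(nx * x + ny * y + nz * z + d) <= tol
                      for x, y, z in points)
        if inliers > best_inliers:
            best, best_inliers = (nx, ny, nz, d), inliers
    return best, best_inliers

# flat ground at z = 0 plus two "rock" points above it
pts = [(i * 0.3, j * 0.3, 0.0) for i in range(5) for j in range(5)]
pts += [(0.5, 0.5, 0.4), (1.0, 1.0, 0.6)]
plane, n_in = ransac_plane(pts)
```

Points well above the fitted plane (the two non-inliers here) are exactly the "sticking out of the ground" candidates that rock extraction keeps.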
For the first time, this article introduces a LiDAR Point Clouds Dataset of Ships composed of both collected and simulated data to address the scarcity of LiDAR data in maritime applications. The collected data are acquired using specialized maritime LiDAR sensors in both inland waterways and wide-open ocean environments. The simulated data are generated by placing a ship in the LiDAR coordinate system and scanning it with a redeveloped Blensor that emulates the operation of a LiDAR sensor equipped with various laser beams. Furthermore, we also render point clouds for foggy and rainy weather conditions. To describe a realistic shipping environment, a dynamic tail wave is modeled by iterating the wave elevation of each point in a time series. Finally, networks designed for small objects are migrated to ship applications by training on our dataset. The positive effect of the simulated data is demonstrated in object detection experiments, and the negative impact of tail waves as noise is verified in single-object tracking experiments. The dataset is available at https://github.com/zqy411470859/ship_dataset.
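"Iterating the wave elevation of each point in a time series" can be illustrated with the simplest possible water-surface model: a travelling sinusoid z(x, y, t). This is a toy stand-in with made-up default parameters, not the paper's tail-wave model:

```python
import math

def wave_elevation(x, y, t, amplitude=0.2, wavelength=5.0,
                   speed=1.5, heading=0.0):
    """Toy sinusoidal water-surface elevation z(x, y, t).
    All defaults are illustrative assumptions."""
    k = 2.0 * math.pi / wavelength                  # wave number
    dx, dy = math.cos(heading), math.sin(heading)   # propagation direction
    phase = k * (dx * x + dy * y) - k * speed * t
    return amplitude * math.sin(phase)

# elevate a flat patch of points at two successive time steps
patch = [(x * 0.5, 0.0, 0.0) for x in range(4)]
frames = [[(x, y, wave_elevation(x, y, t)) for x, y, _ in patch]
          for t in (0.0, 0.5)]
```

Replaying such frames under a simulated scanner injects time-varying water returns around the hull, exactly the kind of structured noise the tracking experiments probe.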
Tunnel deformation monitoring is a crucial task for evaluating tunnel stability during the metro operation period. Terrestrial Laser Scanning (TLS) is an innovative technique that can collect high-density, high-accuracy point cloud data in a few minutes, which makes it promising for tunnel deformation monitoring. Here, an efficient method for extracting tunnel cross-sections and performing convergence analysis using dense TLS point cloud data is proposed. First, the tunnel orientation is determined using principal component analysis (PCA) in the Euclidean plane. Two control points are introduced to detect and remove unsuitable points by point cloud division, and the ground points are then removed by defining an elevation band of 0.5 m. Next, a z-score method is introduced to detect and remove outliers. Because the standard shape of a tunnel cross-section is circular, circle fitting is implemented using the least-squares method. Afterward, convergence analysis is carried out at angles of 0°, 30°, and 150°. The feasibility of the proposed approach is tested on a TLS point cloud of a Nanjing subway tunnel acquired with a FARO X330 laser scanner. The results indicate that the proposed methodology achieves an overall accuracy of 1.34 mm, in agreement with measurements acquired by a total station instrument. The proposed methodology provides new insights and references for the application of TLS to tunnel deformation monitoring and can also be extended to other engineering applications.
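The z-score outlier step is simple enough to sketch directly: standardize each value against the sample mean and standard deviation and flag large deviations. The threshold below is illustrative; the paper's cutoff may differ:

```python
import math

def zscore_filter(values, threshold=3.0):
    """Return a keep-mask: True where |z-score| <= threshold.
    The threshold value is an illustrative assumption."""
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    if std == 0:
        return [True] * n
    return [abs(v - mean) / std <= threshold for v in values]

# radial distances of cross-section points from a rough centre estimate;
# the 5.5 m reading is a spurious return
radii = [2.99, 3.01, 3.0, 3.02, 2.98, 3.0, 5.5]
keep = zscore_filter(radii, threshold=2.0)
```

Applied to radial residuals of a cross-section before circle fitting, this discards stray returns (cables, fixtures) that would otherwise bias the least-squares circle.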
Semantic segmentation of 3D point clouds in the railway environment holds significant economic value, but its development is severely hindered by the lack of suitable and specific datasets. Additionally, models trained on existing urban road point cloud datasets generalise poorly to railway data due to a large domain gap caused by non-overlapping special/rare categories, for example, rail track, track bed, etc. To harness the potential of supervised learning methods in the domain of 3D railway semantic segmentation, we introduce RailPC, a new point cloud benchmark. RailPC provides a large-scale dataset with rich annotations for semantic segmentation in the railway environment. Notably, RailPC contains twice the number of annotated points of the largest available mobile laser scanning (MLS) point cloud dataset and is the first railway-specific 3D dataset for semantic segmentation. It covers nearly 25 km of railway in two different scenes (urban and mountain), with 3 billion points finely labelled into the 16 most typical railway classes; the data acquisition was completed in China by MLS systems. Through extensive experimentation, we evaluate the performance of advanced scene understanding methods on the annotated dataset and present a synthetic analysis of the semantic segmentation results. Based on our findings, we identify some critical challenges for railway-scale point cloud semantic segmentation. The dataset is available at https://github.com/NNU-GISA/GISA-RailPC, and we will continuously update it based on community feedback.
Building outline extraction from segmented point clouds is a critical step of building footprint generation. Existing methods for this task are often based on the convex hull and α-shape algorithms; there are also methods using grids and Delaunay triangulation. The common challenge of these methods is the determination of proper parameters. While deep learning-based methods have shown promise in reducing the impact of and dependence on parameter selection, their reliance on datasets with ground truth limits their generalization. In this study, a novel unsupervised approach, called PH-shape, is proposed to address this challenge. The methods of Persistent Homology (PH) and the Fourier descriptor are introduced into the task of building outline extraction. PH, from the theory of topological data analysis, supports the automatic and adaptive determination of a proper buffer radius, thus enabling parameter-adaptive extraction of building outlines through buffering and “inverse” buffering. Quantitative and qualitative experiments on two datasets with different point densities demonstrate the effectiveness of the proposed approach for various building types, interior boundaries, and density variation within the point cloud of a single building. The PH-supported parameter adaptivity helps the proposed approach overcome the challenges of parameter determination and data variation and achieve reliable extraction of building outlines.
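The convex hull mentioned as the classical baseline is easy to show concretely. Andrew's monotone-chain algorithm below is the textbook method, included as background for the outline-extraction task rather than as part of PH-shape:

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull of 2D points,
    returned counter-clockwise without the repeated first point."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

corners = convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1), (0.5, 0.3)])
```

The hull keeps only the four corner points, which also illustrates why hull-based outlines fail on concave footprints and interior courtyards, the gap that α-shapes and PH-shape address.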
In order to enhance modeling efficiency and accuracy, we utilized 3D laser point cloud data for indoor space modeling. Point cloud data were obtained with a 3D laser scanner and optimized with Autodesk ReCap and Revit software to extract geometric information about the indoor environment. Furthermore, we proposed a method for constructing indoor elements based on parametric components. The research outcomes of this paper offer new methods and tools for indoor space modeling and design. The approach of indoor space modeling based on 3D laser point cloud data and parametric component construction can enhance modeling efficiency and accuracy, providing architects, interior designers, and decorators with a better working platform and design reference.
An integrated processing system for the visualization of three-dimensional laser scanning information in goafs was developed. It provides multiple functions, such as laser scanning information management for goafs, point cloud de-noising and optimization, construction, display, and manipulation of three-dimensional models, model editing, profile generation, calculation of goaf volume and roof area, Boolean operations among models, and interaction with third-party software. The system has a concise interface and plentiful data input/output interfaces, and it features high integration with simple, convenient operation. In practice, the system has proven well-adapted, reliable, and stable.
In the last two decades, significant research has been conducted in the field of automated extraction of rock mass discontinuity characteristics from three-dimensional (3D) models. This has produced several methodologies for acquiring discontinuity measurements from 3D models, such as point clouds generated using laser scanning or photogrammetry. However, even with the numerous automated and semi-automated methods presented in the literature, no single method can automatically characterize discontinuities accurately in a minimum of time. In this paper, we critically review the existing methods proposed in the literature for the extraction of discontinuity characteristics such as joint sets and orientations, persistence, joint spacing, roughness, and block size using point clouds, digital elevation maps, or meshes. As a result of this review, we identify the strengths and drawbacks of each method used for extracting those characteristics. We found that approaches based on voxels and region growing are superior in extracting joint planes from 3D point clouds. Normal tensor voting with a trace growth algorithm is a robust method for measuring joint trace length from 3D meshes. Spacing is estimated by calculating the perpendicular distance between joint planes. Several independent roughness indices have been presented to quantify roughness from 3D surface models, but they still need to be incorporated into automated methodologies. Finally, there is a lack of efficient algorithms for the direct computation of block size from 3D rock mass surface models.
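The spacing estimate the review describes — the perpendicular distance between parallel joint planes — is a one-line formula once both planes n·x + d = 0 share the same normal:

```python
import math

def joint_spacing(n, d1, d2):
    """Perpendicular distance between two parallel planes
    n . x + d1 = 0 and n . x + d2 = 0 (n need not be unit length)."""
    return abs(d2 - d1) / math.sqrt(sum(c * c for c in n))

# two horizontal joints: 2z - 1 = 0 (z = 0.5) and 2z - 4 = 0 (z = 2.0)
s = joint_spacing((0.0, 0.0, 2.0), d1=-1.0, d2=-4.0)
```

In practice the fitted normals of one joint set are averaged first so that all set members share one n before the d-offsets are compared.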
Because the point cloud of a whole vehicle body has the traits of large geometric dimensions, huge data volume, and rigorous reverse-engineering precision, a pretreatment algorithm for automobile body point clouds is put forward. The basic idea of the registration algorithm based on skeleton points is to construct the skeleton points of the whole vehicle model and the mark points of each separate point cloud, to search for the mapping between skeleton points and mark points using a congruent triangle method, and to match the whole vehicle point cloud using an improved iterative closest point (ICP) algorithm. The data reduction algorithm, based on the average square root of distance, condenses the data in three steps: computing the average square root of distance of the points in each sampling cube grid, sorting according to the values computed in the first step, and choosing a sampling percentage. The accuracy of the two algorithms is demonstrated by a registration and reduction example on the whole vehicle point cloud of a light truck.
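The cube-grid reduction idea can be sketched with plain voxel thinning: bucket points into cubic cells and keep one representative per cell. Note this is a generic stand-in; the paper's algorithm additionally ranks cells by their average-square-root-of-distance measure before sampling:

```python
from collections import defaultdict

def voxel_downsample(points, cell):
    """Grid-based thinning: keep the centroid of each occupied cubic cell.
    A generic stand-in for the paper's grid-based reduction."""
    cells = defaultdict(list)
    for p in points:
        key = tuple(int(c // cell) for c in p)   # cell index per axis
        cells[key].append(p)
    out = []
    for group in cells.values():
        n = len(group)
        out.append(tuple(sum(q[i] for q in group) / n for i in range(3)))
    return out

pts = [(0.1, 0.1, 0.0), (0.2, 0.15, 0.0), (3.0, 3.0, 3.0)]
reduced = voxel_downsample(pts, cell=1.0)
```

Two nearby points collapse into one centroid while the distant point survives untouched, which is the basic trade-off any grid reduction makes between density and detail.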
With the rapid development of reality capture methods, such as laser scanning and oblique photogrammetry, point cloud data have become the third most important data source, after vector maps and imagery. Point cloud data also play an increasingly important role in scientific research and engineering in the fields of Earth science, spatial cognition, and smart cities. However, how to acquire high-quality three-dimensional (3D) geospatial information from point clouds has become a scientific frontier, for which there is an urgent demand in the fields of surveying and mapping, as well as geoscience applications. To address these challenges, point cloud intelligence came into being. This paper summarizes the state of the art of point cloud intelligence with regard to acquisition equipment, intelligent processing, scientific research, and engineering applications. For this purpose, we refer to a recent project on the hybrid georeferencing of images and LiDAR data for high-quality point cloud collection, as well as a current benchmark for the semantic segmentation of high-resolution 3D point clouds. These projects were conducted at the Institute for Photogrammetry, University of Stuttgart, which was initially headed by the late Prof. Ackermann. Finally, the development prospects of point cloud intelligence are summarized.
The increasing development of accurate and efficient road three-dimensional (3D) modeling presents great opportunities to improve the data exchange and integration of building information modeling (BIM) models. 3D modeling of road scenes is crucial for reference in asset management, construction, and maintenance. Light detection and ranging (LiDAR) technology is increasingly employed to generate high-quality point clouds for road inventory. In this paper, we specifically investigate the use of LiDAR data for road 3D modeling. The purpose of this review is to survey the existing work on road 3D modeling based on LiDAR point clouds, critically discuss it, and identify challenges for further study. In addition, we introduce modeling standards for roads and discuss the components, types, and distinctions of various LiDAR measurement systems. We then review state-of-the-art methods and provide a detailed examination of road segmentation and feature extraction. Furthermore, we systematically introduce point cloud-based 3D modeling methods, namely parametric modeling and surface reconstruction. Parameters and rules are used to define model components based on geometric and non-geometric information, whereas surface modeling is conducted through the individual faces of the geometry. Finally, we discuss and summarize future research directions in this field. This review can assist researchers in enhancing existing approaches and developing new techniques for road modeling based on LiDAR point clouds.
Recent applications of digital photogrammetry in forestry have highlighted its utility as a viable mensuration technique. However, in tropical regions little research has been done on the accuracy of this approach for stem volume calculation. In this study, the performance of Structure from Motion photogrammetry for estimating individual tree stem volume was evaluated against traditional approaches. We selected 30 trees from five savanna species growing at the periphery of the W National Park in northern Benin and measured their circumferences at different heights using a traditional tape and clinometer. Stem volumes of the sample trees were estimated from the measured circumferences using nine volumetric formulae for solids of revolution, including the cylinder, cone, paraboloid, neiloid, and their respective frustums. Each tree was photographed and its stem volume determined using a taper function derived from three-dimensional stem models. This reference volume was compared with the results of the formulaic estimations. Tree stem profiles were further decomposed into portions approximately corresponding to the stump, butt logs, and logs, and the suitability of each solid of revolution was assessed for simulating the resulting shapes. Stem volumes calculated using the frustums of the paraboloid and neiloid formulae were the closest to the reference volumes, with a bias and root mean square error of 8.0% and 24.4%, respectively. Stems closely resembled frustums of a paraboloid and a neiloid. Individual stem portions assumed different solids as follows: frustums of a paraboloid and a neiloid were more prevalent from the stump to breast height, while a paraboloid closely matched stem shapes beyond this point. Therefore, a more accurate stem volume estimate was attained when stems were considered as a composite of at least three geometric solids.
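Three of the frustum formulae compared in such studies are classic mensuration results, computable from the end cross-sectional areas and the log length. The code shows the textbook cone, paraboloid (Smalian), and neiloid frustum volumes; the study evaluates nine solids in total:

```python
def frustum_volumes(A_b, A_t, h):
    """Textbook frustum volumes from base area A_b, top area A_t and
    length h. Only three of the nine solids compared in the study."""
    cone = h / 3.0 * (A_b + (A_b * A_t) ** 0.5 + A_t)
    paraboloid = h / 2.0 * (A_b + A_t)              # Smalian's formula
    neiloid = h / 4.0 * (A_b + (A_b ** 2 * A_t) ** (1 / 3)
                         + (A_b * A_t ** 2) ** (1 / 3) + A_t)
    return {"cone": cone, "paraboloid": paraboloid, "neiloid": neiloid}

# a 4 m log tapering from 0.12 m^2 to 0.08 m^2 cross-section
v = frustum_volumes(A_b=0.12, A_t=0.08, h=4.0)
```

For the same end areas the neiloid frustum is always the smallest and the paraboloid frustum the largest, which is why mixing solids per stem portion changes the total volume estimate.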
This paper focuses on the effective utilization of data augmentation techniques for 3D lidar point clouds to enhance the performance of neural network models. These point clouds, which represent spatial information through collections of 3D coordinates, have found wide-ranging applications. Data augmentation has emerged as a potent solution to the challenges posed by limited labeled data and the need to enhance model generalization. Much of the existing research is devoted to crafting novel augmentation methods specifically for 3D lidar point clouds, but little attention has been paid to making the most of the numerous existing techniques. Addressing this deficiency, this research investigates the possibility of combining two fundamental data augmentation strategies. The paper introduces PolarMix and Mix3D, two commonly employed augmentation techniques, and presents a new approach named RandomFusion. Instead of using a fixed or predetermined combination of augmentation methods, RandomFusion randomly chooses one method from a pool of options for each instance or sample: each point in the point cloud is augmented with either PolarMix or Mix3D. The experimental results validate the efficacy of the RandomFusion strategy in enhancing the performance of neural network models on 3D lidar point cloud semantic segmentation tasks, without compromising computational efficiency. By examining the potential of merging different augmentation techniques, the research contributes to a more comprehensive understanding of how to utilize existing augmentation methods for 3D lidar point clouds. The RandomFusion technique offers a simple yet effective way to leverage the diversity of augmentation techniques and boost model robustness. The insights gained from this research can pave the way for future work on more advanced and efficient data augmentation strategies for 3D lidar point cloud analysis.
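The random-choice core of RandomFusion is a few lines. The sketch below captures the described strategy with two toy callables standing in for PolarMix and Mix3D (whose real implementations operate on whole labelled scans, not single points):

```python
import random

def random_fusion(points, aug_a, aug_b, rng=None):
    """RandomFusion as described in the abstract: each point is augmented
    by one of two methods chosen uniformly at random. aug_a/aug_b are
    stand-ins for PolarMix and Mix3D."""
    rng = rng or random.Random()
    return [aug_a(p) if rng.random() < 0.5 else aug_b(p) for p in points]

# toy stand-in augmentations: a small shift vs. a mirror flip
shift = lambda p: (p[0] + 0.01, p[1], p[2])
flip = lambda p: (-p[0], p[1], p[2])
out = random_fusion([(1.0, 2.0, 3.0)] * 4, shift, flip,
                    rng=random.Random(7))
```

Because the choice is per sample, the effective training distribution is a mixture of both augmentations with no extra cost per batch, which matches the "no compromise in computational efficiency" claim.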
Swarm robot systems are an important application of autonomous unmanned surface vehicles on water surfaces. For monitoring natural environments and conducting security activities within a certain range using surface vehicles, a swarm robot system is more efficient than a single vehicle, as it can reduce cost and save time. Robust detection of adjacent surface obstacles is necessary to operate a cluster of unmanned surface vehicles. For this purpose, a LiDAR (light detection and ranging) sensor is used, as it can simultaneously obtain 3D information in all directions relatively robustly and accurately, irrespective of the surrounding environmental conditions. Although GPS (Global Positioning System) measurements have a nontrivial error range, measurements of the surface-vessel position can still ensure stability during platoon maneuvering. In this study, a three-layer convolutional neural network is applied to classify types of surface vehicles. The aim of this approach is to recast the sparse 3D point cloud data as 2D image data with connotative meaning and subsequently utilize the transformed data for object classification. Hence, we propose a descriptor that converts 3D point cloud data into 2D image data. To use this descriptor effectively, a clustering operation that separates the point clouds of individual objects is required, for which we developed voxel-based clustering. Using the descriptor, the 3D point cloud data of each cluster are converted into a 2D feature image, and the converted image is provided as input to the network. We verify the validity of the proposed 3D point cloud feature descriptor using experimental data in a simulator and further explore the feasibility of real-time object classification within this framework.
Funding: Supported by the National Key R&D Program of China (No. 2023YFC3081200), the National Natural Science Foundation of China (No. 42077264), and the Scientific Research Project of PowerChina Huadong Engineering Corporation Limited (HDEC-2022-0301).
Abstract: Rock discontinuities control rock mechanical behaviors and significantly influence the stability of rock masses. However, existing discontinuity mapping algorithms are susceptible to noise, and the calculation results cannot be fed back to users in a timely manner. To address this issue, we proposed a human-machine interaction (HMI) method for discontinuity mapping. Users can help the algorithm identify the noise and make real-time result judgments and parameter adjustments. For this, a regular cube was selected to illustrate the workflow: (1) a point cloud was acquired using remote sensing; (2) the HMI method was employed to select reference points and angle thresholds to detect group discontinuities; (3) individual discontinuities were extracted from each group discontinuity using a density-based clustering algorithm; and (4) the orientation of each discontinuity was measured based on a plane fitting algorithm. The method was applied to a well-studied highway road cut and a complex natural slope. The consistency of the computational results with field measurements demonstrates its good accuracy, and the average error in the dip direction and dip angle for both cases was less than 3°. Finally, the computational time of the proposed method was compared with that of two other popular algorithms, and a reduction in computational time by tens of times proves its high computational efficiency. This method provides geologists and geological engineers with a new way to rapidly and accurately map rock structures in the presence of heavy noise or unclear features.
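Step (4) above, measuring orientation from a fitted plane, can be sketched as follows. This is an illustrative reconstruction, not the authors' code; it assumes x = east, y = north, z = up, and a least-squares fit of z = ax + by + c whose normal is converted to dip direction and dip angle.

```python
import math
import numpy as np

def fit_plane_orientation(points):
    """Fit z = a*x + b*y + c and return (dip_direction, dip_angle) in degrees."""
    pts = np.asarray(points, dtype=float)
    A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
    (a, b, _), *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    n = np.array([-a, -b, 1.0])
    n /= np.linalg.norm(n)                      # upward-pointing unit normal
    dip_angle = math.degrees(math.acos(n[2]))   # angle from horizontal plane
    dip_dir = math.degrees(math.atan2(n[0], n[1])) % 360.0  # azimuth from north
    return dip_dir, dip_angle

# Synthetic discontinuity dipping ~26.6 degrees toward the west (270 degrees)
pts = [(x, y, 0.5 * x) for x in range(5) for y in range(5)]
dd, da = fit_plane_orientation(pts)
```

For a clean synthetic plane the fit is exact; on real, noisy clouds a robust fit (e.g. RANSAC) would typically precede this step.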
Funding: Postgraduate Innovation Top-notch Talent Training Project of Hunan Province, Grant/Award Number: CX20220045; Scientific Research Project of National University of Defense Technology, Grant/Award Number: 22-ZZCX-07; New Era Education Quality Project of Anhui Province, Grant/Award Number: 2023cxcysj194; National Natural Science Foundation of China, Grant/Award Numbers: 62201597, 62205372, 1210456; Foundation of Hefei Comprehensive National Science Center, Grant/Award Number: KY23C502.
Abstract: Large-scale point cloud datasets form the basis for training various deep learning networks and achieving high-quality network processing tasks. Due to the diversity and robustness constraints of the data, data augmentation (DA) methods are utilised to expand dataset diversity and scale. However, due to the complex and distinct characteristics of LiDAR point cloud data from different platforms (such as missile-borne and vehicular LiDAR data), directly applying traditional 2D visual-domain DA methods to 3D data can lead to networks trained in this way failing to perform the corresponding tasks robustly. To address this issue, the present study explores DA for missile-borne LiDAR point clouds using a Monte Carlo (MC) simulation method that closely resembles practical application. Firstly, a model of the multi-sensor imaging system is established, taking into account the joint errors arising from the platform itself and the relative motion during the imaging process. A distortion simulation method based on MC simulation for augmenting missile-borne LiDAR point cloud data is then proposed, underpinned by an analysis of the combined errors between sensors of different modalities, achieving high-quality augmentation of point cloud data. The effectiveness of the proposed method in addressing imaging system errors and distortion simulation is validated using the imaging scene dataset constructed in this paper. Comparative experiments between the proposed point cloud DA algorithm and current state-of-the-art algorithms on point cloud detection and single-object tracking tasks demonstrate that the proposed method improves the performance of networks trained on unaugmented datasets by over 17.3% and 17.9%, respectively, surpassing the SOTA performance of current point cloud DA algorithms.
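The pose-error component of such a Monte Carlo distortion simulation can be sketched roughly as below. This is an assumption-laden illustration, not the paper's joint error model: it draws a single rigid pose perturbation per MC sample, and the noise standard deviations are placeholder values.

```python
import numpy as np

def rotation_zyx(yaw, pitch, roll):
    """Rotation matrix from small yaw/pitch/roll angles (radians)."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def mc_pose_perturb(cloud, rng, sigma_ang=0.002, sigma_trans=0.05):
    """One Monte Carlo draw of a rigid platform pose error applied to a cloud."""
    angles = rng.normal(0.0, sigma_ang, size=3)   # rad, placeholder sigma
    t = rng.normal(0.0, sigma_trans, size=3)      # metres, placeholder sigma
    return cloud @ rotation_zyx(*angles).T + t

rng = np.random.default_rng(0)
cloud = rng.uniform(-10, 10, size=(1000, 3))
augmented = mc_pose_perturb(cloud, rng)
```

Repeating the draw many times yields a family of distorted copies of the scene, which is the essence of MC-based augmentation.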
Funding: Guangxi Key Laboratory of Spatial Information and Geomatics (21-238-21-12); Guangxi Young and Middle-aged Teachers' Research Fundamental Ability Enhancement Project (2023KY1196).
Abstract: Airborne LiDAR (Light Detection and Ranging) is an evolving high-tech active remote sensing technology that can acquire large-area topographic data and quickly generate DEM (Digital Elevation Model) products. Combined with image data, this technology can further enrich and extract spatial geographic information. In practice, however, due to the limited operating range of airborne LiDAR and the large area of the task, it is necessary to perform registration and stitching on the point clouds of adjacent flight strips. By eliminating gross errors, the systematic errors in the data can be effectively reduced. This paper therefore investigates point cloud registration methods for urban building areas, aiming to improve the accuracy and processing efficiency of airborne LiDAR data. An improved post-ICP (Iterative Closest Point) point cloud registration method is proposed to achieve accurate registration and efficient stitching of point clouds, providing potential technical support for practitioners in related fields.
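The core alignment step inside each ICP iteration admits a closed-form solution (Kabsch/SVD), sketched below for already-paired points. The paper's improved post-ICP pipeline adds further steps that are not reproduced here.

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Return R, t minimising ||R @ p + t - q|| over paired rows of P and Q."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Recover a known 30-degree rotation about z and a translation
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([1.0, -2.0, 0.5])
P = np.random.default_rng(1).uniform(-5, 5, size=(50, 3))
Q = P @ R_true.T + t_true
R_est, t_est = best_rigid_transform(P, Q)
```

In full ICP, correspondences are re-estimated (nearest neighbours) before each such solve until convergence.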
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 41941017 and 42177139) and the Graduate Innovation Fund of Jilin University (Grant No. 2024CX099).
Abstract: The spatial distribution of discontinuities and the size of rock blocks are key indicators for rock mass quality evaluation and rockfall risk assessment. Traditional manual measurement is often dangerous or infeasible on high and steep rock slopes. In contrast, unmanned aerial vehicle (UAV) photogrammetry is not limited by terrain conditions and can efficiently collect high-precision three-dimensional (3D) point clouds of rock masses through all-round, multi-angle photography for rock mass characterization. In this paper, a new method based on a 3D point cloud is proposed for discontinuity identification and refined rock block modeling. The method consists of four steps: (1) establish a point cloud spatial topology, and calculate the point cloud normal vectors and average point spacing based on several machine learning algorithms; (2) extract discontinuities using the density-based spatial clustering of applications with noise (DBSCAN) algorithm and fit the discontinuity planes by combining principal component analysis (PCA) with the natural breaks (NB) method; (3) insert points along line segments to generate an embedded discontinuity point cloud; and (4) adopt Poisson reconstruction for refined rock block modeling. The proposed method was applied to an outcrop of an ultrahigh steep rock slope and compared with the results of previous studies and manual surveys. The results show that the method can eliminate the influence of discontinuity undulations on orientation measurement and capture local concave-convex characteristics in the modeling of rock blocks. The calculation results are accurate and reliable and can meet practical engineering requirements.
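Step (2) above relies on DBSCAN. A minimal O(n²) sketch of the standard algorithm is shown below for illustration only; implementations for large clouds use spatial indexing, and the epsilon/min-points values here are arbitrary.

```python
def dbscan(points, eps, min_pts):
    """Label points with cluster ids (0, 1, ...) or -1 for noise."""
    n = len(points)
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    neigh = [[j for j in range(n) if dist(points[i], points[j]) <= eps]
             for i in range(n)]
    labels = [None] * n          # None = unvisited
    cluster = -1
    for i in range(n):
        if labels[i] is not None:
            continue
        if len(neigh[i]) < min_pts:
            labels[i] = -1       # noise (may later become a border point)
            continue
        cluster += 1
        labels[i] = cluster
        stack = list(neigh[i])
        while stack:
            j = stack.pop()
            if labels[j] == -1:
                labels[j] = cluster   # noise reclassified as border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            if len(neigh[j]) >= min_pts:
                stack.extend(neigh[j])   # expand only from core points
    return labels

pts = [(0, 0), (0.1, 0), (0.2, 0.1), (5, 5), (5.1, 5), (9, 9)]
labels = dbscan(pts, eps=0.5, min_pts=2)
```

Here the first three points form one cluster, the next two another, and the isolated point is labelled noise.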
Funding: Supported by the Special Fund for Basic Research on Scientific Instruments of the National Natural Science Foundation of China (Grant No. 4182780021), the Emeishan-Hanyuan Highway Program, and the Taihang Mountain Highway Program.
Abstract: This paper presents an automated method for discontinuity trace mapping using three-dimensional point clouds of rock mass surfaces. Specifically, the method consists of five steps: (1) detection of trace feature points by normal tensor voting theory, (2) contraction of trace feature points, (3) connection of trace feature points, (4) linearization of trace segments, and (5) connection of trace segments. A sensitivity analysis was then conducted to identify the optimal parameters of the proposed method. Three field cases, a natural rock mass outcrop and two excavated rock tunnel surfaces, were analyzed using the proposed method to evaluate its validity and efficiency. The results show that the proposed method is more efficient and accurate than the traditional trace mapping method, and the efficiency enhancement becomes more pronounced as the number of feature points increases.
Funding: Funded by the National Natural Science Foundation of China (42071014).
Abstract: The Gobi spans a large area of China, surpassing the combined expanse of mobile dunes and semi-fixed dunes. Its presence significantly influences the movement of sand and dust. However, the complex origins and diverse materials constituting the Gobi result in notable differences in saltation processes across various Gobi surfaces. It is challenging to describe these processes according to a uniform morphology. Therefore, it becomes imperative to articulate surface characteristics through parameters such as the three-dimensional (3D) size and shape of gravel. Collecting morphology information for Gobi gravels is essential for studying its genesis and sand saltation. To enhance the efficiency and information yield of gravel parameter measurements, this study conducted field experiments in March 2023 in the Gobi region across Dunhuang City, Guazhou County, and Yumen City (administered by Jiuquan City), Gansu Province, China. A research framework and methodology for measuring 3D parameters of gravel using point clouds were developed, alongside improved calculation formulas for 3D parameters including gravel grain size, volume, flatness, roundness, sphericity, and equivalent grain size. Leveraging multi-view geometry technology for 3D reconstruction allowed an optimal data acquisition scheme to be established, characterized by high point cloud reconstruction efficiency and clear quality. Additionally, the proposed methodology incorporated point cloud clustering, segmentation, and filtering techniques to isolate individual gravel point clouds. Advanced point cloud algorithms, including the Oriented Bounding Box (OBB), the point cloud slicing method, and point cloud triangulation, were then deployed to calculate the 3D parameters of individual gravels. These systematic processes allow precise and detailed characterization of individual gravels. For gravel grain size and volume, the correlation coefficients between point cloud and manual measurements all exceeded 0.9000, confirming the feasibility of the proposed methodology for measuring 3D parameters of individual gravels. The proposed workflow yields accurate calculations of relevant parameters for Gobi gravels, providing essential data support for subsequent studies on Gobi environments.
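Given the three OBB axis lengths a ≥ b ≥ c of a gravel particle, some of the shape parameters listed above follow from standard formulas. The sketch below uses the Krumbein intercept sphericity, a simple flatness ratio, and a geometric-mean grain size as plausible stand-ins; the paper's improved formulas may differ.

```python
def shape_parameters(a, b, c):
    """Shape descriptors from OBB axis lengths a >= b >= c (same units)."""
    assert a >= b >= c > 0
    sphericity = (b * c / a ** 2) ** (1.0 / 3.0)  # Krumbein intercept sphericity
    flatness = c / b                               # flat particles -> small value
    equivalent_size = (a * b * c) ** (1.0 / 3.0)   # geometric-mean grain size
    return sphericity, flatness, equivalent_size

# An elongated, flattened gravel: 4 x 2 x 1 (e.g. cm)
s, f, d = shape_parameters(4.0, 2.0, 1.0)
```

For this example the sphericity and flatness both come out to 0.5 and the equivalent size to 2.0.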
Funding: Supported by the National Natural Science Foundation of China (Nos. 41171355 and 41002120).
Abstract: A new object-oriented method has been developed for the extraction of Mars rocks from Mars rover data. It is based on a combination of Mars rover imagery and 3D point cloud data. First, Navcam or Pancam images taken by the Mars rovers are segmented into homogeneous objects with a mean-shift algorithm. Then, the objects in the segmented images are classified into small rock candidates, rock shadows, and large objects. Rock shadows and large objects are considered as the regions within which large rocks may exist. In these regions, large rock candidates are extracted through ground-plane fitting with the 3D point cloud data. Small and large rock candidates are combined and postprocessed to obtain the final rock extraction results. The shape properties of the rocks (angularity, circularity, width, height, and width-height ratio) have been calculated for subsequent geological studies.
Funding: Supported by the National Natural Science Foundation of China (62173103) and the Fundamental Research Funds for the Central Universities of China (3072022JC0402, 3072022JC0403).
Abstract: For the first time, this article introduces a LiDAR Point Clouds Dataset of Ships composed of both collected and simulated data to address the scarcity of LiDAR data in maritime applications. The collected data are acquired using specialized maritime LiDAR sensors in both inland waterways and wide-open ocean environments. The simulated data are generated by placing a ship in the LiDAR coordinate system and scanning it with a redeveloped Blensor that emulates the operation of a LiDAR sensor equipped with various laser beams. Furthermore, we also render point clouds for foggy and rainy weather conditions. To describe a realistic shipping environment, a dynamic tail wave is modeled by iterating the wave elevation of each point in a time series. Finally, networks designed for small objects are migrated to ship applications by feeding them our dataset. The positive effect of simulated data is described in object detection experiments, and the negative impact of tail waves as noise is verified in single-object tracking experiments. The dataset is available at https://github.com/zqy411470859/ship_dataset.
Funding: National Natural Science Foundation of China (No. 41801379); Fundamental Research Funds for the Central Universities (No. 2019B08414); National Key R&D Program of China (No. 2016YFC0401801).
Abstract: Tunnel deformation monitoring is a crucial task for evaluating tunnel stability during the metro operation period. Terrestrial Laser Scanning (TLS), an innovative technique, can collect high-density and high-accuracy point cloud data in a few minutes, which provides promising applications in tunnel deformation monitoring. Here, an efficient method for extracting tunnel cross-sections and performing convergence analysis using dense TLS point cloud data is proposed. First, the tunnel orientation is determined using principal component analysis (PCA) in the Euclidean plane. Two control points are introduced to detect and remove unsuitable points by using point cloud division, and the ground points are then removed by defining an elevation band of 0.5 m. Next, a z-score method is introduced to detect and remove outliers. Because the standard shape of a tunnel cross-section is round, circle fitting is implemented using the least-squares method. Afterward, the convergence analysis is performed at angles of 0°, 30° and 150°. The feasibility of the proposed approach is tested on a TLS point cloud of a Nanjing subway tunnel acquired using a FARO X330 laser scanner. The results indicate that the proposed methodology achieves an overall accuracy of 1.34 mm, which is in agreement with the measurements acquired by a total station instrument. The proposed methodology provides new insights and references for the application of TLS in tunnel deformation monitoring and can also be extended to other engineering applications.
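The z-score outlier rejection and least-squares circle fitting steps can be sketched as below. This is an illustrative Kasa-style algebraic fit under assumed details, not the authors' implementation.

```python
import numpy as np

def fit_circle(xy):
    """Kasa fit: solve x^2 + y^2 = 2*cx*x + 2*cy*y + d in least squares."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.c_[2 * x, 2 * y, np.ones(len(x))]
    (cx, cy, d), *_ = np.linalg.lstsq(A, x ** 2 + y ** 2, rcond=None)
    r = np.sqrt(d + cx ** 2 + cy ** 2)
    return cx, cy, r

def zscore_filter(xy, cx, cy, r, thresh=2.5):
    """Drop points whose radial residual has |z-score| above the threshold."""
    res = np.hypot(xy[:, 0] - cx, xy[:, 1] - cy) - r
    z = (res - res.mean()) / res.std()
    return xy[np.abs(z) < thresh]

# A clean synthetic cross-section: radius 2.75 m centred at (1.0, 0.5)
angles = np.linspace(0, 2 * np.pi, 200, endpoint=False)
ring = np.c_[1.0 + 2.75 * np.cos(angles), 0.5 + 2.75 * np.sin(angles)]
cx, cy, r = fit_circle(ring)
```

In practice one would iterate: fit, filter with the z-score, and refit on the inliers.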
Funding: Key Laboratory of Degraded and Unused Land Consolidation Engineering, Ministry of Natural Resources of China, Grant/Award Number: SXDJ2024-22; Technology Innovation Centre for Integrated Applications in Remote Sensing and Navigation, Ministry of Natural Resources of China, Grant/Award Number: TICIARSN-2023-06; National Natural Science Foundation of China, Grant/Award Numbers: 42171446, 62302246; Zhejiang Provincial Natural Science Foundation of China, Grant/Award Number: LQ23F010008; Science and Technology Program of Tianjin, China, Grant/Award Number: 23ZGSSSS00010.
Abstract: Semantic segmentation of 3D point clouds in the railway environment holds significant economic value, but its development is severely hindered by the lack of suitable and specific datasets. Additionally, models trained on existing urban road point cloud datasets generalise poorly to railway data due to a large domain gap caused by non-overlapping special/rare categories, for example, rail track, track bed, etc. To harness the potential of supervised learning methods in the domain of 3D railway semantic segmentation, we introduce RailPC, a new point cloud benchmark. RailPC provides a large-scale dataset with rich annotations for semantic segmentation in the railway environment. Notably, RailPC contains twice the number of annotated points compared to the largest available mobile laser scanning (MLS) point cloud dataset and is the first railway-specific 3D dataset for semantic segmentation. It covers nearly 25 km of railway in total across two different scenes (urban and mountain), with 3 billion points finely labelled into the 16 most typical railway-related classes, and the data acquisition was completed in China by MLS systems. Through extensive experimentation, we evaluate the performance of advanced scene understanding methods on the annotated dataset and present a synthetic analysis of the semantic segmentation results. Based on our findings, we establish some critical challenges towards railway-scale point cloud semantic segmentation. The dataset is available at https://github.com/NNU-GISA/GISA-RailPC, and we will continuously update it based on community feedback.
Funding: Supported by the NTNU Digital project [grant number 81771593].
Abstract: Building outline extraction from segmented point clouds is a critical step of building footprint generation. Existing methods for this task are often based on the convex hull and the α-shape algorithm; there are also methods using grids and Delaunay triangulation. The common challenge of these methods is the determination of proper parameters. While deep learning-based methods have shown promise in reducing the impact of and dependence on parameter selection, their reliance on datasets with ground truth information limits their generalization. In this study, a novel unsupervised approach, called PH-shape, is proposed to address this challenge. The methods of persistent homology (PH) and the Fourier descriptor are introduced into the task of building outline extraction. PH, from the theory of topological data analysis, supports the automatic and adaptive determination of a proper buffer radius, thus enabling the parameter-adaptive extraction of building outlines through buffering and "inverse" buffering. Quantitative and qualitative experiments on two datasets with different point densities demonstrate the effectiveness of the proposed approach for various building types, interior boundaries, and density variation within the point cloud of a single building. The PH-supported parameter adaptivity helps the proposed approach overcome the challenge of parameter determination and data variations and achieve reliable extraction of building outlines.
Funding: Supported by the Innovation and Entrepreneurship Training Program Topic for College Students of North China University of Technology in 2023.
Abstract: To enhance modeling efficiency and accuracy, we utilized 3D laser point cloud data for indoor space modeling. Point cloud data were obtained with a 3D laser scanner and optimized with Autodesk ReCap and Revit software to extract geometric information about the indoor environment. Furthermore, we proposed a method for constructing indoor elements based on parametric components. The research outcomes of this paper offer new methods and tools for indoor space modeling and design. The approach of indoor space modeling based on 3D laser point cloud data and parametric component construction can enhance modeling efficiency and accuracy, providing architects, interior designers, and decorators with a better working platform and design reference.
Funding: Project (51274250) supported by the National Natural Science Foundation of China; Project (2012BAK09B02-05) supported by the National Key Technology R&D Program during the 12th Five-Year Plan of China.
Abstract: An integrated processing system for the visualization of three-dimensional laser scanning information in goafs was developed. It provides multiple functions, such as laser scanning information management for goafs, point cloud de-noising and optimization, construction, display and manipulation of three-dimensional models, model editing, profile generation, calculation of goaf volume and roof area, Boolean operations among models, and interaction with third-party software. The system has a concise interface and plentiful data input/output interfaces, featuring high integration and simple, convenient operation. In practice, the system has proved well-adapted, reliable, and stable.
Funding: Funded by the U.S. National Institute for Occupational Safety and Health (NIOSH) under Contract No. 75D30119C06044.
Abstract: In the last two decades, significant research has been conducted in the field of automated extraction of rock mass discontinuity characteristics from three-dimensional (3D) models. This has produced several methodologies for acquiring discontinuity measurements from 3D models, such as point clouds generated using laser scanning or photogrammetry. However, even with the numerous automated and semi-automated methods presented in the literature, there is not one single method that can automatically characterize discontinuities accurately in a minimum of time. In this paper, we critically review the existing methods proposed in the literature for the extraction of discontinuity characteristics such as joint sets and orientations, persistence, joint spacing, roughness, and block size using point clouds, digital elevation maps, or meshes. As a result of this review, we identify the strengths and drawbacks of each method used for extracting those characteristics. We found that approaches based on voxels and region growing are superior in extracting joint planes from 3D point clouds. Normal tensor voting with a trace growth algorithm is a robust method for measuring joint trace length from 3D meshes. Spacing is estimated by calculating the perpendicular distance between joint planes. Several independent roughness indices exist to quantify roughness from 3D surface models, but there is a need to incorporate these indices into automated methodologies. Finally, there is a lack of efficient algorithms for the direct computation of block size from 3D rock mass surface models.
Funding: This project is supported by the Provincial Technology Cooperation Program of Yunnan, China (No. 2003EAAAA00D043).
Abstract: As the point cloud of a whole vehicle body has the traits of large geometric dimension, huge data volume, and rigorous reverse-engineering precision requirements, a pretreatment algorithm for automobile body point clouds is put forward. The basic idea of the registration algorithm based on skeleton points is to construct the skeleton points of the whole vehicle model and the mark points of each separate point cloud, to search the mapping relationship between skeleton points and mark points using a congruent-triangle method, and to match the whole vehicle point cloud using the improved iterative closest point (ICP) algorithm. The data reduction algorithm, based on the average square root of distance, condenses the data in three steps: computing each dataset's average square root of distance within a sampling cube grid, sorting the results of the first step, and choosing a sampling percentage. The accuracy of the two algorithms is demonstrated by a registration and reduction example on the whole-vehicle point cloud of a light truck.
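The grid-based reduction idea can be illustrated with a simplified sketch. For brevity, the paper's average-square-root-of-distance criterion is replaced here by a plain centroid per occupied cube; the cell size is an arbitrary placeholder.

```python
from collections import defaultdict

def voxel_reduce(points, cell):
    """Condense a point list to one centroid per occupied cubic grid cell."""
    cells = defaultdict(list)
    for p in points:
        key = tuple(int(c // cell) for c in p)   # integer cube index
        cells[key].append(p)
    # One representative (component-wise mean) per cell
    return [tuple(sum(c) / len(g) for c in zip(*g)) for g in cells.values()]

pts = [(0.1, 0.1, 0.1), (0.2, 0.2, 0.2), (5.0, 5.0, 5.0)]
reduced = voxel_reduce(pts, cell=1.0)
```

The two nearby points collapse into a single representative while the distant point survives, so the reduced cloud has two points.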
Funding: Supported by the National Natural Science Foundation Project (No. 42130105) and the Key Laboratory of Spatial-temporal Big Data Analysis and Application of Natural Resources in Megacities, MNR (No. KFKT-2022-01).
Abstract: With the rapid development of reality capture methods such as laser scanning and oblique photogrammetry, point cloud data have become the third most important data source, after vector maps and imagery. Point cloud data also play an increasingly important role in scientific research and engineering in the fields of Earth science, spatial cognition, and smart cities. However, how to acquire high-quality three-dimensional (3D) geospatial information from point clouds has become a scientific frontier for which there is urgent demand in the fields of surveying and mapping, as well as geoscience applications. To address these challenges, point cloud intelligence came into being. This paper summarizes the state of the art of point cloud intelligence with regard to acquisition equipment, intelligent processing, scientific research, and engineering applications. For this purpose, we refer to a recent project on the hybrid georeferencing of images and LiDAR data for high-quality point cloud collection, as well as a current benchmark for the semantic segmentation of high-resolution 3D point clouds. These projects were conducted at the Institute for Photogrammetry, University of Stuttgart, which was initially headed by the late Prof. Ackermann. Finally, the development prospects of point cloud intelligence are summarized.
Funding: Supported by the Jiangsu Transportation Science and Technology Project under Grant 2020Y191(1) and the Postgraduate Research & Practice Innovation Program of Jiangsu Province under Grant KYCX23_0294.
Abstract: The increasing development of accurate and efficient road three-dimensional (3D) modeling presents great opportunities to improve the data exchange and integration of building information modeling (BIM) models. 3D modeling of road scenes is crucial for reference in asset management, construction, and maintenance. Light detection and ranging (LiDAR) technology is increasingly employed to generate high-quality point clouds for road inventory. In this paper, we specifically investigate the use of LiDAR data for road 3D modeling. The purpose of this review is to provide references to the existing work on road 3D modeling based on LiDAR point clouds, critically discuss them, and outline challenges for further study. In addition, we introduce modeling standards for roads and discuss the components, types, and distinctions of various LiDAR measurement systems. We then review state-of-the-art methods and provide a detailed examination of road segmentation and feature extraction. Furthermore, we systematically introduce point cloud-based 3D modeling methods, namely parametric modeling and surface reconstruction. Parameters and rules are used to define model components based on geometric and non-geometric information, whereas surface modeling is conducted through individual faces within the geometry. Finally, we discuss and summarize future research directions in this field. This review can assist researchers in enhancing existing approaches and developing new techniques for road modeling based on LiDAR point clouds.
Funding: The work was supported by the International Foundation for Science (Grant No. I-1-D-60661).
Abstract: Recent applications of digital photogrammetry in forestry have highlighted its utility as a viable mensuration technique. However, in tropical regions little research has been done on the accuracy of this approach for stem volume calculation. In this study, the performance of Structure from Motion photogrammetry for estimating individual tree stem volume relative to traditional approaches was evaluated. We selected 30 trees from five savanna species growing at the periphery of the W National Park in northern Benin and measured their circumferences at different heights using a traditional tape and clinometer. Stem volumes of the sample trees were estimated from the measured circumferences using nine volumetric formulae for solids of revolution, including the cylinder, cone, paraboloid, neiloid, and their respective frustums. Each tree was photographed and its stem volume determined using a taper function derived from three-dimensional stem models. This reference volume was compared with the results of the formulaic estimations. Tree stem profiles were further decomposed into different portions, approximately corresponding to the stump, butt logs, and logs, and the suitability of each solid of revolution was assessed for simulating the resulting shapes. Stem volumes calculated using the frustum-of-paraboloid and frustum-of-neiloid formulae were the closest to the reference volumes, with a bias and root mean square error of 8.0% and 24.4%, respectively. Stems closely resembled frustums of a paraboloid and a neiloid. Individual stem portions assumed different solids as follows: frustums of a paraboloid and a neiloid were more prevalent from the stump to breast height, while a paraboloid closely matched stem shapes beyond this point. Therefore, a more accurate stem volume estimate was attained when stems were considered as a composite of at least three geometric solids.
Funding: Funded in part by the Key Project of Natural Science Research for Universities of Anhui Province of China (No. 2022AH051720), in part by the Science and Technology Development Fund, Macao SAR (Grant Nos. 0093/2022/A2, 0076/2022/A2 and 0008/2022/AGJ), and in part by the China University Industry-University-Research Collaborative Innovation Fund (No. 2021FNA04017).
Abstract: This paper focuses on the effective utilization of data augmentation techniques for 3D lidar point clouds to enhance the performance of neural network models. These point clouds, which represent spatial information through a collection of 3D coordinates, have found wide-ranging applications. Data augmentation has emerged as a potent solution to the challenges posed by limited labeled data and the need to enhance model generalization capabilities. Much of the existing research is devoted to crafting novel data augmentation methods specifically for 3D lidar point clouds. However, there has been a lack of focus on making the most of the numerous existing augmentation techniques. Addressing this deficiency, this research investigates the possibility of combining two fundamental data augmentation strategies. The paper introduces PolarMix and Mix3D, two commonly employed augmentation techniques, and presents a new approach named RandomFusion. Instead of using a fixed or predetermined combination of augmentation methods, RandomFusion randomly chooses one method from a pool of options for each instance or sample. This data augmentation technique randomly augments each point in the point cloud with either PolarMix or Mix3D; the crux of the strategy is the random choice between the two for each point within the point cloud dataset. The experimental results validate the efficacy of the RandomFusion strategy in enhancing the performance of neural network models for 3D lidar point cloud semantic segmentation tasks, without compromising computational efficiency. By examining the potential of merging different augmentation techniques, the research contributes to a more comprehensive understanding of how to utilize existing augmentation methods for 3D lidar point clouds. The RandomFusion data augmentation technique offers a simple yet effective way to leverage the diversity of augmentation techniques and boost the robustness of models. The insights gained from this research can pave the way for future work aimed at developing more advanced and efficient data augmentation strategies for 3D lidar point cloud analysis.
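The per-point random choice at the heart of RandomFusion can be sketched as below. The two augmentation functions are trivial hypothetical stand-ins, not the real PolarMix and Mix3D implementations.

```python
import random

def polarmix_stub(p):
    # Hypothetical stand-in for PolarMix: rotate the point 90 degrees about z
    x, y, z = p
    return (-y, x, z)

def mix3d_stub(p):
    # Hypothetical stand-in for Mix3D: small positional jitter
    x, y, z = p
    return (x + 0.01, y, z)

def random_fusion(points, rng):
    """Augment each point with one randomly chosen technique."""
    return [rng.choice((polarmix_stub, mix3d_stub))(p) for p in points]

rng = random.Random(42)
cloud = [(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)]
out = random_fusion(cloud, rng)
```

Because the choice is made independently per point, repeated epochs see different mixtures of the two augmentations, which is the source of the added diversity.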
Funding: Supported by the Future Challenge Program through the Agency for Defense Development, funded by the Defense Acquisition Program Administration (No. UC200015RD).
Abstract: Swarm robot systems are an important application of autonomous unmanned surface vehicles on water surfaces. For monitoring natural environments and conducting security activities within a certain range using surface vehicles, a swarm robot system is more efficient than the operation of a single vehicle, as the former can reduce cost and save time. It is necessary to detect adjacent surface obstacles robustly to operate a cluster of unmanned surface vehicles. For this purpose, a LiDAR (light detection and ranging) sensor is used, as it can simultaneously obtain 3D information in all directions relatively robustly and accurately, irrespective of the surrounding environmental conditions. Although a GPS (global positioning system) error range exists, obtaining measurements of the surface-vessel position can still ensure stability during platoon maneuvering. In this study, a three-layer convolutional neural network is applied to classify types of surface vehicles. The aim of this approach is to redefine the sparse 3D point cloud data as 2D image data with a connotative meaning and subsequently utilize the transformed data for object classification. Hence, we propose a descriptor that converts 3D point cloud data into 2D image data. To use this descriptor effectively, it is necessary to perform a clustering operation that separates the point clouds for each object; we developed voxel-based clustering for this purpose. Using the descriptor, 3D point cloud data can then be converted into a 2D feature image, and the converted 2D image is provided as an input to the network. We verify the validity of the proposed 3D point cloud feature descriptor using experimental data in a simulator and further explore the feasibility of real-time object classification within this framework.
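The voxel-based clustering mentioned above could plausibly be implemented as a 26-connected flood fill over occupied voxels; the details below are assumptions for illustration, not the authors' code.

```python
from collections import defaultdict, deque
from itertools import product

def voxel_cluster(points, cell):
    """Group point indices whose occupied voxels are 26-connected."""
    grid = defaultdict(list)
    for i, p in enumerate(points):
        grid[tuple(int(c // cell) for c in p)].append(i)
    seen, clusters = set(), []
    for start in grid:
        if start in seen:
            continue
        seen.add(start)
        queue, members = deque([start]), []
        while queue:
            v = queue.popleft()
            members.extend(grid[v])
            # Visit the 26 neighbouring voxels (and the centre, harmlessly)
            for off in product((-1, 0, 1), repeat=3):
                nb = tuple(a + b for a, b in zip(v, off))
                if nb in grid and nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        clusters.append(sorted(members))
    return clusters

pts = [(0.0, 0.0, 0.0), (0.4, 0.1, 0.0), (10.0, 10.0, 10.0)]
clusters = voxel_cluster(pts, cell=0.5)
```

Each resulting cluster would then be passed through the 3D-to-2D descriptor to produce one feature image per object.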