Airborne LiDAR(Light Detection and Ranging)is an evolving high-tech active remote sensing technology that has the capability to acquire large-area topographic data and can quickly generate DEM(Digital Elevation Model)...Airborne LiDAR(Light Detection and Ranging)is an evolving high-tech active remote sensing technology that has the capability to acquire large-area topographic data and can quickly generate DEM(Digital Elevation Model)products.Combined with image data,this technology can further enrich and extract spatial geographic information.However,practically,due to the limited operating range of airborne LiDAR and the large area of task,it would be necessary to perform registration and stitching process on point clouds of adjacent flight strips.By eliminating grow errors,the systematic errors in the data need to be effectively reduced.Thus,this paper conducts research on point cloud registration methods in urban building areas,aiming to improve the accuracy and processing efficiency of airborne LiDAR data.Meanwhile,an improved post-ICP(Iterative Closest Point)point cloud registration method was proposed in this study to determine the accurate registration and efficient stitching of point clouds,which capable to provide a potential technical support for applicants in related field.展开更多
Large-scale point cloud datasets form the basis for training various deep learning networks and achieving high-quality network processing tasks.Due to the diversity and robustness constraints of the data,data augmenta...Large-scale point cloud datasets form the basis for training various deep learning networks and achieving high-quality network processing tasks.Due to the diversity and robustness constraints of the data,data augmentation(DA)methods are utilised to expand dataset diversity and scale.However,due to the complex and distinct characteristics of LiDAR point cloud data from different platforms(such as missile-borne and vehicular LiDAR data),directly applying traditional 2D visual domain DA methods to 3D data can lead to networks trained using this approach not robustly achieving the corresponding tasks.To address this issue,the present study explores DA for missile-borne LiDAR point cloud using a Monte Carlo(MC)simulation method that closely resembles practical application.Firstly,the model of multi-sensor imaging system is established,taking into account the joint errors arising from the platform itself and the relative motion during the imaging process.A distortion simulation method based on MC simulation for augmenting missile-borne LiDAR point cloud data is proposed,underpinned by an analysis of combined errors between different modal sensors,achieving high-quality augmentation of point cloud data.The effectiveness of the proposed method in addressing imaging system errors and distortion simulation is validated using the imaging scene dataset constructed in this paper.Comparative experiments between the proposed point cloud DA algorithm and the current state-of-the-art algorithms in point cloud detection and single object tracking tasks demonstrate that the proposed method can improve the network performance obtained from unaugmented datasets by over 17.3%and 17.9%,surpassing SOTA performance of current point cloud DA algorithms.展开更多
Existing reverse-engineering methods struggle to directly generate editable,parametric CAD models from scanned data.To address this limitation,this paper proposes a reverse-modeling approach that reconstructs parametr...Existing reverse-engineering methods struggle to directly generate editable,parametric CAD models from scanned data.To address this limitation,this paper proposes a reverse-modeling approach that reconstructs parametric CAD models from multi-view RGB-D point clouds.Multi-frame point-cloud registration and fusion are first employed to obtain a complete 3-D point cloud of the target object.A region-growing algorithm that jointly exploits color and geometric information segments the cloud,while RANSAC robustly detects and fits basic geometric primitives.These primitives serve as nodes in a graph whose edge features are inferred by a graph neural network to capture spatial constraints.From the detected primitives and their constraints,a high-accuracy,fully editable parametric CAD model is finally exported.Experiments show an average parameter error of 0.3 mm for key dimensions and an overall geometric reconstruction accuracy of 0.35 mm.The work offers an effective technical route toward automated,intelligent 3-D reverse modeling.展开更多
Mapping and analyzing rock mass discontinuities based on 3D(three-dimensional)point cloud(3DPC)is one of the most important work in the engineering geomechanical survey.To efficiently analyze the distribution of disco...Mapping and analyzing rock mass discontinuities based on 3D(three-dimensional)point cloud(3DPC)is one of the most important work in the engineering geomechanical survey.To efficiently analyze the distribution of discontinuities,a self-developed code termed as the cloud-group-cluster(CGC)method based on MATLAB for mapping and detecting discontinuities based on the 3DPC was introduced.The identification and optimization of discontinuity groups were performed using three key parameters,i.e.K,θ,and f.A sensitivity analysis approach for identifying the optimal key parameters was introduced.The results show that the comprehensive analysis of the main discontinuity groups,mean orientations,and densities could be achieved automatically.The accuracy of the CGC method was validated using tetrahedral and hexahedral models.The 3D point cloud data were divided into three levels(point cloud,group,and cluster)for analysis,and this three-level distribution recognition was applied to natural rock surfaces.The densities and spacing information of the principal discontinuities were automatically detected using the CGC method.Five engineering case studies were conducted to validate the CGC method,showing the applicability in detecting rock discontinuities based on 3DPC model.展开更多
Perceptual quality assessment for point cloud is critical for immersive metaverse experience and is a challenging task.Firstly,because point cloud is formed by unstructured 3D points that makes the topology more compl...Perceptual quality assessment for point cloud is critical for immersive metaverse experience and is a challenging task.Firstly,because point cloud is formed by unstructured 3D points that makes the topology more complex.Secondly,the quality impairment generally involves both geometric attributes and color properties,where the measurement of the geometric distortion becomes more complex.We propose a perceptual point cloud quality assessment model that follows the perceptual features of Human Visual System(HVS)and the intrinsic characteristics of the point cloud.The point cloud is first pre-processed to extract the geometric skeleton keypoints with graph filtering-based re-sampling,and local neighboring regions around the geometric skeleton keypoints are constructed by K-Nearest Neighbors(KNN)clustering.For geometric distortion,the Point Feature Histogram(PFH)is extracted as the feature descriptor,and the Earth Mover’s Distance(EMD)between the PFHs of the corresponding local neighboring regions in the reference and the distorted point clouds is calculated as the geometric quality measurement.For color distortion,the statistical moments between the corresponding local neighboring regions are computed as the color quality measurement.Finally,the global perceptual quality assessment model is obtained as the linear weighting aggregation of the geometric and color quality measurement.The experimental results on extensive datasets show that the proposed method achieves the leading performance as compared to the state-of-the-art methods with less computing time.Meanwhile,the experimental results also demonstrate the robustness of the proposed method across various distortion types.The source codes are available at https://github.com/llsurreal919/Point Cloud Quality Assessment.展开更多
The goal of point cloud completion is to reconstruct raw scanned point clouds acquired from incomplete observations due to occlusion and restricted viewpoints.Numerous methods use a partial-to-complete framework,direc...The goal of point cloud completion is to reconstruct raw scanned point clouds acquired from incomplete observations due to occlusion and restricted viewpoints.Numerous methods use a partial-to-complete framework,directly predicting missing components via global characteristics extracted from incomplete inputs.However,this makes detail re-covery challenging,as global characteristics fail to provide complete missing component specifics.A new point cloud completion method named Point-PC is proposed.A memory network and a causal inference model are separately designed to introduce shape priors and select absent shape information as supplementary geometric factors for aiding completion.Concretely,a memory mechanism is proposed to store complete shape features and their associated shapes in a key-value format.The authors design a pre-training strategy that uses contrastive learning to map incomplete shape features into the complete shape feature domain,enabling retrieval of analogous shapes from incomplete inputs.In addition,the authors employ backdoor adjustment to eliminate confounders,which are shape prior components sharing identical semantic structures with incomplete inputs.Experiments conducted on three datasets show that our method achieves superior performance compared to state-of-the-art approaches.The code for Point-PC can be accessed by https://github.com/bizbard/Point-PC.git.展开更多
Understanding the conformational characteristics of polymers is key to elucidating their physical properties.Cyclic polymers,defined by their closed-loop structures,inherently differ from linear polymers possessing di...Understanding the conformational characteristics of polymers is key to elucidating their physical properties.Cyclic polymers,defined by their closed-loop structures,inherently differ from linear polymers possessing distinct chain ends.Despite these structural differences,both types of polymers exhibit locally random-walk-like conformations,making it challenging to detect subtle spatial variations using conventional methods.In this study,we address this challenge by integrating molecular dynamics simulations with point cloud neural networks to analyze the spatial conformations of cyclic and linear polymers.By utilizing the Dynamic Graph CNN(DGCNN)model,we classify polymer conformations based on the 3D coordinates of monomers,capturing local and global topological differences without considering chain connectivity sequentiality.Our findings reveal that the optimal local structural feature unit size scales linearly with molecular weight,aligning with theoretical predictions.Additionally,interpretability techniques such as Grad-CAM and SHAP identify significant conformational differences:cyclic polymers tend to form prolate ellipsoid shapes with pronounced elongation along the major axis,while linear polymers show elongated ends with more spherical centers.These findings reveal subtle yet critical differences in local conformations between cyclic and linear polymers that were previously difficult to discern,providing deeper insights into polymer structure-property relationships and offering guidance for future polymer science advancements.展开更多
Rock discontinuities control rock mechanical behaviors and significantly influence the stability of rock masses.However,existing discontinuity mapping algorithms are susceptible to noise,and the calculation results ca...Rock discontinuities control rock mechanical behaviors and significantly influence the stability of rock masses.However,existing discontinuity mapping algorithms are susceptible to noise,and the calculation results cannot be fed back to users timely.To address this issue,we proposed a human-machine interaction(HMI)method for discontinuity mapping.Users can help the algorithm identify the noise and make real-time result judgments and parameter adjustments.For this,a regular cube was selected to illustrate the workflows:(1)point cloud was acquired using remote sensing;(2)the HMI method was employed to select reference points and angle thresholds to detect group discontinuity;(3)individual discontinuities were extracted from the group discontinuity using a density-based cluster algorithm;and(4)the orientation of each discontinuity was measured based on a plane fitting algorithm.The method was applied to a well-studied highway road cut and a complex natural slope.The consistency of the computational results with field measurements demonstrates its good accuracy,and the average error in the dip direction and dip angle for both cases was less than 3.Finally,the computational time of the proposed method was compared with two other popular algorithms,and the reduction in computational time by tens of times proves its high computational efficiency.This method provides geologists and geological engineers with a new idea to map rapidly and accurately rock structures under large amounts of noises or unclear features.展开更多
This study presents a framework for the semi-automatic detection of rock discontinuities using a threedimensional(3D)point cloud(PC).The process begins by selecting an appropriate neighborhood size,a critical step for...This study presents a framework for the semi-automatic detection of rock discontinuities using a threedimensional(3D)point cloud(PC).The process begins by selecting an appropriate neighborhood size,a critical step for feature extraction from the PC.The effects of different neighborhood sizes(k=5,10,20,50,and 100)have been evaluated to assess their impact on classification performance.After that,17 geometric and spatial features were extracted from the PC.Next,ensemble methods,AdaBoost.M2,random forest,and decision tree,have been compared with Artificial Neural Networks to classify the main discontinuity sets.The McNemar test indicates that the classifiers are statistically significant.The random forest classifier consistently achieves the highest performance with an accuracy exceeding 95%when using a neighborhood size of k=100,while recall,F-score,and Cohen's Kappa also demonstrate high success.SHapley Additive exPlanations(SHAP),an Explainable AI technique,has been used to evaluate feature importance and improve the explainability of black-box machine learning models in the context of rock discontinuity classification.The analysis reveals that features such as normal vectors,verticality,and Z-values have the greatest influence on identifying main discontinuity sets,while linearity,planarity,and eigenvalues contribute less,making the model more transparent and easier to understand.After classification,individual discontinuity sets were detected using a revised DBSCAN from the main discontinuity sets.Finally,the orientation parameters of the plane fitted to each discontinuity were derived from the plane parameters obtained using the Random Sample Consensus(RANSAC).Two real-world datasets(obtained from SfM and LiDAR)and one synthetic dataset were used to validate the proposed method,which successfully identified rock discontinuities and their orientation parameters(dip angle/direction).展开更多
In the task of inspecting underwater suspended pipelines,multi-beam sonar(MBS)can provide two-dimensional water column images(WCIs).However,systematic interferences(e.g.,sidelobe effects)may induce misdetection in WCI...In the task of inspecting underwater suspended pipelines,multi-beam sonar(MBS)can provide two-dimensional water column images(WCIs).However,systematic interferences(e.g.,sidelobe effects)may induce misdetection in WCIs.To address this issue and improve the accuracy of detection,we developed a density-based clustering method for three-dimensional water column point clouds.During the processing of WCIs,sidelobe effects are mitigated using a bilateral filter and brightness transformation.The cross-sectional point cloud of the pipeline is then extracted by using the Canny operator.In the detection phase,the target is identified by using density-based spatial clustering of applications with noise(DBSCAN).However,the selection of appropriate DBSCAN parameters is obscured by the uneven distribution of the water column point cloud.To overcome this,we propose an improved DBSCAN based on a parameter interval estimation method(PIE-DBSCAN).First,kernel density estimation(KDE)is used to determine the candidate interval of parameters,after which the exact cluster number is determined via density peak clustering(DPC).Finally,the optimal parameters are selected by comparing the mean silhouette coefficients.To validate the performance of PIE-DBSCAN,we collected water column point clouds from an anechoic tank and the South China Sea.PIE-DBSCAN successfully detected both the target points of the suspended pipeline and non-target points on the seafloor surface.Compared to the K-Means and Mean-Shift algorithms,PIE-DBSCAN demonstrates superior clustering performance and shows feasibility in practical applications.展开更多
The spatial distribution of discontinuities and the size of rock blocks are the key indicators for rock mass quality evaluation and rockfall risk assessment.Traditional manual measurement is often dangerous or unreach...The spatial distribution of discontinuities and the size of rock blocks are the key indicators for rock mass quality evaluation and rockfall risk assessment.Traditional manual measurement is often dangerous or unreachable at some high-steep rock slopes.In contrast,unmanned aerial vehicle(UAV)photogrammetry is not limited by terrain conditions,and can efficiently collect high-precision three-dimensional(3D)point clouds of rock masses through all-round and multiangle photography for rock mass characterization.In this paper,a new method based on a 3D point cloud is proposed for discontinuity identification and refined rock block modeling.The method is based on four steps:(1)Establish a point cloud spatial topology,and calculate the point cloud normal vector and average point spacing based on several machine learning algorithms;(2)Extract discontinuities using the density-based spatial clustering of applications with noise(DBSCAN)algorithm and fit the discontinuity plane by combining principal component analysis(PCA)with the natural breaks(NB)method;(3)Propose a method of inserting points in the line segment to generate an embedded discontinuity point cloud;and(4)Adopt a Poisson reconstruction method for refined rock block modeling.The proposed method was applied to an outcrop of an ultrahigh steep rock slope and compared with the results of previous studies and manual surveys.The results show that the method can eliminate the influence of discontinuity undulations on the orientation measurement and describe the local concave-convex characteristics on the modeling of rock blocks.The calculation results are accurate and reliable,which can meet the practical requirements of engineering.展开更多
Few-shot point cloud 3D object detection(FS3D)aims to identify and locate objects of novel classes within point clouds using knowledge acquired from annotated base classes and a minimal number of samples from the nove...Few-shot point cloud 3D object detection(FS3D)aims to identify and locate objects of novel classes within point clouds using knowledge acquired from annotated base classes and a minimal number of samples from the novel classes.Due to imbalanced training data,existing FS3D methods based on fully supervised learning can lead to overfitting toward base classes,which impairs the network’s ability to generalize knowledge learned from base classes to novel classes and also prevents the network from extracting distinctive foreground and background representations for novel class objects.To address these issues,this thesis proposes a category-agnostic contrastive learning approach,enhancing the generalization and identification abilities for almost unseen categories through the construction of pseudo-labels and positive-negative sample pairs unrelated to specific classes.Firstly,this thesis designs a proposal-wise context contrastive module(CCM).By reducing the distance between foreground point features and increasing the distance between foreground and background point features within a region proposal,CCM aids the network in extracting more discriminative foreground and background feature representations without reliance on categorical annotations.Secondly,this thesis utilizes a geometric contrastive module(GCM),which enhances the network’s geometric perception capability by employing contrastive learning on the foreground point features associated with various basic geometric components,such as edges,corners,and surfaces,thereby enabling these geometric components to exhibit more distinguishable representations.This thesis also combines category-aware contrastive learning with former modules to maintain categorical distinctiveness.Extensive experimental results on FS-SUNRGBD and FS-ScanNet datasets demonstrate the effectiveness of this method with average precision exceeding the baseline by up to 8%.展开更多
Recognizing discontinuities within rock masses is a critical aspect of rock engineering.The development of remote sensing technologies has significantly enhanced the quality and quantity of the point clouds collected ...Recognizing discontinuities within rock masses is a critical aspect of rock engineering.The development of remote sensing technologies has significantly enhanced the quality and quantity of the point clouds collected from rock outcrops.In response,we propose a workflow that balances accuracy and efficiency to extract discontinuities from massive point clouds.The proposed method employs voxel filtering to downsample point clouds,constructs a point cloud topology using K-d trees,utilizes principal component analysis to calculate the point cloud normals,and employs the pointwise clustering(PWC)algorithm to extract discontinuities from rock outcrop point clouds.This method provides information on the location and orientation(dip direction and dip angle)of the discontinuities,and the modified whale optimization algorithm(MWOA)is utilized to identify major discontinuity sets and their average orientations.Performance evaluations based on three real cases demonstrate that the proposed method significantly reduces computational time costs without sacrificing accuracy.In particular,the method yields more reasonable extraction results for discontinuities with certain undulations.The presented approach offers a novel tool for efficiently extracting discontinuities from large-scale point clouds.展开更多
Volume parameter is the basic content of a spatial body object morphology analysis.However,the challenge lies in the volume calculation of irregular objects.The point cloud slicing method proposed in this study effect...Volume parameter is the basic content of a spatial body object morphology analysis.However,the challenge lies in the volume calculation of irregular objects.The point cloud slicing method proposed in this study effectively works in calculating the volume of the point cloud of the spatial object obtained through three-dimensional laser scanning(3DLS).In this method,a uniformly spaced sequent slicing process is first conducted in a specific direction on the point cloud of the spatial object obtained through 3DLS.A series of discrete point cloud slices corresponding to the point cloud bodies are then obtained.Subsequently,the outline boundary polygon of the point cloud slicing is searched one by one in accordance with the slicing sequence and areas of the polygon.The point cloud slice is also calculated.Finally,the individual point cloud section volume is calculated through the slicing areas and the adjacent slicing gap.Thus,the total volume of the scanned spatial object can be calculated by summing up the individual volumes.According to the results and analysis of the calculated examples,the slice-based volume-calculating method for the point cloud of irregular objects obtained through 3DLS is correct,concise in process,reliable in results,efficient in calculation methods,and controllable on accuracy.This method comes as a good solution to the volume calculation of irregular objects.展开更多
Point cloud compression is critical to deploy 3D representation of the physical world such as 3D immersive telepresence,autonomous driving,and cultural heritage preservation.However,point cloud data are distributed ir...Point cloud compression is critical to deploy 3D representation of the physical world such as 3D immersive telepresence,autonomous driving,and cultural heritage preservation.However,point cloud data are distributed irregularly and discontinuously in spatial and temporal domains,where redundant unoccupied voxels and weak correlations in 3D space make achieving efficient compression a challenging problem.In this paper,we propose a spatio-temporal context-guided algorithm for lossless point cloud geometry compression.The proposed scheme starts with dividing the point cloud into sliced layers of unit thickness along the longest axis.Then,it introduces a prediction method where both intraframe and inter-frame point clouds are available,by determining correspondences between adjacent layers and estimating the shortest path using the travelling salesman algorithm.Finally,the few prediction residual is efficiently compressed with optimal context-guided and adaptive fastmode arithmetic coding techniques.Experiments prove that the proposed method can effectively achieve low bit rate lossless compression of point cloud geometric information,and is suitable for 3D point cloud compression applicable to various types of scenes.展开更多
In this paper,a novel compression framework based on 3D point cloud data is proposed for telepresence,which consists of two parts.One is implemented to remove the spatial redundancy,i.e.,a robust Bayesian framework is...In this paper,a novel compression framework based on 3D point cloud data is proposed for telepresence,which consists of two parts.One is implemented to remove the spatial redundancy,i.e.,a robust Bayesian framework is designed to track the human motion and the 3D point cloud data of the human body is acquired by using the tracking 2D box.The other part is applied to remove the temporal redundancy of the 3D point cloud data.The temporal redundancy between point clouds is removed by using the motion vector,i.e.,the most similar cluster in the previous frame is found for the cluster in the current frame by comparing the cluster feature and the cluster in the current frame is replaced by the motion vector for compressing the current frame.The hrst,the B-SHOT(binary signatures of histograms orientation)descriptor is applied to represent the point feature for matching the corresponding point between two frames.The second,the K-mean algorithm is used to generate the cluster because there are a lot of unsuccessfully matched points in the current frame.The matching operation is exploited to find the corresponding clusters between the point cloud data of two frames.Finally,the cluster information in the current frame is replaced by the motion vector for compressing the current frame and the unsuccessfully matched clusters in the curren t and the motion vectors are transmit ted into the rem ote end.In order to reduce calculation time of the B-SHOT descriptor,we introduce an octree structure into the B-SHOT descriptor.In particular,in order to improve the robustness of the matching operation,we design the cluster feature to estimate the similarity bet ween two clusters.Experimen tai results have shown the bet ter performance of the proposed method due to the lower calculation time and the higher compression ratio.The proposed met hod achieves the compression ratio of 8.42 and the delay time of 1228 ms compared with the compression ratio of 5.99 and the delay time of 2163 ms in the octree-based compression method under conditions of similar distortion rate.展开更多
Based on Bayesian theory and RANSAC,this paper applies Bayesian Sampling Consensus(BaySAC)method using convergence evaluation of hypothesis models in indoor point cloud processing.We implement a conditional sampling m...Based on Bayesian theory and RANSAC,this paper applies Bayesian Sampling Consensus(BaySAC)method using convergence evaluation of hypothesis models in indoor point cloud processing.We implement a conditional sampling method,BaySAC,to always select the minimum number of required data with the highest inlier probabilities.Because the primitive parameters calculated by the different inlier sets should be convergent,this paper presents a statistical testing algorithm for a candidate model parameter histogram to compute the prior probability of each data point.Moreover,the probability update is implemented using the simplified Bayes’formula.The performances of the BaySAC algorithm with the proposed strategies of the prior probability determination and the RANSAC framework are compared using real data-sets.The experimental results indicate that the more outliers contain the data points,the higher computational efficiency of our proposed algorithm gains compared with RANSAC.The results also indicate that the proposed statistical testing strategy can determine sound prior inlier probability free of the change of hypothesis models.展开更多
With the rapid development of reality capture methods,such as laser scanning and oblique photogrammetry,point cloud data have become the third most important data source,after vector maps and imagery.Point cloud data ...With the rapid development of reality capture methods,such as laser scanning and oblique photogrammetry,point cloud data have become the third most important data source,after vector maps and imagery.Point cloud data also play an increasingly important role in scientific research and engineering in the fields of Earth science,spatial cognition,and smart cities.However,how to acquire high-quality three-dimensional(3D)geospatial information from point clouds has become a scientific frontier,for which there is an urgent demand in the fields of surveying and mapping,as well as geoscience applications.To address the challenges mentioned above,point cloud intelligence came into being.This paper summarizes the state-of-the-art of point cloud intelligence,with regard to acquisition equipment,intelligent processing,scientific research,and engineering applications.For this purpose,we refer to a recent project on the hybrid georeferencing of images and LiDAR data for high-quality point cloud collection,as well as a current benchmark for the semantic segmentation of high-resolution 3D point clouds.These projects were conducted at the Institute for Photogrammetry,the University of Stuttgart,which was initially headed by the late Prof.Ackermann.Finally,the development prospects of point cloud intelligence are summarized.展开更多
To address the current issues of inaccurate segmentation and the limited applicability of segmentation methods for building facades in point clouds, we propose a facade segmentation algorithm based on optimal dual-sca...To address the current issues of inaccurate segmentation and the limited applicability of segmentation methods for building facades in point clouds, we propose a facade segmentation algorithm based on optimal dual-scale feature descriptors. First, we select the optimal dual-scale descriptors from a range of feature descriptors. Next, we segment the facade according to the threshold value of the chosen optimal dual-scale descriptors. Finally, we use RANSAC (Random Sample Consensus) to fit the segmented surface and optimize the fitting result. Experimental results show that, compared to commonly used facade segmentation algorithms, the proposed method yields more accurate segmentation results, providing a robust data foundation for subsequent 3D model reconstruction of buildings.展开更多
As 3D acquisition technology develops and 3D sensors become increasingly affordable,large quantities of 3D point cloud data are emerging.How to effectively learn and extract the geometric features from these point clo...As 3D acquisition technology develops and 3D sensors become increasingly affordable,large quantities of 3D point cloud data are emerging.How to effectively learn and extract the geometric features from these point clouds has become an urgent problem to be solved.The point cloud geometric information is hidden in disordered,unstructured points,making point cloud analysis a very challenging problem.To address this problem,we propose a novel network framework,called Tree Graph Network(TGNet),which can sample,group,and aggregate local geometric features.Specifically,we construct a Tree Graph by explicit rules,which consists of curves extending in all directions in point cloud feature space,and then aggregate the features of the graph through a cross-attention mechanism.In this way,we incorporate more point cloud geometric structure information into the representation of local geometric features,which makes our network perform better.Our model performs well on several basic point clouds processing tasks such as classification,segmentation,and normal estimation,demonstrating the effectiveness and superiority of our network.Furthermore,we provide ablation experiments and visualizations to better understand our network.展开更多
基金Guangxi Key Laboratory of Spatial Information and Geomatics(21-238-21-12)Guangxi Young and Middle-aged Teachers’Research Fundamental Ability Enhancement Project(2023KY1196).
文摘Airborne LiDAR(Light Detection and Ranging)is an evolving high-tech active remote sensing technology that has the capability to acquire large-area topographic data and can quickly generate DEM(Digital Elevation Model)products.Combined with image data,this technology can further enrich and extract spatial geographic information.However,practically,due to the limited operating range of airborne LiDAR and the large area of task,it would be necessary to perform registration and stitching process on point clouds of adjacent flight strips.By eliminating grow errors,the systematic errors in the data need to be effectively reduced.Thus,this paper conducts research on point cloud registration methods in urban building areas,aiming to improve the accuracy and processing efficiency of airborne LiDAR data.Meanwhile,an improved post-ICP(Iterative Closest Point)point cloud registration method was proposed in this study to determine the accurate registration and efficient stitching of point clouds,which capable to provide a potential technical support for applicants in related field.
基金Postgraduate Innovation Top notch Talent Training Project of Hunan Province,Grant/Award Number:CX20220045Scientific Research Project of National University of Defense Technology,Grant/Award Number:22-ZZCX-07+2 种基金New Era Education Quality Project of Anhui Province,Grant/Award Number:2023cxcysj194National Natural Science Foundation of China,Grant/Award Numbers:62201597,62205372,1210456foundation of Hefei Comprehensive National Science Center,Grant/Award Number:KY23C502。
文摘Large-scale point cloud datasets form the basis for training various deep learning networks and achieving high-quality network processing tasks.Due to the diversity and robustness constraints of the data,data augmentation(DA)methods are utilised to expand dataset diversity and scale.However,due to the complex and distinct characteristics of LiDAR point cloud data from different platforms(such as missile-borne and vehicular LiDAR data),directly applying traditional 2D visual domain DA methods to 3D data can lead to networks trained using this approach not robustly achieving the corresponding tasks.To address this issue,the present study explores DA for missile-borne LiDAR point cloud using a Monte Carlo(MC)simulation method that closely resembles practical application.Firstly,the model of multi-sensor imaging system is established,taking into account the joint errors arising from the platform itself and the relative motion during the imaging process.A distortion simulation method based on MC simulation for augmenting missile-borne LiDAR point cloud data is proposed,underpinned by an analysis of combined errors between different modal sensors,achieving high-quality augmentation of point cloud data.The effectiveness of the proposed method in addressing imaging system errors and distortion simulation is validated using the imaging scene dataset constructed in this paper.Comparative experiments between the proposed point cloud DA algorithm and the current state-of-the-art algorithms in point cloud detection and single object tracking tasks demonstrate that the proposed method can improve the network performance obtained from unaugmented datasets by over 17.3%and 17.9%,surpassing SOTA performance of current point cloud DA algorithms.
文摘Existing reverse-engineering methods struggle to directly generate editable,parametric CAD models from scanned data.To address this limitation,this paper proposes a reverse-modeling approach that reconstructs parametric CAD models from multi-view RGB-D point clouds.Multi-frame point-cloud registration and fusion are first employed to obtain a complete 3-D point cloud of the target object.A region-growing algorithm that jointly exploits color and geometric information segments the cloud,while RANSAC robustly detects and fits basic geometric primitives.These primitives serve as nodes in a graph whose edge features are inferred by a graph neural network to capture spatial constraints.From the detected primitives and their constraints,a high-accuracy,fully editable parametric CAD model is finally exported.Experiments show an average parameter error of 0.3 mm for key dimensions and an overall geometric reconstruction accuracy of 0.35 mm.The work offers an effective technical route toward automated,intelligent 3-D reverse modeling.
基金supported by the National Key Research and Development Program of China(Grant Nos.2023YFC2907400 and 2021YFC2900500)the National Natural Science Foundation of China(Grant No.52074020).
文摘Mapping and analyzing rock mass discontinuities based on 3D(three-dimensional)point cloud(3DPC)is one of the most important work in the engineering geomechanical survey.To efficiently analyze the distribution of discontinuities,a self-developed code termed as the cloud-group-cluster(CGC)method based on MATLAB for mapping and detecting discontinuities based on the 3DPC was introduced.The identification and optimization of discontinuity groups were performed using three key parameters,i.e.K,θ,and f.A sensitivity analysis approach for identifying the optimal key parameters was introduced.The results show that the comprehensive analysis of the main discontinuity groups,mean orientations,and densities could be achieved automatically.The accuracy of the CGC method was validated using tetrahedral and hexahedral models.The 3D point cloud data were divided into three levels(point cloud,group,and cluster)for analysis,and this three-level distribution recognition was applied to natural rock surfaces.The densities and spacing information of the principal discontinuities were automatically detected using the CGC method.Five engineering case studies were conducted to validate the CGC method,showing the applicability in detecting rock discontinuities based on 3DPC model.
基金supported in part by the National Natural Science Foundation of China under Grant(62171257,U22B2001,U19A2052,62020106011,62061015)in part by the Natural Science Foundation of Chongqing under Grant(2023NSCQMSX2930)+1 种基金in part by the Youth Innovation Group Support Program of ICE Discipline of CQUPT under Grant(SCIE-QN-2022-05)in part by the Graduate Scientifc Research and Innovation Project of Chongqing under Grant(CYS22469).
文摘Perceptual quality assessment for point cloud is critical for immersive metaverse experience and is a challenging task.Firstly,because point cloud is formed by unstructured 3D points that makes the topology more complex.Secondly,the quality impairment generally involves both geometric attributes and color properties,where the measurement of the geometric distortion becomes more complex.We propose a perceptual point cloud quality assessment model that follows the perceptual features of Human Visual System(HVS)and the intrinsic characteristics of the point cloud.The point cloud is first pre-processed to extract the geometric skeleton keypoints with graph filtering-based re-sampling,and local neighboring regions around the geometric skeleton keypoints are constructed by K-Nearest Neighbors(KNN)clustering.For geometric distortion,the Point Feature Histogram(PFH)is extracted as the feature descriptor,and the Earth Mover’s Distance(EMD)between the PFHs of the corresponding local neighboring regions in the reference and the distorted point clouds is calculated as the geometric quality measurement.For color distortion,the statistical moments between the corresponding local neighboring regions are computed as the color quality measurement.Finally,the global perceptual quality assessment model is obtained as the linear weighting aggregation of the geometric and color quality measurement.The experimental results on extensive datasets show that the proposed method achieves the leading performance as compared to the state-of-the-art methods with less computing time.Meanwhile,the experimental results also demonstrate the robustness of the proposed method across various distortion types.The source codes are available at https://github.com/llsurreal919/Point Cloud Quality Assessment.
基金National Key Research and Development Program of China,Grant/Award Number:2020YFB1711704。
文摘The goal of point cloud completion is to reconstruct raw scanned point clouds acquired from incomplete observations due to occlusion and restricted viewpoints.Numerous methods use a partial-to-complete framework,directly predicting missing components via global characteristics extracted from incomplete inputs.However,this makes detail re-covery challenging,as global characteristics fail to provide complete missing component specifics.A new point cloud completion method named Point-PC is proposed.A memory network and a causal inference model are separately designed to introduce shape priors and select absent shape information as supplementary geometric factors for aiding completion.Concretely,a memory mechanism is proposed to store complete shape features and their associated shapes in a key-value format.The authors design a pre-training strategy that uses contrastive learning to map incomplete shape features into the complete shape feature domain,enabling retrieval of analogous shapes from incomplete inputs.In addition,the authors employ backdoor adjustment to eliminate confounders,which are shape prior components sharing identical semantic structures with incomplete inputs.Experiments conducted on three datasets show that our method achieves superior performance compared to state-of-the-art approaches.The code for Point-PC can be accessed by https://github.com/bizbard/Point-PC.git.
基金the National Key R&D Program of China(No.2022YFB3707303)National Natural Science Foundation of China(No.52293471)。
文摘Understanding the conformational characteristics of polymers is key to elucidating their physical properties.Cyclic polymers,defined by their closed-loop structures,inherently differ from linear polymers possessing distinct chain ends.Despite these structural differences,both types of polymers exhibit locally random-walk-like conformations,making it challenging to detect subtle spatial variations using conventional methods.In this study,we address this challenge by integrating molecular dynamics simulations with point cloud neural networks to analyze the spatial conformations of cyclic and linear polymers.By utilizing the Dynamic Graph CNN(DGCNN)model,we classify polymer conformations based on the 3D coordinates of monomers,capturing local and global topological differences without considering chain connectivity sequentiality.Our findings reveal that the optimal local structural feature unit size scales linearly with molecular weight,aligning with theoretical predictions.Additionally,interpretability techniques such as Grad-CAM and SHAP identify significant conformational differences:cyclic polymers tend to form prolate ellipsoid shapes with pronounced elongation along the major axis,while linear polymers show elongated ends with more spherical centers.These findings reveal subtle yet critical differences in local conformations between cyclic and linear polymers that were previously difficult to discern,providing deeper insights into polymer structure-property relationships and offering guidance for future polymer science advancements.
基金supported by the National Key R&D Program of China(No.2023YFC3081200)the National Natural Science Foundation of China(No.42077264)the Scientific Research Project of PowerChina Huadong Engineering Corporation Limited(HDEC-2022-0301).
文摘Rock discontinuities control rock mechanical behaviors and significantly influence the stability of rock masses.However,existing discontinuity mapping algorithms are susceptible to noise,and the calculation results cannot be fed back to users timely.To address this issue,we proposed a human-machine interaction(HMI)method for discontinuity mapping.Users can help the algorithm identify the noise and make real-time result judgments and parameter adjustments.For this,a regular cube was selected to illustrate the workflows:(1)point cloud was acquired using remote sensing;(2)the HMI method was employed to select reference points and angle thresholds to detect group discontinuity;(3)individual discontinuities were extracted from the group discontinuity using a density-based cluster algorithm;and(4)the orientation of each discontinuity was measured based on a plane fitting algorithm.The method was applied to a well-studied highway road cut and a complex natural slope.The consistency of the computational results with field measurements demonstrates its good accuracy,and the average error in the dip direction and dip angle for both cases was less than 3.Finally,the computational time of the proposed method was compared with two other popular algorithms,and the reduction in computational time by tens of times proves its high computational efficiency.This method provides geologists and geological engineers with a new idea to map rapidly and accurately rock structures under large amounts of noises or unclear features.
文摘This study presents a framework for the semi-automatic detection of rock discontinuities using a threedimensional(3D)point cloud(PC).The process begins by selecting an appropriate neighborhood size,a critical step for feature extraction from the PC.The effects of different neighborhood sizes(k=5,10,20,50,and 100)have been evaluated to assess their impact on classification performance.After that,17 geometric and spatial features were extracted from the PC.Next,ensemble methods,AdaBoost.M2,random forest,and decision tree,have been compared with Artificial Neural Networks to classify the main discontinuity sets.The McNemar test indicates that the classifiers are statistically significant.The random forest classifier consistently achieves the highest performance with an accuracy exceeding 95%when using a neighborhood size of k=100,while recall,F-score,and Cohen's Kappa also demonstrate high success.SHapley Additive exPlanations(SHAP),an Explainable AI technique,has been used to evaluate feature importance and improve the explainability of black-box machine learning models in the context of rock discontinuity classification.The analysis reveals that features such as normal vectors,verticality,and Z-values have the greatest influence on identifying main discontinuity sets,while linearity,planarity,and eigenvalues contribute less,making the model more transparent and easier to understand.After classification,individual discontinuity sets were detected using a revised DBSCAN from the main discontinuity sets.Finally,the orientation parameters of the plane fitted to each discontinuity were derived from the plane parameters obtained using the Random Sample Consensus(RANSAC).Two real-world datasets(obtained from SfM and LiDAR)and one synthetic dataset were used to validate the proposed method,which successfully identified rock discontinuities and their orientation parameters(dip angle/direction).
基金the National Natural Science Foundation of China(Nos.42176188,42176192)the Hainan Provincial Natural Science Foundation of China(No.421CXTD442)+2 种基金the Stable Supporting Fund of Acoustic Science and Technology Laboratory(No.JCKYS2024604SSJS007)the Fundamental Research Funds for the Central Universities(No.3072024CFJ0504)the Harbin Engineering University Doctoral Research and Innovation Fund(No.XK2050021034)。
文摘In the task of inspecting underwater suspended pipelines,multi-beam sonar(MBS)can provide two-dimensional water column images(WCIs).However,systematic interferences(e.g.,sidelobe effects)may induce misdetection in WCIs.To address this issue and improve the accuracy of detection,we developed a density-based clustering method for three-dimensional water column point clouds.During the processing of WCIs,sidelobe effects are mitigated using a bilateral filter and brightness transformation.The cross-sectional point cloud of the pipeline is then extracted by using the Canny operator.In the detection phase,the target is identified by using density-based spatial clustering of applications with noise(DBSCAN).However,the selection of appropriate DBSCAN parameters is obscured by the uneven distribution of the water column point cloud.To overcome this,we propose an improved DBSCAN based on a parameter interval estimation method(PIE-DBSCAN).First,kernel density estimation(KDE)is used to determine the candidate interval of parameters,after which the exact cluster number is determined via density peak clustering(DPC).Finally,the optimal parameters are selected by comparing the mean silhouette coefficients.To validate the performance of PIE-DBSCAN,we collected water column point clouds from an anechoic tank and the South China Sea.PIE-DBSCAN successfully detected both the target points of the suspended pipeline and non-target points on the seafloor surface.Compared to the K-Means and Mean-Shift algorithms,PIE-DBSCAN demonstrates superior clustering performance and shows feasibility in practical applications.
基金supported by the National Natural Science Foundation of China(Grant Nos.41941017 and 42177139)Graduate Innovation Fund of Jilin University(Grant No.2024CX099)。
文摘The spatial distribution of discontinuities and the size of rock blocks are the key indicators for rock mass quality evaluation and rockfall risk assessment.Traditional manual measurement is often dangerous or unreachable at some high-steep rock slopes.In contrast,unmanned aerial vehicle(UAV)photogrammetry is not limited by terrain conditions,and can efficiently collect high-precision three-dimensional(3D)point clouds of rock masses through all-round and multiangle photography for rock mass characterization.In this paper,a new method based on a 3D point cloud is proposed for discontinuity identification and refined rock block modeling.The method is based on four steps:(1)Establish a point cloud spatial topology,and calculate the point cloud normal vector and average point spacing based on several machine learning algorithms;(2)Extract discontinuities using the density-based spatial clustering of applications with noise(DBSCAN)algorithm and fit the discontinuity plane by combining principal component analysis(PCA)with the natural breaks(NB)method;(3)Propose a method of inserting points in the line segment to generate an embedded discontinuity point cloud;and(4)Adopt a Poisson reconstruction method for refined rock block modeling.The proposed method was applied to an outcrop of an ultrahigh steep rock slope and compared with the results of previous studies and manual surveys.The results show that the method can eliminate the influence of discontinuity undulations on the orientation measurement and describe the local concave-convex characteristics on the modeling of rock blocks.The calculation results are accurate and reliable,which can meet the practical requirements of engineering.
文摘Few-shot point cloud 3D object detection(FS3D)aims to identify and locate objects of novel classes within point clouds using knowledge acquired from annotated base classes and a minimal number of samples from the novel classes.Due to imbalanced training data,existing FS3D methods based on fully supervised learning can lead to overfitting toward base classes,which impairs the network’s ability to generalize knowledge learned from base classes to novel classes and also prevents the network from extracting distinctive foreground and background representations for novel class objects.To address these issues,this thesis proposes a category-agnostic contrastive learning approach,enhancing the generalization and identification abilities for almost unseen categories through the construction of pseudo-labels and positive-negative sample pairs unrelated to specific classes.Firstly,this thesis designs a proposal-wise context contrastive module(CCM).By reducing the distance between foreground point features and increasing the distance between foreground and background point features within a region proposal,CCM aids the network in extracting more discriminative foreground and background feature representations without reliance on categorical annotations.Secondly,this thesis utilizes a geometric contrastive module(GCM),which enhances the network’s geometric perception capability by employing contrastive learning on the foreground point features associated with various basic geometric components,such as edges,corners,and surfaces,thereby enabling these geometric components to exhibit more distinguishable representations.This thesis also combines category-aware contrastive learning with former modules to maintain categorical distinctiveness.Extensive experimental results on FS-SUNRGBD and FS-ScanNet datasets demonstrate the effectiveness of this method with average precision exceeding the baseline by up to 8%.
基金supported by the National Natural Science Foundation of China(Grant No.42407232)the Sichuan Science and Technology Program(Grant No.2024NSFSC0826).
文摘Recognizing discontinuities within rock masses is a critical aspect of rock engineering.The development of remote sensing technologies has significantly enhanced the quality and quantity of the point clouds collected from rock outcrops.In response,we propose a workflow that balances accuracy and efficiency to extract discontinuities from massive point clouds.The proposed method employs voxel filtering to downsample point clouds,constructs a point cloud topology using K-d trees,utilizes principal component analysis to calculate the point cloud normals,and employs the pointwise clustering(PWC)algorithm to extract discontinuities from rock outcrop point clouds.This method provides information on the location and orientation(dip direction and dip angle)of the discontinuities,and the modified whale optimization algorithm(MWOA)is utilized to identify major discontinuity sets and their average orientations.Performance evaluations based on three real cases demonstrate that the proposed method significantly reduces computational time costs without sacrificing accuracy.In particular,the method yields more reasonable extraction results for discontinuities with certain undulations.The presented approach offers a novel tool for efficiently extracting discontinuities from large-scale point clouds.
文摘Volume parameter is the basic content of a spatial body object morphology analysis.However,the challenge lies in the volume calculation of irregular objects.The point cloud slicing method proposed in this study effectively works in calculating the volume of the point cloud of the spatial object obtained through three-dimensional laser scanning(3DLS).In this method,a uniformly spaced sequent slicing process is first conducted in a specific direction on the point cloud of the spatial object obtained through 3DLS.A series of discrete point cloud slices corresponding to the point cloud bodies are then obtained.Subsequently,the outline boundary polygon of the point cloud slicing is searched one by one in accordance with the slicing sequence and areas of the polygon.The point cloud slice is also calculated.Finally,the individual point cloud section volume is calculated through the slicing areas and the adjacent slicing gap.Thus,the total volume of the scanned spatial object can be calculated by summing up the individual volumes.According to the results and analysis of the calculated examples,the slice-based volume-calculating method for the point cloud of irregular objects obtained through 3DLS is correct,concise in process,reliable in results,efficient in calculation methods,and controllable on accuracy.This method comes as a good solution to the volume calculation of irregular objects.
文摘Point cloud compression is critical to deploy 3D representation of the physical world such as 3D immersive telepresence,autonomous driving,and cultural heritage preservation.However,point cloud data are distributed irregularly and discontinuously in spatial and temporal domains,where redundant unoccupied voxels and weak correlations in 3D space make achieving efficient compression a challenging problem.In this paper,we propose a spatio-temporal context-guided algorithm for lossless point cloud geometry compression.The proposed scheme starts with dividing the point cloud into sliced layers of unit thickness along the longest axis.Then,it introduces a prediction method where both intraframe and inter-frame point clouds are available,by determining correspondences between adjacent layers and estimating the shortest path using the travelling salesman algorithm.Finally,the few prediction residual is efficiently compressed with optimal context-guided and adaptive fastmode arithmetic coding techniques.Experiments prove that the proposed method can effectively achieve low bit rate lossless compression of point cloud geometric information,and is suitable for 3D point cloud compression applicable to various types of scenes.
基金This work was supported by National Nature Science Foundation of China(No.61811530281 and 61861136009)Guangdong Regional Joint Foundation(No.2019B1515120076)the Fundamental Research for the Central Universities.
文摘In this paper,a novel compression framework based on 3D point cloud data is proposed for telepresence,which consists of two parts.One is implemented to remove the spatial redundancy,i.e.,a robust Bayesian framework is designed to track the human motion and the 3D point cloud data of the human body is acquired by using the tracking 2D box.The other part is applied to remove the temporal redundancy of the 3D point cloud data.The temporal redundancy between point clouds is removed by using the motion vector,i.e.,the most similar cluster in the previous frame is found for the cluster in the current frame by comparing the cluster feature and the cluster in the current frame is replaced by the motion vector for compressing the current frame.The hrst,the B-SHOT(binary signatures of histograms orientation)descriptor is applied to represent the point feature for matching the corresponding point between two frames.The second,the K-mean algorithm is used to generate the cluster because there are a lot of unsuccessfully matched points in the current frame.The matching operation is exploited to find the corresponding clusters between the point cloud data of two frames.Finally,the cluster information in the current frame is replaced by the motion vector for compressing the current frame and the unsuccessfully matched clusters in the curren t and the motion vectors are transmit ted into the rem ote end.In order to reduce calculation time of the B-SHOT descriptor,we introduce an octree structure into the B-SHOT descriptor.In particular,in order to improve the robustness of the matching operation,we design the cluster feature to estimate the similarity bet ween two clusters.Experimen tai results have shown the bet ter performance of the proposed method due to the lower calculation time and the higher compression ratio.The proposed met hod achieves the compression ratio of 8.42 and the delay time of 1228 ms compared with the compression ratio of 5.99 and the delay time of 2163 ms in the octree-based compression method under conditions of similar distortion rate.
基金This research was supported by the National Natural Science Foundation of China[grant number 41471360]the Fundamental Research Funds for the Central Universities[grant number 2652015176].
文摘Based on Bayesian theory and RANSAC,this paper applies Bayesian Sampling Consensus(BaySAC)method using convergence evaluation of hypothesis models in indoor point cloud processing.We implement a conditional sampling method,BaySAC,to always select the minimum number of required data with the highest inlier probabilities.Because the primitive parameters calculated by the different inlier sets should be convergent,this paper presents a statistical testing algorithm for a candidate model parameter histogram to compute the prior probability of each data point.Moreover,the probability update is implemented using the simplified Bayes’formula.The performances of the BaySAC algorithm with the proposed strategies of the prior probability determination and the RANSAC framework are compared using real data-sets.The experimental results indicate that the more outliers contain the data points,the higher computational efficiency of our proposed algorithm gains compared with RANSAC.The results also indicate that the proposed statistical testing strategy can determine sound prior inlier probability free of the change of hypothesis models.
基金supported by the National Natural Science Foundation Project(No.42130105)Key Laboratory of Spatial-temporal Big Data Analysis and Application of Natural Resources in_Megacities,MNR(No.KFKT-2022-01).
文摘With the rapid development of reality capture methods,such as laser scanning and oblique photogrammetry,point cloud data have become the third most important data source,after vector maps and imagery.Point cloud data also play an increasingly important role in scientific research and engineering in the fields of Earth science,spatial cognition,and smart cities.However,how to acquire high-quality three-dimensional(3D)geospatial information from point clouds has become a scientific frontier,for which there is an urgent demand in the fields of surveying and mapping,as well as geoscience applications.To address the challenges mentioned above,point cloud intelligence came into being.This paper summarizes the state-of-the-art of point cloud intelligence,with regard to acquisition equipment,intelligent processing,scientific research,and engineering applications.For this purpose,we refer to a recent project on the hybrid georeferencing of images and LiDAR data for high-quality point cloud collection,as well as a current benchmark for the semantic segmentation of high-resolution 3D point clouds.These projects were conducted at the Institute for Photogrammetry,the University of Stuttgart,which was initially headed by the late Prof.Ackermann.Finally,the development prospects of point cloud intelligence are summarized.
文摘To address the current issues of inaccurate segmentation and the limited applicability of segmentation methods for building facades in point clouds, we propose a facade segmentation algorithm based on optimal dual-scale feature descriptors. First, we select the optimal dual-scale descriptors from a range of feature descriptors. Next, we segment the facade according to the threshold value of the chosen optimal dual-scale descriptors. Finally, we use RANSAC (Random Sample Consensus) to fit the segmented surface and optimize the fitting result. Experimental results show that, compared to commonly used facade segmentation algorithms, the proposed method yields more accurate segmentation results, providing a robust data foundation for subsequent 3D model reconstruction of buildings.
基金supported by the National Natural Science Foundation of China (Grant Nos.91948203,52075532).
文摘As 3D acquisition technology develops and 3D sensors become increasingly affordable,large quantities of 3D point cloud data are emerging.How to effectively learn and extract the geometric features from these point clouds has become an urgent problem to be solved.The point cloud geometric information is hidden in disordered,unstructured points,making point cloud analysis a very challenging problem.To address this problem,we propose a novel network framework,called Tree Graph Network(TGNet),which can sample,group,and aggregate local geometric features.Specifically,we construct a Tree Graph by explicit rules,which consists of curves extending in all directions in point cloud feature space,and then aggregate the features of the graph through a cross-attention mechanism.In this way,we incorporate more point cloud geometric structure information into the representation of local geometric features,which makes our network perform better.Our model performs well on several basic point clouds processing tasks such as classification,segmentation,and normal estimation,demonstrating the effectiveness and superiority of our network.Furthermore,we provide ablation experiments and visualizations to better understand our network.