Rock discontinuities control rock mechanical behaviors and significantly influence the stability of rock masses. However, existing discontinuity mapping algorithms are susceptible to noise, and their results cannot be fed back to users in a timely manner. To address this issue, we proposed a human-machine interaction (HMI) method for discontinuity mapping, in which users help the algorithm identify noise and make real-time judgments on results and parameter adjustments. A regular cube was selected to illustrate the workflow: (1) a point cloud was acquired using remote sensing; (2) the HMI method was employed to select reference points and angle thresholds to detect group discontinuities; (3) individual discontinuities were extracted from each group using a density-based clustering algorithm; and (4) the orientation of each discontinuity was measured with a plane fitting algorithm. The method was applied to a well-studied highway road cut and a complex natural slope. The consistency of the computational results with field measurements demonstrates good accuracy: the average error in dip direction and dip angle for both cases was less than 3°. Finally, the computational time of the proposed method was compared with two other popular algorithms; a reduction in computational time by tens of times demonstrates its high computational efficiency. This method provides geologists and geological engineers with a new way to map rock structures rapidly and accurately under heavy noise or unclear features.
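Step (4) above, fitting a plane to each extracted discontinuity and reading off its orientation, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; `fit_plane_orientation` is a hypothetical helper, and an east-north-up coordinate frame is assumed:

```python
import numpy as np

def fit_plane_orientation(points):
    """Fit a plane to a discontinuity point cluster via SVD/PCA and
    return (dip_direction, dip_angle) in degrees.
    Assumed convention: x = east, y = north, z = up."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # The plane normal is the right singular vector with the smallest
    # singular value (equivalently, the least-squares plane fit).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    n = vt[-1]
    if n[2] < 0:                 # make the normal point upward
        n = -n
    nx, ny, nz = n
    dip_angle = np.degrees(np.arccos(np.clip(nz, -1.0, 1.0)))
    # Dip direction: azimuth of the horizontal part of the normal,
    # measured clockwise from north.
    dip_direction = np.degrees(np.arctan2(nx, ny)) % 360.0
    return dip_direction, dip_angle

# Example: points scattered on a plane dipping 30 degrees toward the east (090)
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(200, 2))
z = -np.tan(np.radians(30.0)) * xy[:, 0]    # plane z = -tan(30) * x
dd, da = fit_plane_orientation(np.column_stack([xy, z]))
```

For a horizontal plane the dip direction is undefined; a production implementation would guard that case.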
This paper presents an automated method for discontinuity trace mapping using three-dimensional point clouds of rock mass surfaces. Specifically, the method consists of five steps: (1) detection of trace feature points by normal tensor voting theory, (2) contraction of trace feature points, (3) connection of trace feature points, (4) linearization of trace segments, and (5) connection of trace segments. A sensitivity analysis was then conducted to identify the optimal parameters of the proposed method. Three field cases, a natural rock mass outcrop and two excavated rock tunnel surfaces, were analyzed using the proposed method to evaluate its validity and efficiency. The results show that the proposed method is more efficient and accurate than the traditional trace mapping method, and the efficiency enhancement becomes more pronounced as the number of feature points increases.
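The normal tensor voting idea behind step (1) can be illustrated with a small sketch. Assumptions: unit normals of a point's neighbors are already estimated, `edge_saliency` is a hypothetical name, and the flat-vs-crease criterion is deliberately simplified relative to full tensor voting theory:

```python
import numpy as np

def edge_saliency(normals):
    """Build the normal voting tensor T = sum_i n_i n_i^T from a point's
    neighbor normals and return its eigenvalues in descending order.
    Rough reading: one dominant eigenvalue -> flat surface point;
    two comparable eigenvalues -> crease / trace feature point."""
    N = np.asarray(normals, dtype=float)
    T = N.T @ N                   # 3x3 voting tensor
    w = np.linalg.eigvalsh(T)     # ascending eigenvalues
    return w[::-1]                # descending

# Neighbors on a flat face: identical normals -> rank-1 tensor
flat = np.tile([0.0, 0.0, 1.0], (10, 1))
w_flat = edge_saliency(flat)

# Neighbors straddling an edge between two faces -> two strong eigenvalues
edge = np.vstack([np.tile([0.0, 0.0, 1.0], (5, 1)),
                  np.tile([1.0, 0.0, 0.0], (5, 1))])
w_edge = edge_saliency(edge)
```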
Recently, there have been some attempts to apply Transformers to 3D point cloud classification. In order to reduce computation, most existing methods focus on local spatial attention, but they ignore point content and fail to establish relationships between distant but relevant points. To overcome the limitation of local spatial attention, we propose a point content-based Transformer architecture, called PointConT for short. It exploits the locality of points in the feature space (content-based): sampled points with similar features are clustered into the same class, and self-attention is computed within each class, enabling an effective trade-off between capturing long-range dependencies and computational complexity. We further introduce an inception feature aggregator for point cloud classification, which uses parallel structures to aggregate high-frequency and low-frequency information in separate branches. Extensive experiments show that our PointConT model achieves remarkable performance on point cloud shape classification. In particular, our method reaches 90.3% Top-1 accuracy on the hardest setting of ScanObjectNN. Source code is available at https://github.com/yahuiliu99/PointConT.
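A toy sketch of the content-based idea, clustering features and attending only within each cluster, might look like the following. This is an untrained NumPy illustration with naive k-means and identity Q/K/V projections, not the PointConT architecture itself:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def content_clustered_attention(feats, n_clusters=2, iters=5):
    """Group point features with plain k-means in feature space, then run
    (untrained) self-attention only within each group, so attention cost
    scales with group size rather than N. Deterministic naive center
    initialization, for the demo only."""
    X = np.asarray(feats, dtype=float)
    centers = X[np.linspace(0, len(X) - 1, n_clusters).astype(int)].copy()
    for _ in range(iters):                       # k-means in feature space
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(n_clusters):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    out = np.empty_like(X)
    scale = np.sqrt(X.shape[1])
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        if idx.size == 0:
            continue
        Q = K = V = X[idx]       # identity projections stand in for learned ones
        A = softmax(Q @ K.T / scale)
        out[idx] = A @ V
    return out, labels

# Two well-separated feature groups: attention mixes points only within a group
feats = np.vstack([np.zeros((5, 4)), 10.0 * np.ones((5, 4))])
out, labels = content_clustered_attention(feats)
```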
Gobi covers a large area of China, surpassing the combined expanse of mobile and semi-fixed dunes, and its presence significantly influences the movement of sand and dust. However, the complex origins and diverse materials of the Gobi result in notable differences in saltation processes across various Gobi surfaces, which are difficult to describe with a uniform morphology. It therefore becomes imperative to characterize the surface through parameters such as the three-dimensional (3D) size and shape of gravel; collecting morphology information for Gobi gravels is essential for studying Gobi genesis and sand saltation. To enhance the efficiency and information yield of gravel parameter measurements, this study conducted field experiments in the Gobi regions of Dunhuang City, Guazhou County, and Yumen City (administered by Jiuquan City), Gansu Province, China in March 2023. A research framework and methodology for measuring the 3D parameters of gravel using point clouds were developed, alongside improved calculation formulas for 3D parameters including gravel grain size, volume, flatness, roundness, sphericity, and equivalent grain size. Three-dimensional reconstruction by multi-view geometry allowed an optimal data acquisition scheme to be established, characterized by high point cloud reconstruction efficiency and good reconstruction quality. The proposed methodology incorporates point cloud clustering, segmentation, and filtering techniques to isolate individual gravel point clouds. Point cloud algorithms, including the oriented bounding box (OBB), point cloud slicing, and point cloud triangulation, are then deployed to calculate the 3D parameters of individual gravels. These systematic processes allow precise and detailed characterization of individual gravels. For gravel grain size and volume, the correlation coefficients between point cloud and manual measurements all exceeded 0.9000, confirming the feasibility of the proposed methodology. The workflow yields accurate calculations of the relevant parameters of Gobi gravels, providing essential data support for subsequent studies of Gobi environments.
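The OBB-based grain-size step can be approximated with a PCA bounding box. The sketch below is an assumption-laden stand-in (a PCA box only approximates the minimum-volume OBB, and `pca_obb_extents` is a hypothetical helper), not the study's actual pipeline:

```python
import numpy as np

def pca_obb_extents(points):
    """Approximate oriented-bounding-box side lengths (long >= intermediate
    >= short axis) of a gravel point cloud via PCA."""
    P = np.asarray(points, dtype=float)
    C = P - P.mean(axis=0)
    _, _, vt = np.linalg.svd(C, full_matrices=False)
    proj = C @ vt.T                      # coordinates in the principal axes
    extents = proj.max(axis=0) - proj.min(axis=0)
    return np.sort(extents)[::-1]        # a, b, c axis lengths

# Example: the 8 corner points of a 4 x 2 x 1 box
corners = np.array([[x, y, z] for x in (0, 4.0)
                              for y in (0, 2.0)
                              for z in (0, 1.0)])
a, b, c = pca_obb_extents(corners)
```

Shape descriptors such as flatness or sphericity could then be derived from the `a`, `b`, `c` axis lengths, whichever formula the study adopts.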
The classification of point cloud data is a key technology for point cloud information acquisition and 3D reconstruction, with a wide range of applications. However, existing point cloud classification methods have shortcomings when extracting features: local information is extracted insufficiently, information in neighboring features is overlooked, and the channel and spatial information of the point cloud is not attended to. To solve these problems, a point cloud classification network based on graph convolution and a fused attention mechanism is proposed to achieve more accurate classification results. First, the points are regarded as nodes on a graph, the k-nearest neighbor algorithm is used to construct the graph, and information between points is dynamically captured by stacking multiple graph convolution layers. Then, drawing on experience with attention mechanisms on 2D images, an attention mechanism that attends to both the spatial and channel information of the point cloud is introduced to enrich feature information, aggregate useful local features, and suppress useless ones. Classification experiments on the ModelNet40 dataset show that, compared with the PointNet network, which does not consider local feature information, the average classification accuracy of the proposed model improves by 4.4% and the overall classification accuracy improves by 4.4%. Compared with other networks, the classification accuracy of the proposed model is also improved.
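A single graph-convolution step over a k-nearest-neighbor graph can be sketched as below. This untrained NumPy toy follows the common EdgeConv pattern (relative-offset edge features with max aggregation) rather than the exact layer proposed in the paper:

```python
import numpy as np

def knn_edge_features(points, k=3):
    """Build a k-NN graph and, for each point, aggregate the EdgeConv-style
    edge feature max_j (x_j - x_i), concatenated with the node feature x_i.
    A simplified, untrained sketch of one graph-convolution layer."""
    X = np.asarray(points, dtype=float)
    d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)             # exclude self from neighbors
    nbrs = np.argsort(d2, axis=1)[:, :k]     # indices of k nearest neighbors
    edge = X[nbrs] - X[:, None, :]           # (N, k, 3) relative offsets
    agg = edge.max(axis=1)                   # max-pool over neighbors
    return np.concatenate([X, agg], axis=1)  # (N, 6) node + edge feature

pts = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [5, 5, 5]])
feats = knn_edge_features(pts, k=2)
```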
A subject wearing a suitable robotic device will be able to walk in complex environments with the aid of environmental recognition schemes that provide reliable prior information about the human motion intent. Researchers have utilized 1D laser signals and 2D depth images to classify environments, but those approaches can suffer from self-occlusion. In comparison, a 3D point cloud is more appropriate for depicting the environment. This paper proposes a directional PointNet to directly classify the 3D point cloud. First, an inertial measurement unit (IMU) is used to offset the orientation of the point cloud. The directional PointNet can then accurately classify commonly traversed terrains, including level ground, climbing up stairways, and walking down stairs. A classification accuracy of 98% has been achieved in tests. Moreover, the directional PointNet is more efficient than the previously used PointNet, because the T-net, which estimates the transformation of the point cloud, is omitted in the present approach, and the length of the global feature is optimized. The experimental results demonstrate that the directional PointNet can classify environments in a robust and efficient manner.
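The IMU orientation-offset step amounts to rotating the cloud into a gravity-aligned frame. A sketch, assuming ZYX Euler angles from the IMU (the abstract does not state the exact convention, so both helper names and the convention are assumptions):

```python
import numpy as np

def rotation_from_imu(roll, pitch, yaw):
    """Rotation matrix (ZYX convention, radians) from IMU Euler angles."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def deskew_cloud(points, roll, pitch, yaw):
    """Re-express sensor-frame points in a gravity-aligned world frame:
    p_world = R @ p_sensor, done row-wise as P @ R.T."""
    R = rotation_from_imu(roll, pitch, yaw)
    return np.asarray(points, dtype=float) @ R.T

# A unit 'up' vector as seen by a sensor pitched 30 degrees, then restored
R = rotation_from_imu(0.0, np.radians(30.0), 0.0)
up_in_sensor = R.T @ np.array([0.0, 0.0, 1.0])
restored = deskew_cloud(up_in_sensor[None, :], 0.0, np.radians(30.0), 0.0)[0]
```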
The spatial distribution of discontinuities and the size of rock blocks are key indicators for rock mass quality evaluation and rockfall risk assessment. Traditional manual measurement is often dangerous or unreachable on high and steep rock slopes. In contrast, unmanned aerial vehicle (UAV) photogrammetry is not limited by terrain conditions and can efficiently collect high-precision three-dimensional (3D) point clouds of rock masses through all-round, multi-angle photography for rock mass characterization. In this paper, a new method based on a 3D point cloud is proposed for discontinuity identification and refined rock block modeling. The method comprises four steps: (1) establish a point cloud spatial topology, and calculate the point cloud normal vectors and average point spacing based on several machine learning algorithms; (2) extract discontinuities using the density-based spatial clustering of applications with noise (DBSCAN) algorithm, and fit the discontinuity plane by combining principal component analysis (PCA) with the natural breaks (NB) method; (3) insert points along line segments to generate an embedded discontinuity point cloud; and (4) adopt Poisson reconstruction for refined rock block modeling. The proposed method was applied to an outcrop of an ultrahigh steep rock slope and compared with the results of previous studies and manual surveys. The results show that the method can eliminate the influence of discontinuity undulations on orientation measurement and can describe local concave-convex characteristics in the modeling of rock blocks. The calculation results are accurate and reliable, meeting the practical requirements of engineering.
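Step (2), grouping discontinuities with DBSCAN, can be sketched with a minimal pure-NumPy DBSCAN run on unit normal vectors. This is an illustrative reimplementation, not the authors' code; a real pipeline would use an optimized library implementation:

```python
import numpy as np

def dbscan(X, eps, min_pts):
    """Minimal DBSCAN: expand clusters from core points (those with at
    least min_pts neighbors within eps); label -1 marks noise."""
    X = np.asarray(X, dtype=float)
    n = len(X)
    dist = np.linalg.norm(X[:, None] - X[None], axis=-1)
    nbrs = [np.flatnonzero(dist[i] <= eps) for i in range(n)]
    labels = np.full(n, -1)
    cid = 0
    for i in range(n):
        if labels[i] != -1 or len(nbrs[i]) < min_pts:
            continue
        labels[i] = cid
        stack = list(nbrs[i])
        while stack:                      # breadth/depth-first expansion
            j = stack.pop()
            if labels[j] == -1:
                labels[j] = cid
                if len(nbrs[j]) >= min_pts:   # core point: keep expanding
                    stack.extend(nbrs[j])
        cid += 1
    return labels

# Unit normals of two joint sets plus one stray normal (noise)
rng = np.random.default_rng(1)
set_a = np.array([0.0, 0, 1]) + rng.normal(scale=0.02, size=(20, 3))
set_b = np.array([1.0, 0, 0]) + rng.normal(scale=0.02, size=(20, 3))
stray = np.array([[0.577, 0.577, 0.577]])
labels = dbscan(np.vstack([set_a, set_b, stray]), eps=0.2, min_pts=5)
```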
An integrated processing system for three-dimensional laser scanning information visualization in goafs was developed. It provides multiple functions, such as laser scanning information management for goafs, point cloud de-noising and optimization, construction, display, and manipulation of three-dimensional models, model editing, profile generation, calculation of goaf volume and roof area, Boolean operations among models, and interaction with third-party software. The system has a concise interface and plentiful data input/output interfaces, and it features high integration with simple, convenient operation. In practice, the system has proven well adapted, reliable, and stable.
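Goaf volume computation from a closed triangulated model is typically done with signed tetrahedra via the divergence theorem; the sketch below assumes a consistently outward-oriented watertight mesh (the system's internal algorithm is not documented in the abstract, so this is a generic illustration):

```python
import numpy as np

def mesh_volume(vertices, faces):
    """Volume of a closed triangulated mesh: sum of signed tetrahedron
    volumes v0 . (v1 x v2) / 6 over all triangles. Faces must be
    consistently oriented with outward normals."""
    V = np.asarray(vertices, dtype=float)
    F = np.asarray(faces, dtype=int)
    v0, v1, v2 = V[F[:, 0]], V[F[:, 1]], V[F[:, 2]]
    return np.einsum('ij,ij->i', v0, np.cross(v1, v2)).sum() / 6.0

# Unit cube: 8 vertices (index = 4x + 2y + z), 12 outward-oriented triangles
verts = np.array([[x, y, z] for x in (0, 1)
                            for y in (0, 1)
                            for z in (0, 1)], dtype=float)
faces = [[0, 2, 6], [0, 6, 4],   # bottom (z = 0)
         [1, 5, 7], [1, 7, 3],   # top (z = 1)
         [0, 1, 3], [0, 3, 2],   # x = 0
         [4, 6, 7], [4, 7, 5],   # x = 1
         [0, 4, 5], [0, 5, 1],   # y = 0
         [2, 3, 7], [2, 7, 6]]   # y = 1
```

The same face list evaluated on scaled vertices scales the volume cubically, a quick sanity check on the orientation convention.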
In the last two decades, significant research has been conducted in the field of automated extraction of rock mass discontinuity characteristics from three-dimensional (3D) models. This has produced several methodologies for acquiring discontinuity measurements from 3D models, such as point clouds generated using laser scanning or photogrammetry. However, even with numerous automated and semiautomated methods presented in the literature, no single method can automatically characterize discontinuities both accurately and quickly. In this paper, we critically review the existing methods for extracting discontinuity characteristics, such as joint sets and orientations, persistence, joint spacing, roughness, and block size, from point clouds, digital elevation maps, or meshes, and we identify the strengths and drawbacks of each method. We found that approaches based on voxels and region growing are superior in extracting joint planes from 3D point clouds. Normal tensor voting with a trace growth algorithm is a robust method for measuring joint trace length from 3D meshes. Spacing is estimated by calculating the perpendicular distance between joint planes. Several independent roughness indices have been presented to quantify roughness from 3D surface models, but these indices still need to be incorporated into automated methodologies. Finally, there is a lack of efficient algorithms for direct computation of block size from 3D rock mass surface models.
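The spacing estimate mentioned above, the perpendicular distance between joint planes of a set, reduces to projecting one representative point per plane onto the mean set normal and differencing the sorted offsets. A small sketch under that assumption (helper name and data are illustrative):

```python
import numpy as np

def set_spacing(plane_points, normal):
    """Normal-set spacing: project one representative point of each
    (sub)parallel joint plane onto the unit set normal and take the
    successive differences of the sorted offsets."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    offsets = np.sort([np.dot(p, n) for p in plane_points])
    return np.diff(offsets)          # one spacing value per adjacent pair

# Three parallel planes with normal (0, 0, 1) at heights 0, 1.5, and 4.0
reps = [[0.0, 0.0, 0.0], [3.0, -2.0, 1.5], [-1.0, 7.0, 4.0]]
gaps = set_spacing(reps, [0, 0, 1])
```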
As a kind of flexible three-dimensional geometric data, point clouds can support many challenging tasks, provided the rich information in their geometric topology can be deeply analyzed. Because point cloud data are sparse, disordered, and invariant to rotation, the success of convolutional neural networks on 2D images cannot be directly reproduced on point clouds. In this paper, we propose WECNN, the Weight-Edge Convolutional Neural Network, which has an excellent ability to exploit local structural features. At the core of WECNN, a novel convolution operator called WEConv captures structural features by constructing a fixed number of directed graphs and extracting the edge information of each graph to analyze local regions of the point cloud. Moreover, a weight function is designed per task to assign weights to the edges, making feature extraction on the edges more fine-grained and robust. WECNN achieves an overall accuracy of 93.8% and a mean class accuracy of 91.6% on the ModelNet40 dataset, as well as a mean IoU of 85.5% on the ShapeNet Part dataset. Results of extensive experiments show that WECNN outperforms other classification and segmentation approaches on challenging benchmarks.
Precise classification of Light Detection and Ranging (LiDAR) point clouds is a fundamental process in various applications, such as land cover mapping, forestry management, and autonomous driving. Due to the lack of spectral information, existing research on single-wavelength LiDAR classification is limited. Spectral information from images could address this limitation, but data fusion suffers from varying illumination conditions and registration problems. Novel multispectral LiDAR obtains spatial and spectral information together as a brand-new data type, the multispectral point cloud, thereby improving classification performance. However, the spatial and spectral information of multispectral LiDAR has been processed separately in previous studies, possibly limiting classification performance. To explore the potential of this new data type, we present a spatial-spectral classification framework for multispectral LiDAR that includes four steps: (1) neighborhood selection, (2) feature extraction and selection, (3) classification, and (4) label smoothing. The framework contains three novel highlights. (1) We improve the popular eigen-entropy-based neighborhood selection with spectral angle match to extract a more precise neighborhood. (2) We evaluate the importance of geometric and spectral features to compare their contributions and select the most important features to reduce feature redundancy. (3) We conduct spatial label smoothing with a conditional random field, accounting for the spatial and spectral information of the neighborhood points. The proposed method is demonstrated on a multispectral LiDAR with three channels: 466 nm (blue), 527 nm (green), and 628 nm (red). Experimental results demonstrate the effectiveness of the proposed spatial-spectral classification framework. Moreover, this research takes advantage of the complementarity of spatial and spectral information, which benefits more precise neighborhood selection, more effective features, and satisfactory refinement of the classification result. Finally, this study can serve as an inspiration for future efficient spatial-spectral processing of multispectral point clouds.
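The spectral angle match used in highlight (1) can be sketched as the classic spectral angle mapper. The three-band reflectance values below are made-up illustrations, not data from the study:

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral angle (radians) between two spectra; a small angle means
    similar spectral shape regardless of overall brightness."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Hypothetical three-channel returns (466 / 527 / 628 nm): a brighter copy
# of the same material has zero angle; a different material has a larger one.
grass = np.array([0.12, 0.35, 0.10])
grass_bright = 2.0 * grass
asphalt = np.array([0.20, 0.21, 0.22])
```

Filtering a candidate neighborhood by this angle keeps only spectrally consistent points, which is the intuition behind the refined neighborhood selection.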
Data augmentation is a widely used regularization strategy in deep neural networks to mitigate overfitting and enhance generalization. In the context of point cloud data, mixing two samples to generate new training examples has proven to be effective. In this paper, we propose a novel and effective approach called Farthest Point Sampling Mix (FPSMix) for augmenting point cloud data. Our method leverages farthest point sampling, a technique used in point cloud processing, to generate new samples by mixing points from two original point clouds. Another key innovation of our approach is the introduction of a significance-based loss function, which assigns weights to the soft labels of the mixed samples based on the classification loss of each part of the new sample that is separated from the two original point clouds. This way, our method takes into account the importance of different parts of the mixed sample during the training process, allowing the model to learn better global features. Experimental results demonstrate that our FPSMix, combined with the significance-based loss function, improves the classification accuracy of point cloud models and achieves comparable performance with state-of-the-art data augmentation methods. Moreover, our approach is complementary to techniques that focus on local features, and their combined use further enhances the classification accuracy of the baseline model.
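Farthest point sampling, the building block FPSMix leverages, can be sketched as the classic greedy algorithm below (the mixing procedure and significance-based loss are not reproduced; only vanilla FPS is shown):

```python
import numpy as np

def farthest_point_sampling(points, m, start=0):
    """Classic greedy farthest point sampling: repeatedly pick the point
    farthest from the already-selected set, giving good spatial coverage."""
    X = np.asarray(points, dtype=float)
    chosen = [start]
    d = np.linalg.norm(X - X[start], axis=1)   # distance to selected set
    for _ in range(m - 1):
        nxt = int(np.argmax(d))
        chosen.append(nxt)
        d = np.minimum(d, np.linalg.norm(X - X[nxt], axis=1))
    return np.array(chosen)

# Two nearby points at the origin, plus the corners of the unit square:
# FPS skips the redundant near-duplicate and spreads out.
pts = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
idx = farthest_point_sampling(pts, 3)
```

FPSMix would, roughly speaking, apply such sampling when combining points drawn from two different clouds, so that the mixed sample still covers both shapes evenly.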
In recent years there has been a growing trend in high-energy physics (HEP), in both its experimental and phenomenological studies, to incorporate machine learning (ML) and its specialized branch, deep learning (DL). This review provides a thorough illustration of these applications using different ML and DL approaches. The first part of the paper examines the basics of various particle physics topics and establishes guidelines for assessing particle physics problems alongside the available learning models. Next, a detailed classification is provided for representing jets reconstructed in high-energy collisions, mainly proton-proton collisions at well-defined beam energies. This section covers various datasets, preprocessing techniques, and feature extraction and selection methods. The presented techniques can be applied to future hadron-hadron colliders (HHC), such as the high-luminosity LHC (HL-LHC) and the Future Circular Collider hadron-hadron (FCC-hh). The authors then explore several AI techniques designed specifically for analyzing both image and point-cloud (PC) data in HEP. Additionally, a closer look is taken at the classification tasks associated with jet tagging in hadron collisions. Various state-of-the-art (SOTA) techniques in ML and DL are examined, with a focus on their implications for HEP demands; applications such as jet tagging, jet tracking, and particle classification are addressed in detail. The review concludes with an analysis of the current state of HEP using DL methodologies, highlighting the challenges and potential areas for future research for each application.
Funding (human-machine interaction discontinuity mapping): National Key R&D Program of China (No. 2023YFC3081200); National Natural Science Foundation of China (No. 42077264); Scientific Research Project of PowerChina Huadong Engineering Corporation Limited (HDEC-2022-0301).
Funding (automated discontinuity trace mapping): Special Fund for Basic Research on Scientific Instruments of the National Natural Science Foundation of China (Grant No. 4182780021); Emeishan-Hanyuan Highway Program; Taihang Mountain Highway Program.
Funding (PointConT): National Natural Science Foundation of China (61876011); National Key Research and Development Program of China (2022YFB4703700); Key Research and Development Program 2020 of Guangzhou (202007050002); Key-Area Research and Development Program of Guangdong Province (2020B090921003).
Funding (Gobi gravel 3D parameter measurement): National Natural Science Foundation of China (42071014).
Funding (discontinuity identification and rock block modeling): National Natural Science Foundation of China (Grant Nos. 41941017 and 42177139); Graduate Innovation Fund of Jilin University (Grant No. 2024CX099).
Funding: Project 51274250 supported by the National Natural Science Foundation of China; Project 2012BAK09B02-05 supported by the National Key Technology R&D Program during the 12th Five-Year Plan of China.
Abstract: An integrated processing and visualization system for three-dimensional laser scanning information of goafs was developed. It provides multiple functions, such as laser scanning information management for goafs, point cloud de-noising and optimization, construction, display, and manipulation of three-dimensional models, model editing, profile generation, calculation of goaf volume and roof area, Boolean operations among models, and interaction with third-party software. The system has a concise interface and plentiful data input/output interfaces, and is characterized by high integration and simple, convenient operation. In practice, the system has proven to be well-adapted, reliable, and stable.
Funding: Funded by the U.S. National Institute for Occupational Safety and Health (NIOSH) under Contract No. 75D30119C06044.
Abstract: In the last two decades, significant research has been conducted on the automated extraction of rock mass discontinuity characteristics from three-dimensional (3D) models. This work has produced several methodologies for acquiring discontinuity measurements from 3D models, such as point clouds generated by laser scanning or photogrammetry. However, even with numerous automated and semi-automated methods presented in the literature, no single method can automatically characterize discontinuities accurately in a minimum of time. In this paper, we critically review the existing methods proposed in the literature for extracting discontinuity characteristics such as joint sets and orientations, persistence, joint spacing, roughness, and block size from point clouds, digital elevation maps, or meshes. As a result of this review, we identify the strengths and drawbacks of each method used for extracting those characteristics. We found that approaches based on voxels and region growing are superior for extracting joint planes from 3D point clouds. Normal tensor voting with a trace growth algorithm is a robust method for measuring joint trace length from 3D meshes. Spacing is estimated by calculating the perpendicular distance between joint planes. Several independent roughness indices have been proposed to quantify roughness from 3D surface models, but they have yet to be incorporated into automated methodologies. There is a lack of efficient algorithms for the direct computation of block size from 3D rock mass surface models.
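The spacing estimate described above (perpendicular distance between joint planes) becomes a one-line computation once the planes of a joint set are written as n·x = d with a shared normal n; this sketch assumes that representation:

```python
def joint_spacing(n, d1, d2):
    """Perpendicular spacing between two parallel joint planes
    n·x = d1 and n·x = d2, where n need not be a unit vector."""
    norm = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
    return abs(d1 - d2) / norm
```

Dividing by |n| makes the result independent of how the fitted normal was scaled, so plane-fit outputs can be used directly.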
Funding: Supported by the National Natural Science Foundation of China (Grant No. 61772328).
Abstract: As a flexible kind of three-dimensional geometric data, point clouds can support many challenging tasks, provided that the rich information in their geometric topology can be deeply analyzed. Because point cloud data are sparse, unordered, and rotation-invariant, the success of convolutional neural networks on 2D images cannot be directly reproduced on point clouds. In this paper, we propose WECNN, a Weight-Edge Convolutional Neural Network, which has an excellent ability to exploit local structural features. As the core of WECNN, a novel convolution operator called WEConv captures structural features by constructing a fixed number of directed graphs and extracting the edge information of each graph to further analyze local regions of the point cloud. Moreover, a weight function is designed for different tasks to assign weights to the edges, making feature extraction on the edges more fine-grained and robust. WECNN achieves an overall accuracy of 93.8% and a mean class accuracy of 91.6% on the ModelNet40 dataset, and a mean IoU of 85.5% on the ShapeNet Part dataset. Results of extensive experiments show that WECNN outperforms other classification and segmentation approaches on challenging benchmarks.
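The directed-graph construction at the heart of WEConv can be illustrated with plain k-nearest-neighbour edge vectors; this is a generic sketch of the edge-feature input only, not the paper's implementation (the learned, task-specific weight function is omitted):

```python
def knn_edge_features(points, k):
    """For each point, build directed edges to its k nearest neighbours and
    return the edge vectors (neighbour - centre), the raw input that an
    edge-convolution operator would aggregate."""
    feats = []
    for i, p in enumerate(points):
        # squared distances to every other point, smallest first
        dists = sorted(
            (sum((a - b) ** 2 for a, b in zip(p, q)), j)
            for j, q in enumerate(points) if j != i
        )
        edges = [tuple(points[j][a] - p[a] for a in range(3)) for _, j in dists[:k]]
        feats.append(edges)
    return feats
```

A real implementation would use a spatial index instead of the O(n²) scan and feed the edge vectors through a shared MLP, but the fixed-size directed neighbourhood is the same.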
Funding: Supported by the National Natural Science Foundation of China [Grant No. 41971307], the Fundamental Research Funds for the Central Universities [Grant Nos. 2042022kf1200 and 2042023kf0217], the Wuhan University Specific Fund for Major School-level Internationalization Initiatives, and LIESMARS Special Research Funding.
Abstract: Precise classification of Light Detection and Ranging (LiDAR) point clouds is a fundamental process in various applications, such as land cover mapping, forestry management, and autonomous driving. Due to the lack of spectral information, existing research on single-wavelength LiDAR classification is limited. Spectral information from images could address this limitation, but data fusion suffers from varying illumination conditions and registration problems. A novel multispectral LiDAR successfully obtains spatial and spectral information as a brand-new data type, namely the multispectral point cloud, thereby improving classification performance. However, the spatial and spectral information of multispectral LiDAR has been processed separately in previous studies, possibly limiting the classification performance. To explore the potential of this new data type, we propose a spatial-spectral classification framework for multispectral LiDAR that includes four steps: (1) neighborhood selection, (2) feature extraction and selection, (3) classification, and (4) label smoothing. The framework offers three novel highlights. (1) We improve the popular eigen-entropy-based neighborhood selection with spectral angle matching to extract a more precise neighborhood. (2) We evaluate the importance of geometric and spectral features to compare their contributions and select the most important features to reduce feature redundancy. (3) We conduct spatial label smoothing with a conditional random field, accounting for the spatial and spectral information of the neighborhood points. The proposed method was demonstrated on a multispectral LiDAR with three channels: 466 nm (blue), 527 nm (green), and 628 nm (red). Experimental results demonstrate the effectiveness of the proposed spatial-spectral classification framework. Moreover, this research takes advantage of the complementarity of spatial and spectral information, which could enable more precise neighborhood selection, more effective features, and satisfactory refinement of classification results. Finally, this study could serve as an inspiration for future efficient spatial-spectral processing of multispectral point clouds.
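The spectral angle match used in highlight (1) compares the per-channel spectra of two points by the angle between their reflectance vectors, which is insensitive to overall intensity scaling; a minimal sketch (function name illustrative):

```python
import math

def spectral_angle(a, b):
    """Spectral angle (radians) between two multi-channel reflectance
    vectors; 0 means identical spectral shape regardless of brightness."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    # clamp against floating-point drift before acos
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))
```

In the framework above, candidate neighbours whose spectral angle to the centre point exceeds a threshold would be excluded, so a neighborhood stays on one material even across a geometric boundary.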
Funding: Supported by the National Key R&D Program of China (No. 2020YFB1708002) and the National Natural Science Foundation of China (Grant Nos. 62371009 and 61971008).
Abstract: Data augmentation is a widely used regularization strategy in deep neural networks to mitigate overfitting and enhance generalization. In the context of point cloud data, mixing two samples to generate new training examples has proven effective. In this paper, we propose a novel and effective approach called Farthest Point Sampling Mix (FPSMix) for augmenting point cloud data. Our method leverages farthest point sampling, a technique used in point cloud processing, to generate new samples by mixing points from two original point clouds. Another key innovation of our approach is a significance-based loss function, which assigns weights to the soft labels of a mixed sample based on the classification loss of each part of the new sample separated from the two original point clouds. In this way, our method takes into account the importance of different parts of the mixed sample during training, allowing the model to learn better global features. Experimental results demonstrate that FPSMix, combined with the significance-based loss function, improves the classification accuracy of point cloud models and achieves performance comparable with state-of-the-art data augmentation methods. Moreover, our approach is complementary to techniques that focus on local features, and their combined use further enhances the classification accuracy of the baseline model.
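Farthest point sampling, which FPSMix builds on, greedily picks the point farthest from the points chosen so far; a generic sketch of the sampler itself (the mixing of two clouds and the significance-based loss are not shown):

```python
def farthest_point_sampling(points, m, start=0):
    """Greedy farthest point sampling: repeatedly select the point whose
    distance to the already-selected set is largest.  Returns the indices
    of m sampled points."""
    chosen = [start]
    # squared distance from every point to the nearest chosen point
    dist = [sum((a - b) ** 2 for a, b in zip(p, points[start])) for p in points]
    while len(chosen) < m:
        nxt = max(range(len(points)), key=lambda i: dist[i])
        chosen.append(nxt)
        for i, p in enumerate(points):
            d = sum((a - b) ** 2 for a, b in zip(p, points[nxt]))
            if d < dist[i]:
                dist[i] = d
    return chosen
```

Because each pick maximizes the minimum distance to the selected set, the sampled subset spreads evenly over the shape, which is why mixing FPS subsets of two clouds preserves the global structure of both.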
Abstract: Nowadays, there has been a growing trend in the field of high-energy physics (HEP), in both experimental and phenomenological studies, to incorporate machine learning (ML) and its specialized branch, deep learning (DL). This review provides a thorough illustration of these applications using different ML and DL approaches. The first part of the paper examines the basics of various particle physics topics and establishes guidelines for assessing particle physics problems alongside the available learning models. Next, a detailed classification is provided for representing jets that are reconstructed in high-energy collisions, mainly proton-proton collisions at well-defined beam energies. This section covers various datasets, preprocessing techniques, and feature extraction and selection methods. The presented techniques can be applied to future hadron-hadron colliders (HHC), such as the high-luminosity LHC (HL-LHC) and the future circular collider hadron-hadron (FCC-hh). The authors then explore several AI techniques designed specifically for both image and point-cloud (PC) data in HEP. Additionally, a closer look is taken at the classification task associated with jet tagging in hadron collisions. Various state-of-the-art (SOTA) techniques in ML and DL are examined, with a focus on their implications for HEP demands. More precisely, the discussion addresses applications in extensive detail, such as jet tagging, jet tracking, and particle classification. The review concludes with an analysis of the current state of HEP using DL methodologies, highlighting the challenges and potential areas for future research, which are illustrated for each application.