Funding: Supported by the National Key R&D Program of China (No. 2023YFC3081200), the National Natural Science Foundation of China (No. 42077264), and the Scientific Research Project of PowerChina Huadong Engineering Corporation Limited (HDEC-2022-0301).
Abstract: Rock discontinuities control rock mechanical behavior and significantly influence the stability of rock masses. However, existing discontinuity mapping algorithms are susceptible to noise, and their results cannot be fed back to users in a timely manner. To address this issue, we propose a human-machine interaction (HMI) method for discontinuity mapping in which users help the algorithm identify noise and make real-time judgments of the results and adjustments of the parameters. A regular cube was first used to illustrate the workflow: (1) a point cloud was acquired using remote sensing; (2) the HMI method was employed to select reference points and angle thresholds to detect group discontinuities; (3) individual discontinuities were extracted from each group using a density-based clustering algorithm; and (4) the orientation of each discontinuity was measured with a plane-fitting algorithm. The method was then applied to a well-studied highway road cut and a complex natural slope. The consistency of the computed results with field measurements demonstrates its good accuracy: the average errors in dip direction and dip angle for both cases were less than 3°. Finally, the computational time of the proposed method was compared with that of two other popular algorithms; a reduction by a factor of several tens confirms its high computational efficiency. The method offers geologists and geological engineers a new way to map rock structures rapidly and accurately in the presence of heavy noise or unclear features.
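The abstract does not give the plane-fitting details; the sketch below shows one standard way to derive dip direction and dip angle from a least-squares plane fitted to an extracted discontinuity's points, assuming an east-north-up coordinate frame. The function name and the SVD-based fit are illustrative, not the authors' implementation.

```python
import numpy as np

def plane_orientation(points):
    """Fit a plane to an N x 3 array of discontinuity points (x=East,
    y=North, z=Up) and return (dip_direction, dip_angle) in degrees."""
    centered = points - points.mean(axis=0)
    # The plane normal is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    if normal[2] < 0:                     # orient the normal upward
        normal = -normal
    nx, ny, nz = normal
    dip_angle = np.degrees(np.arccos(np.clip(nz, -1.0, 1.0)))
    dip_direction = np.degrees(np.arctan2(nx, ny)) % 360.0
    return dip_direction, dip_angle
```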
Funding: Supported by the National Natural Science Foundation of China (Grant No. 42407232) and the Sichuan Science and Technology Program (Grant No. 2024NSFSC0826).
Abstract: Recognizing discontinuities within rock masses is a critical aspect of rock engineering. The development of remote sensing technologies has significantly enhanced the quality and quantity of the point clouds collected from rock outcrops. In response, we propose a workflow that balances accuracy and efficiency when extracting discontinuities from massive point clouds. The proposed method employs voxel filtering to downsample the point clouds, constructs the point cloud topology using k-d trees, calculates the point normals with principal component analysis, and applies a pointwise clustering (PWC) algorithm to extract discontinuities from rock outcrop point clouds. The method provides the location and orientation (dip direction and dip angle) of each discontinuity, and a modified whale optimization algorithm (MWOA) is used to identify the major discontinuity sets and their average orientations. Performance evaluations on three real cases demonstrate that the proposed method significantly reduces computational cost without sacrificing accuracy. In particular, it yields more reasonable extraction results for discontinuities with a certain degree of undulation. The presented approach offers a novel tool for efficiently extracting discontinuities from large-scale point clouds.
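As an illustration of the preprocessing steps named in the abstract (voxel filtering, k-d tree topology, PCA-based normals), a minimal Open3D sketch is shown below; the file name, voxel size, and neighbor count are placeholder choices, and the PWC and MWOA stages are not reproduced here.

```python
import numpy as np
import open3d as o3d

# Load an outcrop point cloud (the path is a placeholder).
pcd = o3d.io.read_point_cloud("outcrop.ply")

# 1. Voxel filtering: keep one representative point per 5 cm voxel.
down = pcd.voxel_down_sample(voxel_size=0.05)

# 2. Point cloud topology via a k-d tree (used for neighborhood queries).
kdtree = o3d.geometry.KDTreeFlann(down)

# 3. Normals from PCA over the k nearest neighbors of each point.
down.estimate_normals(o3d.geometry.KDTreeSearchParamKNN(knn=30))
down.orient_normals_consistent_tangent_plane(k=30)

normals = np.asarray(down.normals)   # input to the subsequent clustering step
```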
Funding: Supported by the National Key Research and Development Program of China (Grant Nos. 2023YFC2907400 and 2021YFC2900500) and the National Natural Science Foundation of China (Grant No. 52074020).
Abstract: Mapping and analyzing rock mass discontinuities from 3D (three-dimensional) point clouds (3DPC) is one of the most important tasks in engineering geomechanical surveys. To analyze the distribution of discontinuities efficiently, a self-developed MATLAB code termed the cloud-group-cluster (CGC) method for mapping and detecting discontinuities from 3DPC is introduced. Discontinuity groups are identified and optimized using three key parameters, i.e. K, θ, and f, and a sensitivity analysis approach for selecting the optimal values of these parameters is presented. The results show that a comprehensive analysis of the main discontinuity groups, their mean orientations, and their densities can be performed automatically. The accuracy of the CGC method was validated using tetrahedral and hexahedral models. The point cloud data are analyzed at three levels (point cloud, group, and cluster), and this three-level recognition was applied to natural rock surfaces, where the densities and spacing of the principal discontinuities were detected automatically. Five engineering case studies validate the CGC method and show its applicability for detecting rock discontinuities from 3DPC models.
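The abstract does not define the parameters K, θ, and f; the sketch below illustrates one plausible reading of the grouping step, in which a point joins a discontinuity group when its normal lies within an angular threshold θ of the group's seed normal. The greedy scheme and function name are assumptions for illustration only.

```python
import numpy as np

def group_by_normal(normals, theta_deg=15.0):
    """Greedy grouping of unit normals: a point joins the first group whose
    seed normal is within theta_deg of its own normal; otherwise it seeds a
    new group. Returns an array of group labels."""
    cos_t = np.cos(np.radians(theta_deg))
    seeds, labels = [], np.empty(len(normals), dtype=int)
    for i, n in enumerate(normals):
        # |dot| treats antiparallel normals (same plane, flipped normal) as equal.
        sims = [abs(np.dot(n, s)) for s in seeds]
        if sims and max(sims) >= cos_t:
            labels[i] = int(np.argmax(sims))
        else:
            labels[i] = len(seeds)
            seeds.append(n / np.linalg.norm(n))
    return labels
```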
Abstract: To address the current issues of inaccurate segmentation and the limited applicability of segmentation methods for building facades in point clouds, we propose a facade segmentation algorithm based on optimal dual-scale feature descriptors. First, we select the optimal dual-scale descriptors from a range of feature descriptors. Next, we segment the facade according to the threshold value of the chosen optimal dual-scale descriptors. Finally, we use RANSAC (Random Sample Consensus) to fit the segmented surface and optimize the fitting result. Experimental results show that, compared to commonly used facade segmentation algorithms, the proposed method yields more accurate segmentation results, providing a robust data foundation for subsequent 3D model reconstruction of buildings.
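To illustrate the RANSAC fitting step mentioned in the abstract (not the dual-scale descriptor selection itself), a minimal sketch using Open3D's built-in plane segmentation is given below; the file name and thresholds are placeholders.

```python
import open3d as o3d

# Segmented facade points from the descriptor-threshold step (placeholder file).
facade = o3d.io.read_point_cloud("facade_segment.ply")

# RANSAC plane fit: points within 2 cm of a candidate plane count as inliers.
plane, inlier_idx = facade.segment_plane(distance_threshold=0.02,
                                         ransac_n=3,
                                         num_iterations=1000)
a, b, c, d = plane                       # plane equation ax + by + cz + d = 0
inliers = facade.select_by_index(inlier_idx)
print(f"plane: {a:.3f}x + {b:.3f}y + {c:.3f}z + {d:.3f} = 0, "
      f"{len(inlier_idx)} inlier points")
```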
Funding: Supported by the National Natural Science Foundation of China (62173103) and the Fundamental Research Funds for the Central Universities of China (3072022JC0402, 3072022JC0403).
Abstract: For the first time, this article introduces a LiDAR Point Clouds Dataset of Ships composed of both collected and simulated data, addressing the scarcity of LiDAR data in maritime applications. The collected data are acquired using specialized maritime LiDAR sensors in both inland waterways and wide-open ocean environments. The simulated data are generated by placing a ship in the LiDAR coordinate system and scanning it with a redeveloped Blensor that emulates the operation of a LiDAR sensor equipped with various laser beams. Furthermore, we also render point clouds for foggy and rainy weather conditions. To describe a realistic shipping environment, a dynamic tail wave is modeled by iterating the wave elevation of each point over a time series. Finally, networks designed for small objects are migrated to ship applications by training them on our dataset. The positive effect of the simulated data is demonstrated in object detection experiments, and the negative impact of tail waves as noise is verified in single-object tracking experiments. The dataset is available at https://github.com/zqy411470859/ship_dataset.
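The abstract does not specify the wave model; the toy sketch below shows one way a dynamic tail wave could be imposed on water points by iterating a sum of sinusoidal elevations over time. The component amplitudes, wavenumbers, and travel direction are assumed values.

```python
import numpy as np

def tail_wave_elevation(xy, t, components=((0.15, 1.2, 0.0), (0.05, 3.1, 0.8))):
    """Toy tail-wave model: elevation at horizontal positions xy (N x 2) and
    time t, as a sum of sinusoidal components (amplitude, wavenumber, phase).
    The deep-water dispersion relation omega = sqrt(g * k) is assumed."""
    g = 9.81
    z = np.zeros(len(xy))
    direction = np.array([1.0, 0.0])          # waves travel along +x (assumed)
    for amp, k, phase in components:
        omega = np.sqrt(g * k)
        z += amp * np.sin(k * xy @ direction - omega * t + phase)
    return z

# Usage idea: shift the z of water points frame by frame, e.g.
# points[:, 2] += tail_wave_elevation(points[:, :2], t=0.1)
```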
Abstract: LiDAR devices are capable of acquiring clouds of 3D points reflecting any object around them, and of adding attributes to each point such as color, position, time, etc. LiDAR datasets are usually large, and compressed data formats (e.g. LAZ) have been proposed over the years. These formats can transparently decompress portions of the data, but they are not designed to answer general queries over the data. In contrast to that traditional approach, a recent research line focuses on designing data structures that combine compression and indexation, allowing the compressed data to be queried directly. Compression is used to keep the data structure in main memory at all times, thus avoiding disk accesses, and indexation is used to query the compressed data as fast as the uncompressed data. In this paper, we present the first data structure capable of losslessly compressing point clouds that have attributes and of jointly indexing all three dimensions of space together with the attribute values. Our method is able to run range queries and attribute queries up to 100 times faster than previous methods.
Funding: The Center for Research-based Innovation SmartForest: Bringing Industry 4.0 to the Norwegian forest sector (NFR SFI project no. 309671, smartforest.no).
Abstract: Mapping individual tree quality parameters from high-density LiDAR point clouds is an important step towards improved forest inventories. We present a novel machine learning-based workflow that uses individual tree point clouds from drone laser scanning to predict wood quality indicators in standing trees. Unlike object reconstruction methods, our approach is based on simple metrics computed on vertical slices that summarize information on point distances, angles, and geometric attributes of the space between and around the points. Our models use these slice metrics as predictors and achieve high accuracy for predicting the diameter of the largest branch per log (DLBs) and stem diameter at different heights (DS) from survey-grade drone laser scans. We show that our models remain robust and accurate when tested on suboptimal versions of the data generated by reducing the number of points or emulating suboptimal single-tree segmentation scenarios. Our approach provides a simple, clear, and scalable solution that can be adapted to different situations for both research and more operational mapping.
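The exact slice metrics are not listed in the abstract; the sketch below only illustrates the idea of cutting a single-tree point cloud into vertical slices and summarizing each slice with simple geometric statistics. The slice height and the chosen summaries are illustrative assumptions.

```python
import numpy as np

def slice_metrics(points, slice_height=0.5):
    """Cut a single-tree point cloud (N x 3, z = height) into vertical slices
    and compute a few simple per-slice summaries (illustrative only)."""
    z = points[:, 2]
    edges = np.arange(z.min(), z.max() + slice_height, slice_height)
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        s = points[(z >= lo) & (z < hi)]
        if len(s) < 5:
            continue
        center = s[:, :2].mean(axis=0)
        radii = np.linalg.norm(s[:, :2] - center, axis=1)
        rows.append({
            "z_mid": 0.5 * (lo + hi),
            "n_points": len(s),
            "mean_radius": radii.mean(),      # proxy for local stem/crown extent
            "radius_p95": np.percentile(radii, 95),
        })
    return rows
```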
Funding: Supported by the Jiangsu Transportation Science and Technology Project under Grant 2020Y191(1) and the Postgraduate Research & Practice Innovation Program of Jiangsu Province under Grant KYCX23_0294.
Abstract: The increasing development of accurate and efficient road three-dimensional (3D) modeling presents great opportunities to improve the data exchange and integration of building information modeling (BIM) models. 3D modeling of road scenes is crucial as a reference for asset management, construction, and maintenance. Light detection and ranging (LiDAR) technology is increasingly employed to generate high-quality point clouds for road inventory. In this paper, we specifically investigate the use of LiDAR data for road 3D modeling. The purpose of this review is to provide a reference on existing work on road 3D modeling based on LiDAR point clouds, to discuss that work critically, and to identify challenges for further study. In addition, we introduce modeling standards for roads and discuss the components, types, and distinctions of various LiDAR measurement systems. We then review state-of-the-art methods and provide a detailed examination of road segmentation and feature extraction. Furthermore, we systematically introduce point cloud-based 3D modeling methods, namely parametric modeling and surface reconstruction: parameters and rules define model components based on geometric and non-geometric information, whereas surface modeling is conducted through the individual faces of the geometry. Finally, we discuss and summarize future research directions in this field. This review can assist researchers in enhancing existing approaches and developing new techniques for road modeling based on LiDAR point clouds.
Funding: Supported by the National Innovation Research Group Science Fund (No. 41521002) and the National Key Research and Development Program of China (No. 2018YFC1505202).
Abstract: This paper introduces the use of point cloud processing for extracting 3D rock structure and the 3DEC-based reconstruction of slope failure, based on a case study of the 2019 Pinglu rockfall. The basic processing procedure involves: (1) computing the point normals for HSV rendering of the point cloud; (2) automatically clustering the discontinuity sets; (3) extracting the set-based point clouds; (4) estimating the set-based mean orientation, spacing, and persistence; and (5) identifying the block-forming arrays of discontinuity sets for the assessment of stability. The effectiveness of our rock structure processing has been demonstrated by 3D distinct element back analysis. The results show that SfM modelling and rock structure computing provide enormous cost, time, and safety benefits in standard engineering practice.
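For step (1), one common way to HSV-render point normals is to map each normal's trend to hue and its plunge to saturation; the sketch below follows that convention, which is an assumption since the paper's exact mapping is not stated.

```python
import numpy as np
from matplotlib.colors import hsv_to_rgb

def normals_to_hsv_colors(normals):
    """Color each point by its normal: hue from the normal's azimuth (trend),
    saturation from its plunge, full value. Returns an N x 3 RGB array."""
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    n[n[:, 2] < 0] *= -1                            # orient normals upward
    azimuth = (np.degrees(np.arctan2(n[:, 0], n[:, 1])) % 360.0) / 360.0
    plunge = np.degrees(np.arccos(np.clip(n[:, 2], -1, 1))) / 90.0
    hsv = np.stack([azimuth, plunge, np.ones(len(n))], axis=1)
    return hsv_to_rgb(hsv)

# e.g. with Open3D:
# pcd.colors = o3d.utility.Vector3dVector(normals_to_hsv_colors(np.asarray(pcd.normals)))
```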
Funding: Supported by the Special Fund for Basic Research on Scientific Instruments of the National Natural Science Foundation of China (Grant No. 4182780021), the Emeishan-Hanyuan Highway Program, and the Taihang Mountain Highway Program.
Abstract: This paper presents an automated method for discontinuity trace mapping using three-dimensional point clouds of rock mass surfaces. Specifically, the method consists of five steps: (1) detection of trace feature points by normal tensor voting theory, (2) contraction of trace feature points, (3) connection of trace feature points, (4) linearization of trace segments, and (5) connection of trace segments. A sensitivity analysis was then conducted to identify the optimal parameters of the proposed method. Three field cases, a natural rock mass outcrop and two excavated rock tunnel surfaces, were analyzed using the proposed method to evaluate its validity and efficiency. The results show that the proposed method is more efficient and accurate than the traditional trace mapping method, and that the efficiency gain becomes more pronounced as the number of feature points increases.
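Normal tensor voting is too involved for a short sketch; as a simplified stand-in, the code below computes a covariance-based surface-variation score per point, which is a common way to flag candidate trace feature points. The neighborhood size and the threshold suggested in the comment are assumptions, not the paper's parameters.

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_variation(points, k=30):
    """Per-point surface variation lambda0 / (lambda0 + lambda1 + lambda2)
    from the covariance of the k nearest neighbors; high values indicate
    sharp features (candidate trace feature points)."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    sv = np.empty(len(points))
    for i, nb in enumerate(idx):
        nbrs = points[nb] - points[nb].mean(axis=0)
        eig = np.linalg.eigvalsh(nbrs.T @ nbrs / k)   # ascending eigenvalues
        sv[i] = eig[0] / max(eig.sum(), 1e-12)
    return sv

# Candidate trace points, e.g.: mask = surface_variation(points) > 0.05
```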
Funding: The National 863 Program of China (No. SQ2006AA12Z108506).
Abstract: A novel filtering algorithm for Lidar point clouds is presented, which works well for complex cityscapes. Its main feature is that it filters the raw Lidar point cloud directly, without prior triangulation or rasterization. 3D topological relations among points are used to search for edge points at the top of discontinuities, which provide key information for recognizing bare-earth points and building points. Experimental results show that the proposed algorithm preserves discontinuous features in the bare earth and is unaffected by the size and shape of buildings.
Funding: Funded by the Natural Science Foundation Committee, China (41364001, 41371435).
Abstract: The degree of spatial similarity plays an important role in map generalization, yet there has been no quantitative research into it. To fill this gap, this study first defines map scale change and the spatial similarity degree/relation in multi-scale map spaces, and then proposes a model for calculating the degree of spatial similarity between a point cloud at one scale and its generalized counterpart at another scale. After validation, the new model yields 16 data points with map scale change as the x coordinate and the degree of spatial similarity as the y coordinate. Finally, by curve fitting, the model yields an empirical formula that can calculate the degree of spatial similarity using map scale change as the sole independent variable, and vice versa. This formula can be used to automate algorithms for point feature generalization and to determine when to terminate them during generalization.
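A minimal sketch of the curve-fitting step is shown below using SciPy; the functional form and the (x, y) values are hypothetical, since the abstract gives neither the 16 data points nor the fitted formula.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical (x, y) pairs: x = map scale change, y = spatial similarity degree.
x = np.array([1, 2, 4, 8, 16, 32, 64, 128], dtype=float)
y = np.array([1.0, 0.82, 0.66, 0.52, 0.40, 0.31, 0.24, 0.18])

def model(x, a, b):
    """Assumed empirical form: similarity decays as a power of scale change."""
    return a * x ** (-b)

params, _ = curve_fit(model, x, y, p0=(1.0, 0.3))
a, b = params
print(f"similarity ≈ {a:.3f} * (scale change)^(-{b:.3f})")
# The inverse gives scale change from a target similarity: x = (a / y) ** (1 / b)
```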
Funding: This work was partially supported by JSPS KAKENHI [grant number 26420073].
Abstract: Recent advances in 3D scanning technologies allow us to acquire accurate and dense 3D scan data of large-scale environments efficiently. Currently, there are various methods for acquiring large-scale 3D scan data, such as Mobile Laser Scanning (MLS), Airborne Laser Scanning, Terrestrial Laser Scanning, photogrammetry, and Structure from Motion (SfM). In particular, MLS is useful for acquiring dense point clouds of roads and road-side objects, and SfM is a powerful technique for reconstructing textured meshes from a set of digital images. In this research, a registration method for point clouds from vehicle-based MLS (MLS point cloud) and textured meshes from the SfM of aerial photographs (SfM mesh) is proposed for creating high-quality surface models of urban areas by combining them. In general, an SfM mesh carries no scale information; therefore, the scale, position, and orientation of the SfM mesh are adjusted in the registration process. In our method, 2D feature points are first extracted from both the SfM mesh and the MLS point cloud. This step consists of ground- and building-plane extraction by region growing, random sample consensus, and least-squares fitting; vertical edge extraction by detecting intersections between the planes; and feature point extraction by intersection tests between the ground plane and the edges. Then, the corresponding feature points between the MLS point cloud and the SfM mesh are searched efficiently, using similarity-invariant features and hashing. Next, a coordinate transformation is applied to the SfM mesh so that the ground planes and corresponding feature points are aligned. Finally, a scaling Iterative Closest Point (ICP) algorithm is applied for accurate registration. Experimental results for three datasets show that our method is effective for registering SfM meshes and MLS point clouds of urban areas including buildings.
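The scale, position, and orientation adjustment can be illustrated with Umeyama's closed-form similarity transform estimated from matched feature points; the sketch below shows that computation only, not the plane extraction, hashing, or scaling ICP stages of the paper.

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares similarity transform (scale s, rotation R, translation t)
    such that dst ≈ s * R @ src + t, estimated from matched N x 3 point sets
    with Umeyama's method."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1                          # enforce a proper rotation
    R = U @ S @ Vt
    var_src = src_c.var(axis=0).sum()         # mean squared deviation of src
    s = np.trace(np.diag(D) @ S) / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t

# Usage idea: aligned = s * (R @ sfm_points.T).T + t  # bring SfM points onto MLS
```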
Funding: Supported by the National Natural Science Foundation of China (Nos. u0935004 and 61173102) and the Fundamental Research Funds for the Central Universities (DUT11SX08).
Abstract: In this paper, we present a robust subneighborhood selection technique for feature detection on point clouds scattered over a piecewise smooth surface. The proposed method first identifies all potential features using covariance analysis of the local neighborhoods. To further extract accurate features from the potential features, Gabriel triangles are created in the local neighborhood of each potential feature vertex. These triangles attach tightly to the underlying surface and effectively reflect the local geometric structure. Applying a shared-nearest-neighbor clustering algorithm to the reconstructed normals of the created triangle set, we classify the local neighborhood of the potential feature vertex into multiple subneighborhoods, each indicating a piecewise smooth surface. The final feature vertex is identified by checking whether it lies on the intersection of the multiple surfaces. An advantage of this framework is that it is not only robust to noise but also insensitive to the size of the selected neighborhoods. Experimental results on a variety of models illustrate the effectiveness and robustness of our method.
Abstract: Landscape pattern metrics quantitatively describe the characteristics of landscape patterns and are widely used in various fields of landscape ecology. Due to the lack of vertical information, 2D landscape metrics cannot delineate the vertical characteristics of a landscape pattern. Based on point clouds, a high-resolution voxel model and several voxel-based 3D landscape metrics were constructed in this study, and the 3D metric results were compared with those of 2D metrics. The results showed that clear quantitative differences exist between 2D and 3D landscape metrics. For landscapes with different components and spatial configurations, significant differences were found between 2D and 3D landscape metrics. The 3D metrics reflect the real spatial structure characteristics of the landscape better than the 2D metrics.
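As a minimal illustration of the voxel-model idea (not the study's specific metrics), the sketch below voxelizes a point cloud and computes one simple voxel-based 3D quantity; the voxel size is an assumed value.

```python
import numpy as np

def voxelize(points, voxel_size=1.0):
    """Map points (N x 3) to integer voxel indices and return the set of
    occupied voxels as an M x 3 array."""
    idx = np.floor((points - points.min(axis=0)) / voxel_size).astype(int)
    return np.unique(idx, axis=0)

def occupied_volume(points, voxel_size=1.0):
    """A simple voxel-based 3D metric: total volume of occupied voxels."""
    return len(voxelize(points, voxel_size)) * voxel_size ** 3
```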
Funding: This work is supported by the National Natural Science Foundation of China [grant number 41771484].
Abstract: Building façades can feature different patterns depending on the architectural style, functionality, and size of the buildings; therefore, reconstructing these façades can be complicated. In particular, when semantic façades are reconstructed from point cloud data, uneven point density and noise make it difficult to accurately determine the façade structure. When investigating façade layouts, Gestalt principles can be applied to cluster visually similar floors and façade elements, allowing for a more intuitive interpretation of façade structures. We propose a novel model for describing façade structures, namely the layout graph model, which is a compound graph with two structure levels. In the proposed model, similar façade elements such as windows are first grouped into clusters. A down-layout graph is then formed using each cluster as a node and combining intra- and inter-cluster spacings as the edges. Second, a top-layout graph is formed by clustering similar floors. By extracting relevant parameters from this model, we transform semantic façade reconstruction into an optimization problem solved using simulated annealing coupled with Gibbs sampling. Multiple façade point clouds with different features were selected from three datasets to verify the effectiveness of this method. The experimental results show that the proposed method achieves an average accuracy of 86.35%. Owing to its flexibility, the proposed layout graph model can deal with different types of façades and qualities of point cloud data, enabling a more robust and accurate reconstruction of façade models.
Funding: Financial support from the National Natural Science Foundation of China (Grant Nos. 32171789 and 32211530031), Wuhan University (No. WHUZZJJ202220), and the Academy of Finland (Nos. 334060, 334829, 331708, 344755, 337656, 334830, 293389/314312, 319011).
Abstract: Forest is one of the most challenging environments to record in a three-dimensional (3D) digitized geometrical representation, because of the size and complexity of the environment and the data-acquisition constraints imposed by on-site conditions. Previous studies have indicated that the data-acquisition pattern can influence the registration results more than other factors. In practice, the ideal short-baseline observations, i.e. the dense collection mode, are rarely feasible, considering the low accessibility of forest environments and the commonly limited labor and time resources. Wide-baseline observations, which cover a forest site using several times fewer observations than short-baseline observations, are therefore preferable and commonly applied. Nevertheless, the wide-baseline approach is more challenging for data registration, since it typically lacks sufficient overlap between datasets. Until now, a robust automated registration solution that is independent of special hardware requirements has been missing: the registration accuracy is still far from the required level, and the information extractable from a point cloud merged by automated registration cannot match that from a point cloud merged by manual registration. This paper proposes a discrete overlap search (DOS) method that finds correspondences in the point clouds to solve the low-overlap problem of wide-baseline point clouds. The proposed automatic method uses potential correspondences from both the original data and selected feature points to reconstruct rough observation geometries without external knowledge and to retrieve precise registration parameters at the data level. An extensive experiment was carried out with 24 forest datasets of different conditions, categorized into three difficulty levels. The performance of the proposed method was evaluated using various accuracy criteria and on data acquired with different hardware, platforms, and viewing perspectives, and at different points in time. The proposed method achieved a 3D registration accuracy at the 0.50-cm level in all difficulty categories using static terrestrial acquisitions. In the terrestrial-aerial registration, datasets were collected with different sensors and at different points in time with scene changes, and a registration accuracy at the level of the raw data's geometric accuracy was achieved. These results represent the highest automated registration accuracy and the strictest evaluation so far. The proposed method is applicable in multiple scenarios, such as (1) the global positioning of individual under-canopy observations, which is one of the main challenges in applying terrestrial observations lacking a global context; (2) the fusion of point clouds acquired from terrestrial and aerial perspectives, which is required to achieve a complete forest observation; and (3) mobile mapping using a new stop-and-go approach, which solves the problems of limited mobility and slow data collection in static terrestrial measurements as well as the data-quality issues of the continuous mobile approach. Furthermore, this work proposes a new error estimate that unites all parameter-level errors into a single quantity and compensates for the downsides of the widely used parameter- and object-level error estimates; it also proposes a new deterministic point-set registration method as an alternative to the popular sampling methods.
Abstract: For a vision measurement system consisting of laser-CCD scanning sensors, an algorithm is proposed to extract and recognize the target object contour. First, the two-dimensional (2D) point cloud output by the integrated laser sensor is transformed into a binary image. Second, the potential target object contours are segmented and extracted based on connected-domain labeling and adaptive corner detection. Then, the target object contour is recognized using improved Hu invariant moments and a BP neural network classifier. Finally, we extract the point data of the target object contour through the reverse transformation from the binary image back to a 2D point cloud. The experimental results show that the average recognition rate is 98.5% and the average recognition time is 0.18 s per frame. The algorithm achieves real-time tracking of the target object against a complex background and in the presence of multiple moving objects.
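The Hu-moment step can be illustrated with OpenCV's standard (unimproved) Hu invariant moments computed on the largest contour of the binary image; the improved moments and the BP classifier of the paper are not reproduced here.

```python
import cv2
import numpy as np

def hu_features(binary_image):
    """Log-scaled Hu invariant moments of the largest contour in a binary
    image; these 7 values are translation/scale/rotation invariant and can
    feed a downstream classifier."""
    contours, _ = cv2.findContours(binary_image, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea)
    hu = cv2.HuMoments(cv2.moments(contour)).flatten()
    # The log transform compresses the large dynamic range of the raw moments.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
```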
Funding: The work was supported by the International Foundation for Science (Grant No. I-1-D-60661).
Abstract: Recent applications of digital photogrammetry in forestry have highlighted its utility as a viable mensuration technique. However, in tropical regions little research has been done on the accuracy of this approach for stem volume calculation. In this study, the performance of Structure from Motion photogrammetry for estimating individual tree stem volume was evaluated against traditional approaches. We selected 30 trees from five savanna species growing at the periphery of the W National Park in northern Benin and measured their circumferences at different heights using a traditional tape and clinometer. Stem volumes of the sample trees were estimated from the measured circumferences using nine volumetric formulae for solids of revolution, including the cylinder, cone, paraboloid, neiloid, and their respective frustums. Each tree was photographed, and stem volume was determined using a taper function derived from three-dimensional stem models. This reference volume was compared with the results of the formulaic estimations. Tree stem profiles were further decomposed into different portions, approximately corresponding to the stump, butt logs, and logs, and the suitability of each solid of revolution was assessed for simulating the resulting shapes. Stem volumes calculated using the frustum-of-paraboloid and frustum-of-neiloid formulae were the closest to the reference volumes, with a bias and root mean square error of 8.0% and 24.4%, respectively. Stems most closely resembled frustums of a paraboloid and a neiloid. Individual stem portions assumed different solids as follows: frustums of a paraboloid and a neiloid were more prevalent from the stump to breast height, while a paraboloid closely matched stem shapes beyond this point. Therefore, a more accurate stem volume estimate was attained when stems were treated as a composite of at least three geometric solids.
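The nine formulae are not listed in the abstract; the sketch below shows three standard frustum formulas (cone, paraboloid/Smalian, neiloid) computed from the base and top circumferences of a log section, assuming a circular cross-section.

```python
import math

def section_area(circumference):
    """Cross-sectional area from a circumference, assuming a circular stem."""
    return circumference ** 2 / (4.0 * math.pi)

def frustum_volumes(c_base, c_top, height):
    """Log-section volume under three solids of revolution, from the base and
    top circumferences (same length unit as height)."""
    ab, at = section_area(c_base), section_area(c_top)
    return {
        "cone frustum":       height / 3.0 * (ab + math.sqrt(ab * at) + at),
        "paraboloid frustum": height / 2.0 * (ab + at),               # Smalian
        "neiloid frustum":    height / 4.0 * (ab + (ab * ab * at) ** (1 / 3)
                                              + (ab * at * at) ** (1 / 3) + at),
    }

# Example: a 2 m section with 1.10 m base and 0.95 m top circumferences.
# print(frustum_volumes(1.10, 0.95, 2.0))
```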
Funding: This research was funded by the Hanoi University of Mining and Geology, Grant Number T22-47.
Abstract: Mining industrial areas with anthropogenic engineering structures are among the most distinctive features of the real world. 3D models of the real world have become increasingly popular, with numerous applications such as digital twins and smart factory management. In this study, 3D models of mining engineering structures were built based on the CityGML standard. For collecting spatial data, the two most popular geospatial technologies, namely UAV-SfM and TLS, were employed. The accuracy of the UAV survey was at the centimeter level and satisfied the absolute positional accuracy requirement for creating all levels of detail (LoD) according to the CityGML standard; therefore, the UAV-SfM point cloud dataset was used to build the LoD 2 models. In addition, a comparison between the UAV-SfM and TLS sub-clouds of facades and roofs indicates that the UAV-SfM and TLS point clouds of these objects are highly consistent; therefore, point clouds with a higher level of detail and accuracy, provided by the integration of UAV-SfM and TLS, were used to build the LoD 3 models. The resulting 3D CityGML models include 39 buildings at LoD 2 and two mine shafts with hoist rooms, headframes, and sheave wheels at LoD 3.