3D laser scanning technology is widely used in underground openings for high-precision, rapid, and nondestructive structural evaluations. Segmenting large 3D point cloud datasets, particularly in coal mine roadways with multi-scale targets, remains challenging. This paper proposes an enhanced segmentation method integrating an improved PointNet++ with a coverage-voted strategy. The coverage-voted strategy reduces data volume while preserving multi-scale target topology. The segmentation is achieved using an enhanced PointNet++ algorithm with a normalization preprocessing head, resulting in 94% accuracy for common supporting components. Ablation experiments show that the preprocessing head and coverage strategies increase segmentation accuracy by 20% and 2%, respectively, and improve the Intersection over Union (IoU) for bearing plate segmentation by 58% and 20%. The accuracy of the current pretrained segmentation model may be affected by variations in surface support components, but it can be readily improved through re-optimization with additional labeled point cloud data. The proposed method, combined with a previously developed machine learning model that links rock bolt load to the deformation field of its bearing plate, provides a robust technique for simultaneously measuring the loads of multiple rock bolts in a single laser scan.
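The abstract does not specify what the normalization preprocessing head computes; a common preprocessing choice for PointNet++-style networks is to center each input cloud and scale it into the unit sphere. A minimal sketch under that assumption (the function name and data are illustrative, not from the paper):

```python
import numpy as np

def normalize_point_cloud(points: np.ndarray) -> np.ndarray:
    """Center a point cloud at the origin and scale it into the unit sphere."""
    centered = points - points.mean(axis=0)
    scale = np.linalg.norm(centered, axis=1).max()
    return centered / scale

rng = np.random.default_rng(0)
cloud = rng.uniform(-5.0, 5.0, size=(1000, 3))   # synthetic roadway scan patch
norm_cloud = normalize_point_cloud(cloud)
```

Normalization of this kind makes the network invariant to the absolute position and extent of each scanned segment, which matters when supporting components appear at very different scales.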
To support the process of grasping objects on a tabletop for the blind or a robotic arm, it is necessary to address fundamental computer vision tasks, such as detecting, recognizing, and locating objects in space, and determining the grasping position. These results can then be used to guide the visually impaired or to execute grasping tasks with a robotic arm. In this paper, we collected, annotated, and published the benchmark TQUGraspingObject dataset for testing, validation, and evaluation of deep learning (DL) models for detecting, recognizing, and localizing grasping objects in 2D and 3D space, especially 3D point cloud data. Our dataset was collected in a shared room, with common everyday objects placed on the tabletop in jumbled positions, using an Intel RealSense D435 (IR-D435). The dataset includes more than 63k RGB-D pairs and related data such as the normalized 3D object point cloud, the segmented 3D object point cloud, the coordinate system normalization matrix, and the hand pose for grasping each object. We also conducted experiments on four DL networks with the best performance: SSD-MobileNetV3, ResNet50-Transformer, ResNet101-Transformer, and YOLOv12. The results show that YOLOv12 performs best at detecting and recognizing objects in images. All data, annotations, toolkit, source code, point cloud data, and results are publicly available on our project website: https://github.com/HuaTThanhIT2327Tqu/datasetv2.
Evaluating rock mass quality using three-dimensional (3D) point clouds is crucial for discontinuity extraction and is widely applied in various industrial sectors. However, the utilization of this method in geological surveys remains limited. Notable limitations of current research include the scarcity of validation using simple geometric shapes for discontinuity extraction methods, and the lack of studies that target both planar and linear discontinuities. To address these gaps, this study proposes a workflow for identifying discontinuity planes and traces in rock outcrops from photogrammetric 3D modeling, employing the Compass and Facets plugins in the open-source CloudCompare software. Prior to field application, the efficacy of the extraction methods was first evaluated using experimental datasets of a cube and an isosceles triangular prism generated under laboratory-controlled conditions. This validation demonstrated exceptional accuracy, with the dip and dip direction (DDD) of extracted structures consistently within ±2° of the actual values. Following this rigorous laboratory validation, the methodology was applied to a more complex natural rock outcrop (Miocene–Pliocene deposits in Japan), demonstrating its applicability in realistic geological settings. The results showed that the dip and dip direction trends of the extracted bedding planes and faults were consistent with field measurements, achieving a time reduction of approximately 40% compared to traditional methods. In conclusion, through strictly controlled initial verification and subsequent successful application to a complex natural setting, this study confirmed that the proposed workflow can effectively and efficiently extract discontinuous geological structures from point clouds.
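The dip and dip direction values validated above are derived from the normals of fitted planes. As a self-contained illustration of that conversion (standard structural-geology convention, not the Compass/Facets internals), assuming an east-north-up coordinate frame:

```python
import numpy as np

def dip_and_dip_direction(normal: np.ndarray) -> tuple[float, float]:
    """Convert a plane normal (x=east, y=north, z=up) into
    dip angle and dip direction, both in degrees."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    if n[2] < 0:                     # force the normal to point upward
        n = -n
    dip = np.degrees(np.arccos(n[2]))                 # angle from horizontal
    dip_dir = np.degrees(np.arctan2(n[0], n[1])) % 360.0  # azimuth of steepest descent
    return float(dip), float(dip_dir)

# plane dipping 45 degrees toward the east (azimuth 090)
dip, ddir = dip_and_dip_direction(
    np.array([np.sin(np.radians(45)), 0.0, np.cos(np.radians(45))]))
```

For a horizontal plane the dip direction is undefined; this sketch returns 0° in that degenerate case.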
Recently, large-scale deep learning models have been increasingly adopted for point cloud classification. However, these methods typically require collecting extensive datasets from multiple clients, which may lead to privacy leaks. Federated learning provides an effective solution to data leakage by eliminating the need for data transmission, relying instead on the exchange of model parameters. However, the uneven distribution of client data can still affect the model's ability to generalize effectively. To address these challenges, we propose a new framework for point cloud classification called the Federated Dynamic Aggregation Selection Strategy-based Multi-Receptive Field Fusion Classification Framework (FDASS-MRFCF). Specifically, we tackle these challenges with two key innovations: (1) during the client local training phase, we propose a Multi-Receptive Field Fusion Classification Model (MRFCM), which captures local and global structures in point cloud data through dynamic convolution and multi-scale feature fusion, enhancing the robustness of point cloud classification; (2) in the server aggregation phase, we introduce a Federated Dynamic Aggregation Selection Strategy (FDASS), which employs a hybrid strategy to average client model parameters, skip aggregation, or reallocate local models to different clients, thereby balancing global consistency and local diversity. We evaluate our framework on the ModelNet40 and ShapeNetPart benchmarks, demonstrating its effectiveness. The proposed method is expected to significantly advance point cloud classification in privacy-sensitive environments.
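FDASS decides between averaging, skipping, and reallocation; its averaging branch reduces to the standard FedAvg weighted mean of client parameters. A baseline sketch of that step only (the selection logic and the MRFCM model are not reproduced here):

```python
import numpy as np

def fedavg(client_params: list[dict[str, np.ndarray]],
           client_sizes: list[int]) -> dict[str, np.ndarray]:
    """Size-weighted average of per-client parameter dicts (plain FedAvg)."""
    total = sum(client_sizes)
    weights = [s / total for s in client_sizes]
    return {k: sum(w * p[k] for w, p in zip(weights, client_params))
            for k in client_params[0]}

# two clients with a single shared weight vector; client 2 holds 3x the data
clients = [{"w": np.array([1.0, 2.0])}, {"w": np.array([3.0, 4.0])}]
global_params = fedavg(clients, [1, 3])   # -> 0.25*[1,2] + 0.75*[3,4]
```

Only these parameter dicts cross the network, which is what keeps the raw point clouds on the clients.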
The virtual preassembly of super-high steel bridge towers faces a challenge in the efficient and precise extraction of complex cross-sectional features. Factors such as fabrication errors, gravity-induced deformations, and temperature fluctuations can compromise the accuracy of contour extraction. To address these limitations, an improved Alpha-shape-based point cloud contour extraction method is proposed. The proposed approach uses a hierarchical strategy to process three-dimensional laser scanning point clouds. The processed data are then subjected to curvature-adaptive voxel filtering to reduce acquisition noise. In addition, an enhanced iterative closest point (ICP) variant with correspondence validation accurately aligns the discrete point cloud segments. The proposed curvature-responsive Alpha-shape framework enables multiscale contour delineation through topology-adaptive threshold modulation, which resolves boundary ambiguities in geometrically complex cross-sections. The method was experimentally validated using field-acquired measurement datasets from the Zhangjinggao Yangtze River Bridge tower segments, confirming its capability to reconstruct noncanonical cross-sectional geometries. Three contour extraction methods, including Poisson reconstruction, the conventional Alpha-shape algorithm, and random sample consensus with ICP (RANSAC-ICP), were compared to evaluate the performance of the proposed Alpha-shape algorithm. The results demonstrate that the proposed method achieves superior contour extraction accuracy and data reduction efficiency, highlighting its effectiveness in contour extraction tasks.
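The curvature-adaptive voxel filter presumably varies voxel size with local curvature; its underlying operation is a voxel-grid centroid filter. A fixed-size sketch of that base operation (voxel size and data are illustrative, not the paper's adaptive scheme):

```python
import numpy as np

def voxel_downsample(points: np.ndarray, voxel: float) -> np.ndarray:
    """Replace all points falling in the same voxel by their centroid."""
    keys = np.floor(points / voxel).astype(np.int64)          # voxel index per point
    _, inv = np.unique(keys, axis=0, return_inverse=True)     # voxel id per point
    n_voxels = inv.max() + 1
    sums = np.zeros((n_voxels, 3))
    counts = np.zeros(n_voxels)
    np.add.at(sums, inv, points)      # accumulate coordinates per voxel
    np.add.at(counts, inv, 1)         # count points per voxel
    return sums / counts[:, None]

rng = np.random.default_rng(1)
cloud = rng.uniform(0.0, 1.0, size=(5000, 3))
reduced = voxel_downsample(cloud, voxel=0.2)   # at most 5x5x5 = 125 centroids
```

A curvature-adaptive variant would shrink the voxel in high-curvature regions (edges, welds) and enlarge it on flat plates, trading noise suppression against feature preservation.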
Large-scale point cloud datasets form the basis for training various deep learning networks and achieving high-quality network processing tasks. Due to the diversity and robustness constraints of the data, data augmentation (DA) methods are utilised to expand dataset diversity and scale. However, because LiDAR point cloud data from different platforms (such as missile-borne and vehicular LiDAR) have complex and distinct characteristics, directly applying traditional 2D visual-domain DA methods to 3D data can yield networks that do not robustly achieve the corresponding tasks. To address this issue, the present study explores DA for missile-borne LiDAR point clouds using a Monte Carlo (MC) simulation method that closely resembles practical application. First, a model of the multi-sensor imaging system is established, taking into account the joint errors arising from the platform itself and the relative motion during the imaging process. A distortion simulation method based on MC simulation for augmenting missile-borne LiDAR point cloud data is then proposed, underpinned by an analysis of combined errors between different modal sensors, achieving high-quality augmentation of point cloud data. The effectiveness of the proposed method in addressing imaging system errors and distortion simulation is validated using the imaging scene dataset constructed in this paper. Comparative experiments on point cloud detection and single-object tracking tasks demonstrate that the proposed DA method improves the performance of networks trained on unaugmented datasets by over 17.3% and 17.9%, respectively, surpassing the state-of-the-art (SOTA) performance of current point cloud DA algorithms.
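The paper's MC simulation draws error realizations from a joint platform/motion error model; as a greatly simplified stand-in, one can jitter each point with Gaussian range noise and apply a small random attitude rotation per realization. The error model and magnitudes below are illustrative placeholders, not the paper's:

```python
import numpy as np

def mc_augment(points: np.ndarray, n_samples: int,
               sigma_range: float = 0.02, sigma_yaw_deg: float = 0.5,
               seed: int = 0) -> list[np.ndarray]:
    """Monte Carlo realizations of a point cloud under a toy sensor-error
    model: per-point range noise plus a random yaw error of the platform."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_samples):
        a = np.radians(rng.normal(0.0, sigma_yaw_deg))   # yaw attitude error
        R = np.array([[np.cos(a), -np.sin(a), 0.0],
                      [np.sin(a),  np.cos(a), 0.0],
                      [0.0, 0.0, 1.0]])
        jitter = rng.normal(0.0, sigma_range, size=points.shape)
        out.append(points @ R.T + jitter)
    return out

cloud = np.zeros((100, 3))                    # placeholder target cloud
samples = mc_augment(cloud, n_samples=5)      # five distorted realizations
```

Each draw becomes one augmented training sample, so the network sees the spread of plausible imaging-error distortions rather than a single idealized scan.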
The spatial distribution of discontinuities and the size of rock blocks are key indicators for rock mass quality evaluation and rockfall risk assessment. Traditional manual measurement is often dangerous or infeasible on high-steep rock slopes. In contrast, unmanned aerial vehicle (UAV) photogrammetry is not limited by terrain conditions and can efficiently collect high-precision three-dimensional (3D) point clouds of rock masses through all-round, multi-angle photography for rock mass characterization. In this paper, a new method based on a 3D point cloud is proposed for discontinuity identification and refined rock block modeling. The method consists of four steps: (1) establish a point cloud spatial topology, and calculate the point cloud normal vectors and average point spacing based on several machine learning algorithms; (2) extract discontinuities using the density-based spatial clustering of applications with noise (DBSCAN) algorithm and fit the discontinuity plane by combining principal component analysis (PCA) with the natural breaks (NB) method; (3) insert points along line segments to generate an embedded discontinuity point cloud; and (4) adopt a Poisson reconstruction method for refined rock block modeling. The proposed method was applied to an outcrop of an ultrahigh steep rock slope and compared with the results of previous studies and manual surveys. The results show that the method can eliminate the influence of discontinuity undulations on orientation measurement and describe the local concave-convex characteristics in the modeling of rock blocks. The calculation results are accurate and reliable and can meet the practical requirements of engineering.
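Step (2) fits a plane to each discontinuity cluster via PCA: the plane normal is the covariance eigenvector with the smallest eigenvalue. A minimal sketch of that fitting step on a synthetic patch (the DBSCAN clustering and NB refinement are not reproduced):

```python
import numpy as np

def fit_plane_pca(points: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Fit a plane to a point set: the normal is the eigenvector of the
    covariance matrix associated with the smallest eigenvalue."""
    centroid = points.mean(axis=0)
    cov = np.cov((points - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    normal = eigvecs[:, 0]                   # smallest-eigenvalue direction
    return centroid, normal

# synthetic near-horizontal discontinuity patch with tiny surface roughness
rng = np.random.default_rng(2)
xy = rng.uniform(-1.0, 1.0, size=(200, 2))
z = 0.001 * rng.normal(size=200)
patch = np.column_stack([xy, z])
centroid, normal = fit_plane_pca(patch)
```

Fitting the best plane through the whole cluster is what averages out the discontinuity undulations mentioned in the results.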
Airborne LiDAR (Light Detection and Ranging) is an evolving active remote sensing technology capable of acquiring large-area topographic data and quickly generating DEM (Digital Elevation Model) products. Combined with image data, this technology can further enrich and extract spatial geographic information. In practice, however, due to the limited operating range of airborne LiDAR and the large area of a task, it is necessary to register and stitch the point clouds of adjacent flight strips. By eliminating gross errors, the systematic errors in the data can be effectively reduced. This paper therefore investigates point cloud registration methods for urban building areas, aiming to improve the accuracy and processing efficiency of airborne LiDAR data. An improved post-ICP (Iterative Closest Point) point cloud registration method is proposed to achieve accurate registration and efficient stitching of point clouds, providing potential technical support for practitioners in related fields.
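The post-ICP refinement builds on the classic ICP loop: find nearest-neighbor correspondences, estimate the rigid transform by SVD (Kabsch), apply it, and repeat. A brute-force sketch of that baseline loop on two simulated flight strips (not the paper's improved variant):

```python
import numpy as np

def best_rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Least-squares rotation R and translation t mapping src onto dst
    (Kabsch/SVD step used inside each ICP iteration)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

def icp(src: np.ndarray, dst: np.ndarray, iters: int = 10) -> np.ndarray:
    """Minimal point-to-point ICP with brute-force correspondences."""
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]        # nearest dst point per src point
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur

rng = np.random.default_rng(3)
strip_a = rng.uniform(0.0, 10.0, size=(150, 3))
strip_b = strip_a + np.array([0.05, -0.03, 0.02])   # misaligned copy
aligned = icp(strip_b, strip_a)
```

Real strip adjustment would use a k-d tree for correspondences and overlap-only matching; the brute-force search here is only for clarity.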
In the task of inspecting underwater suspended pipelines, multi-beam sonar (MBS) can provide two-dimensional water column images (WCIs). However, systematic interferences (e.g., sidelobe effects) may induce misdetection in WCIs. To address this issue and improve detection accuracy, we developed a density-based clustering method for three-dimensional water column point clouds. During the processing of WCIs, sidelobe effects are mitigated using a bilateral filter and brightness transformation. The cross-sectional point cloud of the pipeline is then extracted using the Canny operator. In the detection phase, the target is identified using density-based spatial clustering of applications with noise (DBSCAN). However, the selection of appropriate DBSCAN parameters is obscured by the uneven distribution of the water column point cloud. To overcome this, we propose an improved DBSCAN based on a parameter interval estimation method (PIE-DBSCAN). First, kernel density estimation (KDE) is used to determine the candidate interval of parameters, after which the exact cluster number is determined via density peak clustering (DPC). Finally, the optimal parameters are selected by comparing the mean silhouette coefficients. To validate the performance of PIE-DBSCAN, we collected water column point clouds from an anechoic tank and the South China Sea. PIE-DBSCAN successfully detected both the target points of the suspended pipeline and non-target points on the seafloor surface. Compared to the K-Means and Mean-Shift algorithms, PIE-DBSCAN demonstrates superior clustering performance and feasibility in practical applications.
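PIE-DBSCAN's parameter estimation (KDE, DPC, silhouette comparison) sits on top of plain DBSCAN. For reference, a minimal O(n²) DBSCAN that those estimated eps/min_pts parameters would feed into (illustrative data, not water column measurements):

```python
import numpy as np

def dbscan(points: np.ndarray, eps: float, min_pts: int) -> np.ndarray:
    """Minimal DBSCAN: one label per point, -1 marks noise."""
    n = len(points)
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    neighbors = [np.flatnonzero(row <= eps) for row in d]
    labels = np.full(n, -1)
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or len(neighbors[i]) < min_pts:
            continue                       # already claimed, or not a core point
        labels[i] = cluster
        frontier = list(neighbors[i])
        while frontier:                    # grow the cluster density-reachably
            j = frontier.pop()
            if labels[j] == -1:
                labels[j] = cluster
                if len(neighbors[j]) >= min_pts:
                    frontier.extend(neighbors[j])
        cluster += 1
    return labels

rng = np.random.default_rng(4)
blob1 = rng.normal(0.0, 0.1, size=(30, 3))     # e.g. pipeline cross-section points
blob2 = rng.normal(5.0, 0.1, size=(30, 3))     # e.g. seafloor return
outlier = np.array([[20.0, 20.0, 20.0]])       # isolated interference point
pts = np.vstack([blob1, blob2, outlier])
labels = dbscan(pts, eps=0.5, min_pts=5)
```

The two dense groups come out as separate clusters and the isolated point is flagged as noise, which is exactly the behavior the eps/min_pts tuning in PIE-DBSCAN is trying to guarantee on unevenly distributed data.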
A new object-oriented method has been developed for the extraction of Mars rocks from Mars rover data. It is based on a combination of Mars rover imagery and 3D point cloud data. First, Navcam or Pancam images taken by the Mars rovers are segmented into homogeneous objects with a mean-shift algorithm. Then, the objects in the segmented images are classified into small rock candidates, rock shadows, and large objects. Rock shadows and large objects are considered as the regions within which large rocks may exist. In these regions, large rock candidates are extracted through ground-plane fitting with the 3D point cloud data. Small and large rock candidates are combined and postprocessed to obtain the final rock extraction results. The shape properties of the rocks (angularity, circularity, width, height, and width-height ratio) have been calculated for subsequent geological studies.
To address the high false-detection rate for nearby interfering point clouds and the high missed-detection rate for sparse distant point clouds in autonomous driving scenarios, an obstacle point cloud detection algorithm based on an improved PointPillars is proposed. First, the point cloud within each pillar is feature-encoded by an aggregation module and a shared multi-layer perceptron (MLP), and a combination of max pooling and average pooling maps the salient and detailed features of the point cloud into pillar features. Second, to address the algorithm's insufficient attention to and use of pseudo-image features, a pseudo-image feature extraction module with a coordinate attention (CA) mechanism and residual connections (attention and residual second block, ARSB) is introduced to fuse deep and shallow feature maps, optimize the gradient flow, and enhance the algorithm's focus on valid targets. Experimental results show that the improved algorithm achieves high detection accuracy on global point clouds, with average precision surpassing PointPillars, the sparse-to-dense 3D object detector (STD), and other point cloud detection algorithms; its accuracy advantage is especially pronounced for the car category, and its detection speed meets real-time requirements.
For the first time, this article introduces a LiDAR Point Clouds Dataset of Ships composed of both collected and simulated data to address the scarcity of LiDAR data in maritime applications. The collected data are acquired using specialized maritime LiDAR sensors in both inland waterways and wide-open ocean environments. The simulated data are generated by placing a ship in the LiDAR coordinate system and scanning it with a redeveloped Blensor that emulates the operation of a LiDAR sensor equipped with various laser beams. Furthermore, we also render point clouds for foggy and rainy weather conditions. To describe a realistic shipping environment, a dynamic tail wave is modeled by iterating the wave elevation of each point in a time series. Finally, networks designed for small objects are migrated to ship applications by training them on our dataset. The positive effect of the simulated data is demonstrated in object detection experiments, and the negative impact of tail waves as noise is verified in single-object tracking experiments. The dataset is available at https://github.com/zqy411470859/ship_dataset.
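The dynamic tail wave is described as iterating each point's wave elevation over a time series; a single sinusoidal component already illustrates the idea. The amplitude, wavelength, and speed below are arbitrary placeholders, not the dataset's actual wave model:

```python
import numpy as np

def wave_elevation(xy: np.ndarray, t: float, amp: float = 0.3,
                   wavelength: float = 6.0, speed: float = 2.0,
                   direction: float = 0.0) -> np.ndarray:
    """Elevation of one sinusoidal wave component at time t.
    Stepping t forward animates the wave surface point by point."""
    k = 2.0 * np.pi / wavelength                      # wavenumber
    d = np.array([np.cos(direction), np.sin(direction)])  # propagation direction
    phase = k * (xy @ d) - k * speed * t
    return amp * np.sin(phase)

# a 20x20 grid of water-surface points behind the ship
grid = np.stack(np.meshgrid(np.linspace(0, 10, 20),
                            np.linspace(0, 10, 20)), axis=-1).reshape(-1, 2)
z0 = wave_elevation(grid, t=0.0)
z1 = wave_elevation(grid, t=0.5)   # same points, later time step
```

A realistic tail wake would superpose several such components with ship-speed-dependent parameters; the single component is enough to inject time-varying surface noise into simulated scans.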
In this paper, a novel compression framework based on 3D point cloud data is proposed for telepresence, consisting of two parts. The first part removes spatial redundancy: a robust Bayesian framework is designed to track the human motion, and the 3D point cloud data of the human body are acquired using the tracked 2D box. The second part removes the temporal redundancy of the 3D point cloud data. The temporal redundancy between point clouds is removed using motion vectors: for each cluster in the current frame, the most similar cluster in the previous frame is found by comparing cluster features, and the cluster in the current frame is replaced by the motion vector to compress the current frame. First, the B-SHOT (binary signatures of histograms of orientations) descriptor is applied to represent point features for matching corresponding points between two frames. Second, the K-means algorithm is used to generate clusters, because there are many unsuccessfully matched points in the current frame. The matching operation is exploited to find the corresponding clusters between the point cloud data of two frames. Finally, the cluster information in the current frame is replaced by the motion vector to compress the current frame, and the unsuccessfully matched clusters in the current frame and the motion vectors are transmitted to the remote end. To reduce the calculation time of the B-SHOT descriptor, we introduce an octree structure into it. In particular, to improve the robustness of the matching operation, we design a cluster feature to estimate the similarity between two clusters. Experimental results show the better performance of the proposed method, owing to its lower calculation time and higher compression ratio. The proposed method achieves a compression ratio of 8.42 and a delay time of 1228 ms, compared with a compression ratio of 5.99 and a delay time of 2163 ms for the octree-based compression method under similar distortion rates.
This study presents a framework for the semi-automatic detection of rock discontinuities using a three-dimensional (3D) point cloud (PC). The process begins by selecting an appropriate neighborhood size, a critical step for feature extraction from the PC. The effects of different neighborhood sizes (k = 5, 10, 20, 50, and 100) have been evaluated to assess their impact on classification performance. After that, 17 geometric and spatial features were extracted from the PC. Next, the ensemble methods AdaBoost.M2, random forest, and decision tree were compared with artificial neural networks to classify the main discontinuity sets. The McNemar test indicates that the classifiers are statistically significant. The random forest classifier consistently achieves the highest performance, with an accuracy exceeding 95% when using a neighborhood size of k = 100, while recall, F-score, and Cohen's Kappa also demonstrate high success. SHapley Additive exPlanations (SHAP), an explainable AI technique, has been used to evaluate feature importance and improve the explainability of black-box machine learning models in the context of rock discontinuity classification. The analysis reveals that features such as normal vectors, verticality, and Z-values have the greatest influence on identifying main discontinuity sets, while linearity, planarity, and eigenvalues contribute less, making the model more transparent and easier to understand. After classification, individual discontinuity sets were detected from the main discontinuity sets using a revised DBSCAN. Finally, the orientation parameters of the plane fitted to each discontinuity were derived from the plane parameters obtained using Random Sample Consensus (RANSAC). Two real-world datasets (obtained from SfM and LiDAR) and one synthetic dataset were used to validate the proposed method, which successfully identified rock discontinuities and their orientation parameters (dip angle/direction).
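Several of the features ranked by SHAP above (linearity, planarity, verticality) are standard covariance-eigenvalue descriptors of a point's k-neighborhood. A sketch of how they are commonly computed — the formulas follow the usual eigenvalue-feature definitions, and the paper's exact 17 features may differ:

```python
import numpy as np

def eigen_features(neighborhood: np.ndarray) -> dict[str, float]:
    """Covariance-eigenvalue features of one k-neighborhood:
    linearity, planarity, and verticality."""
    c = neighborhood - neighborhood.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(c.T))
    l3, l2, l1 = eigvals                 # ascending order: l1 is the largest
    normal = eigvecs[:, 0]               # smallest-eigenvalue direction
    return {
        "linearity": float((l1 - l2) / l1),
        "planarity": float((l2 - l3) / l1),
        "verticality": float(1.0 - abs(normal[2])),   # 0 = horizontal plane
    }

# flat, horizontal synthetic patch: expect high planarity, near-zero verticality
rng = np.random.default_rng(5)
patch = np.column_stack([rng.uniform(-1.0, 1.0, (100, 2)),
                         0.001 * rng.normal(size=100)])
feats = eigen_features(patch)
```

Computed per point over its k nearest neighbors, these values form part of the feature vector the random forest classifies.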
Well logging technology has accumulated a large amount of historical data through four generations of technological development, which forms the basis of well logging big data and digital assets. However, the value of these data has not been well stored, managed, and mined. The development of cloud computing technology provides a rare opportunity for a well logging big data private cloud. The traditional petrophysical evaluation and interpretation model has encountered great challenges when facing new evaluation objects, and research on integrating distributed storage, processing, and learning functions into a logging big data private cloud has not yet been carried out. This study aims to establish a distributed logging big data private cloud platform centered on a unified learning model, which achieves distributed storage and processing of logging big data and facilitates the learning of novel knowledge patterns via a unified logging learning model integrating physical simulation and data models in a large-scale function space, thus resolving the geo-engineering evaluation problem of geothermal fields. Based on the research idea of "logging big data cloud platform - unified logging learning model - large function space - knowledge learning and discovery - application", the theoretical foundation of the unified learning model, the cloud platform architecture, data storage and learning algorithms, computing power allocation and platform monitoring, platform stability, and data security are analyzed. The designed logging big data cloud platform realizes parallel distributed storage and processing of data and learning algorithms. The feasibility of constructing a well logging big data cloud platform based on a unified learning model of physics and data is analyzed in terms of the structure, ecology, management, and security of the cloud platform. The case study shows that the logging big data cloud platform has obvious technical advantages over traditional logging evaluation methods in terms of knowledge discovery methods, data and software sharing, accuracy, speed, and complexity.
Building outline extraction from segmented point clouds is a critical step of building footprint generation. Existing methods for this task are often based on the convex hull and the α-shape algorithm; others use grids or Delaunay triangulation. The common challenge of these methods is the determination of proper parameters. While deep learning-based methods have shown promise in reducing the impact of and dependence on parameter selection, their reliance on datasets with ground truth information limits their generalization. In this study, a novel unsupervised approach, called PH-shape, is proposed to address this challenge. Persistence Homology (PH) and the Fourier descriptor are introduced into the task of building outline extraction. PH, from the theory of topological data analysis, supports the automatic and adaptive determination of a proper buffer radius, thus enabling parameter-adaptive extraction of building outlines through buffering and "inverse" buffering. Quantitative and qualitative experiment results on two datasets with different point densities demonstrate the effectiveness of the proposed approach in the face of various building types, interior boundaries, and density variation within the point cloud of a single building. The PH-supported parameter adaptivity helps the proposed approach overcome the challenge of parameter determination and data variations and achieve reliable extraction of building outlines.
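The convex-hull baseline that PH-shape improves on is itself simple to state; Andrew's monotone chain computes it in O(n log n) after sorting. A self-contained 2D sketch (the example points are illustrative):

```python
def convex_hull(points):
    """Andrew's monotone chain convex hull; returns the hull vertices
    in counter-clockwise order starting from the lowest-leftmost point."""
    pts = sorted(set(map(tuple, points)))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o): >0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                       # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):             # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]      # endpoints are shared, drop duplicates

# square footprint with two interior points that must be excluded
square = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 1), (0.5, 0.7)]
hull = convex_hull(square)
```

The limitation motivating α-shapes and PH-shape is visible here: a convex hull can never recover concave building outlines or interior boundaries, no matter the data quality.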
With the rise of remote collaboration, the demand for advanced storage and collaboration tools has rapidly increased. However, traditional collaboration tools primarily rely on access control, leaving data stored on cloud servers vulnerable due to insufficient encryption. This paper introduces a novel mechanism that encrypts data in 'bundle' units, designed to meet the dual requirements of efficiency and security for frequently updated collaborative data. Each bundle includes update information, allowing only the updated portions to be re-encrypted when changes occur. The proposed encryption method addresses the inefficiencies of traditional encryption modes, such as Cipher Block Chaining (CBC) and Counter (CTR), which require decrypting and re-encrypting the entire dataset whenever updates occur. The proposed method leverages update-specific information embedded within data bundles and metadata that maps the relationship between these bundles and the plaintext data. By utilizing this information, the method accurately identifies the modified portions and selectively re-encrypts only those sections. This approach significantly enhances the efficiency of data updates while maintaining high performance, particularly in large-scale data environments. To validate this approach, we conducted experiments measuring execution time as both the size of the modified data and the total dataset size varied. The results show that the proposed method significantly outperforms CBC and CTR modes in execution speed, with greater performance gains as data size increases. Additionally, our security evaluation confirms that this method provides robust protection against both passive and active attacks.
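The bundle idea can be illustrated independently of any real cipher: if each bundle is encrypted under its own nonce, editing one bundle requires re-encrypting only that bundle. The sketch below uses a toy SHA-256 keystream purely for illustration — it is NOT the paper's scheme and NOT a production cipher:

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream from SHA-256 in counter mode (illustration only)."""
    out, ctr = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:length]

def encrypt_bundle(key: bytes, bundle_id: int, data: bytes) -> bytes:
    """XOR the bundle with a per-bundle keystream; the same call decrypts."""
    ks = keystream(key, bundle_id.to_bytes(8, "big"), len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

# document split into bundles; only the edited bundle is re-encrypted
key = b"shared-group-key"
bundles = [b"chunk-0 text", b"chunk-1 text", b"chunk-2 text"]
ciphertexts = [encrypt_bundle(key, i, b) for i, b in enumerate(bundles)]

bundles[1] = b"chunk-1 EDIT"                           # update hits bundle 1 only
ciphertexts[1] = encrypt_bundle(key, 1, bundles[1])    # single re-encryption

decrypted = [encrypt_bundle(key, i, c) for i, c in enumerate(ciphertexts)]
```

In CBC mode, by contrast, every ciphertext block depends on its predecessor, so an edit forces re-encryption of everything downstream; per-bundle independence is what caps the update cost at one bundle.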
Tunnel deformation monitoring is a crucial task in evaluating tunnel stability during the metro operation period. As an innovative technique, Terrestrial Laser Scanning (TLS) can collect high-density, high-accuracy point cloud data in a few minutes, which makes it promising for tunnel deformation monitoring. Here, an efficient method for extracting tunnel cross-sections and performing convergence analysis using dense TLS point cloud data is proposed. First, the tunnel orientation is determined using principal component analysis (PCA) in the Euclidean plane. Two control points are introduced to detect and remove unsuitable points by point cloud division, and the ground points are then removed by defining an elevation band of 0.5 m. Next, a z-score method is introduced to detect and remove outliers. Because the standard shape of a tunnel cross-section is circular, circle fitting is implemented using the least-squares method. Afterward, convergence analysis is performed at angles of 0°, 30°, and 150°. The feasibility of the proposed approach is tested on a TLS point cloud of a Nanjing subway tunnel acquired with a FARO X330 laser scanner. The results indicate that the proposed methodology achieves an overall accuracy of 1.34 mm, in agreement with measurements acquired by a total station instrument. The proposed methodology provides new insights and references for applications of TLS in tunnel deformation monitoring and can also be extended to other engineering applications.
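The cross-section circle fit mentioned above is a linear least-squares problem in the algebraic (Kåsa) formulation: x² + y² = 2ax + 2by + c, with center (a, b) and radius √(c + a² + b²). A sketch on a synthetic ring (the metro-tunnel radius used below is an arbitrary example):

```python
import numpy as np

def fit_circle(points: np.ndarray) -> tuple[float, float, float]:
    """Algebraic (Kasa) least-squares circle fit to 2D cross-section points."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)  # solve A @ [a,b,c] = b
    r = np.sqrt(c + cx**2 + cy**2)
    return float(cx), float(cy), float(r)

# synthetic cross-section: circle of radius 2.75 m centered at (3, 1)
theta = np.linspace(0, 2 * np.pi, 120, endpoint=False)
ring = np.column_stack([3.0 + 2.75 * np.cos(theta),
                        1.0 + 2.75 * np.sin(theta)])
cx, cy, r = fit_circle(ring)
```

Convergence at a given angle can then be read as the radial residual of the scan points against the fitted circle at that angle.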
Landslides are one of the most disastrous geological hazards in southwestern China. Once a landslide becomes unstable, it threatens the lives and safety of local residents. However, empirical studies have predominantly focused on landslides that occur on land. To this end, we aim to investigate ashore and underwater landslide data synchronously. This study proposes an optimized mosaicking method for ashore and underwater landslide data that fuses an airborne laser point cloud with multi-beam depth sounder images. Owing to their relatively high efficiency and large coverage area, airborne laser measurement systems are suitable for emergency investigations of landslides. Based on the airborne laser point cloud, traversing the point with the lowest elevation value in each point set enables rapid extraction of crude channel boundaries. Further meticulous extraction of the channel boundaries is then implemented using a probability mean value optimization method. In addition, the integrated ashore and underwater landslide data are synthesized using the spatial guide line between the channel boundaries and the underwater multibeam sonar images. A landslide located on the right bank of the middle reaches of the Yalong River is selected as a case study to demonstrate that the proposed method has higher precision than traditional methods. The experimental results show that the mosaicking method in this study can meet the basic needs of landslide modeling and provide a basis for qualitative and quantitative analysis and stability prediction of landslides.
基金supported by the National Natural Science Foundation of China (Grant Nos. 52304139 and 52325403) and the CCTEG Coal Mining Research Institute funding (Grant No. KCYJY-2024-MS-10).
文摘3D laser scanning technology is widely used in underground openings for high-precision, rapid, and nondestructive structural evaluations. Segmenting large 3D point cloud datasets, particularly in coal mine roadways with multi-scale targets, remains challenging. This paper proposes an enhanced segmentation method integrating improved PointNet++ with a coverage-voted strategy. The coverage-voted strategy reduces data while preserving multi-scale target topology. The segmentation is achieved using an enhanced PointNet++ algorithm with a normalization preprocessing head, resulting in a 94% accuracy for common supporting components. Ablation experiments show that the preprocessing head and coverage strategies increase segmentation accuracy by 20% and 2%, respectively, and improve Intersection over Union (IoU) for bearing plate segmentation by 58% and 20%. The accuracy of the current pretraining segmentation model may be affected by variations in surface support components, but it can be readily enhanced through re-optimization with additional labeled point cloud data. This proposed method, combined with a previously developed machine learning model that links rock bolt load and the deformation field of its bearing plate, provides a robust technique for simultaneously measuring the load of multiple rock bolts in a single laser scan.
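The paper's coverage-voted reduction strategy is not detailed in the abstract; as a rough stand-in in the same spirit, the sketch below shows a plain voxel-grid downsample that keeps one centroid per occupied voxel, so every region of the cloud (including small targets such as bearing plates) still contributes at least one point after reduction. The function name and voxel size are illustrative assumptions, not the paper's method.

```python
from collections import defaultdict

def voxel_downsample(points, voxel=0.1):
    """Keep one representative point (the centroid) per occupied voxel.

    This preserves spatial coverage: every voxel touched by the
    original cloud still contributes a point after reduction.
    """
    buckets = defaultdict(list)
    for p in points:
        key = tuple(int(c // voxel) for c in p)
        buckets[key].append(p)
    return [
        tuple(sum(c) / len(ps) for c in zip(*ps))
        for ps in buckets.values()
    ]

# Two points fall in one voxel, a third in another: 3 points -> 2.
cloud = [(0.01, 0.02, 0.0), (0.03, 0.01, 0.0), (0.95, 0.0, 0.0)]
reduced = voxel_downsample(cloud, voxel=0.1)
print(len(reduced))  # 2
```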
文摘To support the process of grasping objects on a tabletop for blind people or a robotic arm, it is necessary to address fundamental computer vision tasks, such as detecting, recognizing, and locating objects in space, and determining the grasping position. These results can then be used to guide the visually impaired or to execute grasping tasks with a robotic arm. In this paper, we collected, annotated, and published the benchmark TQUGraspingObject dataset for testing, validation, and evaluation of deep learning (DL) models for detecting, recognizing, and localizing grasping objects in 2D and 3D space, especially in 3D point cloud data. Our dataset is collected in a shared room, with common everyday objects placed on the tabletop in jumbled positions, captured with an Intel RealSense D435 (IR-D435). This dataset includes more than 63k RGB-D pairs and related data such as the segmented 3D object point cloud, the coordinate system normalization matrix, the normalized 3D object point cloud, and the hand pose for grasping each object. At the same time, we also conducted experiments on four DL networks with the best performance: SSD-MobileNetV3, ResNet50-Transformer, ResNet101-Transformer, and YOLOv12. The results show that YOLOv12 achieves the most suitable results in detecting and recognizing objects in images. All data, annotations, toolkit, source code, point cloud data, and results are publicly available on our project website: https://github.com/HuaTThanhIT2327Tqu/datasetv2.
文摘Evaluating rock mass quality using three-dimensional (3D) point clouds is crucial for discontinuity extraction and is widely applied in various industrial sectors. However, the utilization of this method in geological surveys remains limited. Notable limitations of current research include the scarcity of validation using simple geometric shapes for discontinuity extraction methods, and the lack of studies that target both planar and linear discontinuities. To address these gaps, this study proposes a workflow for identifying discontinuity planes and traces in rock outcrops from photogrammetric 3D modeling, employing the Compass and Facets plugins in the open-source CloudCompare software. Prior to field application, the efficacy of the extraction methods was first evaluated using experimental datasets of a cube and an isosceles triangular prism generated under laboratory-controlled conditions. This validation demonstrated exceptional accuracy, with the dip and dip direction (DDD) of extracted structures consistently within ±2° of the actual values. Following this rigorous laboratory validation, the methodology was applied to a more complex natural rock outcrop (Miocene–Pliocene deposits in Japan), demonstrating its applicability for identifying structures in realistic geological settings. The results showed that the dip and dip direction trends of the extracted bedding planes and faults were consistent with field measurements, achieving a time reduction of approximately 40% compared to traditional methods. In conclusion, through strictly controlled initial verification and subsequent successful application to a complex natural setting, this study confirmed that the proposed workflow can effectively and efficiently extract discontinuous geological structures from point clouds.
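The dip and dip direction reported above follow directly from a fitted plane's normal vector. A minimal sketch of that conversion, assuming the common convention x = east, y = north, z = up (the function name is ours, not from the paper):

```python
import math

def dip_and_dip_direction(normal):
    """Convert a plane's normal (nx, ny, nz) into dip angle and dip
    direction, the orientation convention used for rock discontinuities.

    Normals are flipped to point upward so the result is unique.
    """
    nx, ny, nz = normal
    if nz < 0:                      # force an upward-pointing normal
        nx, ny, nz = -nx, -ny, -nz
    norm = math.sqrt(nx * nx + ny * ny + nz * nz)
    dip = math.degrees(math.acos(nz / norm))          # 0 deg = horizontal plane
    dip_dir = math.degrees(math.atan2(nx, ny)) % 360  # azimuth of steepest descent
    return dip, dip_dir

print(dip_and_dip_direction((0.0, 0.0, 1.0)))  # horizontal plane: dip 0
print(dip_and_dip_direction((1.0, 0.0, 1.0)))  # dips 45 deg toward east (090)
```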
基金supported in part by the National Key Research and Development Program of China under Grant 2021YFB3101100, in part by the National Natural Science Foundation of China under Grants 42461057, 62272123, and 42371470, in part by the Fundamental Research Program of Shanxi Province under Grant 202303021212164, and in part by the Postgraduate Education Innovation Program of Shanxi Province under Grant 2024KY474.
文摘Recently, large-scale deep learning models have been increasingly adopted for point cloud classification. However, these methods typically require collecting extensive datasets from multiple clients, which may lead to privacy leaks. Federated learning provides an effective solution to data leakage by eliminating the need for data transmission, relying instead on the exchange of model parameters. However, the uneven distribution of client data can still affect the model's ability to generalize effectively. To address these challenges, we propose a new framework for point cloud classification called the Federated Dynamic Aggregation Selection Strategy-based Multi-Receptive Field Fusion Classification Framework (FDASS-MRFCF). Specifically, we tackle these challenges with two key innovations: (1) During the client local training phase, we propose a Multi-Receptive Field Fusion Classification Model (MRFCM), which captures local and global structures in point cloud data through dynamic convolution and multi-scale feature fusion, enhancing the robustness of point cloud classification. (2) In the server aggregation phase, we introduce a Federated Dynamic Aggregation Selection Strategy (FDASS), which employs a hybrid strategy to average client model parameters, skip aggregation, or reallocate local models to different clients, thereby balancing global consistency and local diversity. We evaluate our framework using the ModelNet40 and ShapeNetPart benchmarks, demonstrating its effectiveness. The proposed method is expected to significantly advance the field of point cloud classification in a secure environment.
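FDASS's hybrid aggregation rules are not spelled out in the abstract, but the baseline it builds on, averaging client model parameters in the server phase, is standard federated averaging (FedAvg). A minimal sketch on flat parameter vectors, with hypothetical names:

```python
def fed_avg(client_weights, client_sizes):
    """Weighted federated averaging of flat parameter vectors.

    Each client contributes proportionally to its local sample count,
    the standard FedAvg rule that server-side aggregation strategies
    like FDASS extend or selectively skip.
    """
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    avg = [0.0] * n_params
    for w, n in zip(client_weights, client_sizes):
        for i, p in enumerate(w):
            avg[i] += p * n / total
    return avg

# Two clients with unequal data: the larger client dominates the average.
global_w = fed_avg([[1.0, 2.0], [3.0, 4.0]], client_sizes=[10, 30])
print(global_w)  # [2.5, 3.5]
```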
基金The National Natural Science Foundation of China (No. 52338011), the Start-up Research Fund of Southeast University (No. RF1028624058), the Southeast University Interdisciplinary Research Program for Young Scholars, and the National Key Research and Development Program of China (No. 2024YFC3014103).
文摘The virtual preassembly of super-high steel bridge towers faces a challenge in the efficient and precise extraction of complex cross-sectional features. Factors such as fabrication errors, gravity-induced deformations, and temperature fluctuations can compromise the accuracy of contour extraction. To address these limitations, an improved Alpha-shape-based point cloud contour extraction method is proposed. The proposed approach uses a hierarchical strategy to process three-dimensional laser scanning point clouds. The processed data are then subjected to curvature-adaptive voxel filtering to reduce acquisition noise. In addition, an enhanced iterative closest point (ICP) variant with correspondence validation accurately aligns the discrete point cloud segments. The proposed curvature-responsive Alpha-shape framework enables multiscale contour delineation through topology-adaptive threshold modulation, which resolves boundary ambiguities in geometrically complex cross-sections. The method was experimentally validated using field-acquired measurement datasets from the Zhangjinggao Yangtze River Bridge tower segments, confirming its capability to reconstruct noncanonical cross-sectional geometries. Three contour extraction methods, including Poisson reconstruction, the conventional Alpha-shape algorithm, and random sample consensus with ICP (RANSAC-ICP), were compared to evaluate the performance of the proposed Alpha-shape algorithm. The results demonstrate that the proposed method achieves superior contour extraction accuracy and data reduction efficiency, highlighting its effectiveness in contour extraction tasks.
基金Postgraduate Innovation Top-notch Talent Training Project of Hunan Province, Grant/Award Number: CX20220045; Scientific Research Project of National University of Defense Technology, Grant/Award Number: 22-ZZCX-07; New Era Education Quality Project of Anhui Province, Grant/Award Number: 2023cxcysj194; National Natural Science Foundation of China, Grant/Award Numbers: 62201597, 62205372, 1210456; Foundation of Hefei Comprehensive National Science Center, Grant/Award Number: KY23C502.
文摘Large-scale point cloud datasets form the basis for training various deep learning networks and achieving high-quality network processing tasks. Due to the diversity and robustness constraints of the data, data augmentation (DA) methods are utilised to expand dataset diversity and scale. However, due to the complex and distinct characteristics of LiDAR point cloud data from different platforms (such as missile-borne and vehicular LiDAR data), directly applying traditional 2D visual domain DA methods to 3D data can lead to networks trained using this approach not robustly achieving the corresponding tasks. To address this issue, the present study explores DA for missile-borne LiDAR point clouds using a Monte Carlo (MC) simulation method that closely resembles practical application. Firstly, the model of the multi-sensor imaging system is established, taking into account the joint errors arising from the platform itself and the relative motion during the imaging process. A distortion simulation method based on MC simulation for augmenting missile-borne LiDAR point cloud data is proposed, underpinned by an analysis of combined errors between different modal sensors, achieving high-quality augmentation of point cloud data. The effectiveness of the proposed method in addressing imaging system errors and distortion simulation is validated using the imaging scene dataset constructed in this paper. Comparative experiments between the proposed point cloud DA algorithm and the current state-of-the-art algorithms in point cloud detection and single object tracking tasks demonstrate that the proposed method can improve the performance of networks trained on unaugmented datasets by over 17.3% and 17.9%, respectively, surpassing the SOTA performance of current point cloud DA algorithms.
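The paper's joint error model is not given in the abstract; the sketch below is only a toy illustration of the Monte Carlo idea, perturbing each point with ranging noise along its line of sight plus a small random rigid rotation standing in for residual attitude error. All function names and noise magnitudes are assumptions.

```python
import numpy as np

def mc_perturb(points, sigma_range=0.02, sigma_angle=0.001, rng=None):
    """Monte Carlo-style augmentation sketch: per-point ranging noise
    plus a small random rotation about z, a crude stand-in for joint
    platform/sensor imaging errors."""
    rng = np.random.default_rng() if rng is None else rng
    pts = np.asarray(points, dtype=float)
    # Ranging noise acts along each point's direction from the sensor origin.
    dirs = pts / np.linalg.norm(pts, axis=1, keepdims=True)
    pts = pts + dirs * rng.normal(0.0, sigma_range, size=(len(pts), 1))
    # Small-angle rotation about z models residual attitude error.
    a = rng.normal(0.0, sigma_angle)
    rot = np.array([[np.cos(a), -np.sin(a), 0.0],
                    [np.sin(a),  np.cos(a), 0.0],
                    [0.0,        0.0,       1.0]])
    return pts @ rot.T

cloud = np.array([[10.0, 0.0, 1.0], [0.0, 12.0, 2.0]])
aug = mc_perturb(cloud, rng=np.random.default_rng(0))
print(aug.shape)  # (2, 3)
```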
基金supported by the National Natural Science Foundation of China (Grant Nos. 41941017 and 42177139) and the Graduate Innovation Fund of Jilin University (Grant No. 2024CX099).
文摘The spatial distribution of discontinuities and the size of rock blocks are key indicators for rock mass quality evaluation and rockfall risk assessment. Traditional manual measurement is often dangerous or unreachable at some high-steep rock slopes. In contrast, unmanned aerial vehicle (UAV) photogrammetry is not limited by terrain conditions and can efficiently collect high-precision three-dimensional (3D) point clouds of rock masses through all-round and multi-angle photography for rock mass characterization. In this paper, a new method based on a 3D point cloud is proposed for discontinuity identification and refined rock block modeling. The method consists of four steps: (1) Establish a point cloud spatial topology, and calculate the point cloud normal vector and average point spacing based on several machine learning algorithms; (2) Extract discontinuities using the density-based spatial clustering of applications with noise (DBSCAN) algorithm and fit the discontinuity plane by combining principal component analysis (PCA) with the natural breaks (NB) method; (3) Propose a method of inserting points in the line segment to generate an embedded discontinuity point cloud; and (4) Adopt a Poisson reconstruction method for refined rock block modeling. The proposed method was applied to an outcrop of an ultrahigh steep rock slope and compared with the results of previous studies and manual surveys. The results show that the method can eliminate the influence of discontinuity undulations on the orientation measurement and describe the local concave-convex characteristics in the modeling of rock blocks. The calculation results are accurate and reliable, which can meet the practical requirements of engineering.
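Step (2) fits a plane to each clustered discontinuity with PCA. The core of that fit can be sketched as follows: the eigenvector of the point covariance matrix with the smallest eigenvalue is the plane normal (the natural-breaks refinement is omitted here; the function name is ours):

```python
import numpy as np

def fit_plane_pca(points):
    """Fit a plane to 3D points with PCA: the eigenvector of the
    covariance matrix with the smallest eigenvalue is the plane normal."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    cov = np.cov((pts - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    normal = eigvecs[:, 0]                   # smallest-variance direction
    return centroid, normal

# Noise-free points on the plane z = 0: the recovered normal is (0, 0, +/-1).
pts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0), (0.5, 0.2, 0)]
centroid, normal = fit_plane_pca(pts)
print(np.round(np.abs(normal), 6))
```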
基金Guangxi Key Laboratory of Spatial Information and Geomatics (21-238-21-12) and the Guangxi Young and Middle-aged Teachers' Research Fundamental Ability Enhancement Project (2023KY1196).
文摘Airborne LiDAR (Light Detection and Ranging) is an evolving high-tech active remote sensing technology that can acquire large-area topographic data and quickly generate DEM (Digital Elevation Model) products. Combined with image data, this technology can further enrich and extract spatial geographic information. In practice, however, due to the limited operating range of airborne LiDAR and the large area of a task, it is necessary to register and stitch the point clouds of adjacent flight strips. By eliminating gross errors, the systematic errors in the data can be effectively reduced. Thus, this paper studies point cloud registration methods in urban building areas, aiming to improve the accuracy and processing efficiency of airborne LiDAR data. An improved post-ICP (Iterative Closest Point) point cloud registration method is proposed to achieve accurate registration and efficient stitching of point clouds, providing potential technical support for practitioners in related fields.
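Each ICP iteration solves a closed-form rigid alignment between matched points; this core step (the Kabsch/SVD solution, not the paper's full improved pipeline) can be sketched as:

```python
import numpy as np

def kabsch(src, dst):
    """Closed-form rigid alignment (the core of each ICP iteration):
    find R, t minimizing ||R @ src + t - dst|| given known correspondences."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Rotate a strip by 10 degrees about z, translate it, then recover the transform.
theta = np.radians(10)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
src = np.random.default_rng(1).random((20, 3))
dst = src @ R_true.T + np.array([0.5, -0.2, 0.1])
R, t = kabsch(src, dst)
print(np.allclose(R, R_true))  # True
```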
基金the National Natural Science Foundation of China (Nos. 42176188 and 42176192), the Hainan Provincial Natural Science Foundation of China (No. 421CXTD442), the Stable Supporting Fund of Acoustic Science and Technology Laboratory (No. JCKYS2024604SSJS007), the Fundamental Research Funds for the Central Universities (No. 3072024CFJ0504), and the Harbin Engineering University Doctoral Research and Innovation Fund (No. XK2050021034).
文摘In the task of inspecting underwater suspended pipelines, multi-beam sonar (MBS) can provide two-dimensional water column images (WCIs). However, systematic interferences (e.g., sidelobe effects) may induce misdetection in WCIs. To address this issue and improve the accuracy of detection, we developed a density-based clustering method for three-dimensional water column point clouds. During the processing of WCIs, sidelobe effects are mitigated using a bilateral filter and brightness transformation. The cross-sectional point cloud of the pipeline is then extracted by using the Canny operator. In the detection phase, the target is identified by using density-based spatial clustering of applications with noise (DBSCAN). However, the selection of appropriate DBSCAN parameters is complicated by the uneven distribution of the water column point cloud. To overcome this, we propose an improved DBSCAN based on a parameter interval estimation method (PIE-DBSCAN). First, kernel density estimation (KDE) is used to determine the candidate interval of parameters, after which the exact cluster number is determined via density peak clustering (DPC). Finally, the optimal parameters are selected by comparing the mean silhouette coefficients. To validate the performance of PIE-DBSCAN, we collected water column point clouds from an anechoic tank and the South China Sea. PIE-DBSCAN successfully detected both the target points of the suspended pipeline and non-target points on the seafloor surface. Compared to the K-Means and Mean-Shift algorithms, PIE-DBSCAN demonstrates superior clustering performance and shows feasibility in practical applications.
基金supported by the National Natural Science Foundation of China (Nos. 41171355 and 41002120).
文摘A new object-oriented method has been developed for the extraction of Mars rocks from Mars rover data. It is based on a combination of Mars rover imagery and 3D point cloud data. First, Navcam or Pancam images taken by the Mars rovers are segmented into homogeneous objects with a mean-shift algorithm. Then, the objects in the segmented images are classified into small rock candidates, rock shadows, and large objects. Rock shadows and large objects are considered as the regions within which large rocks may exist. In these regions, large rock candidates are extracted through ground-plane fitting with the 3D point cloud data. Small and large rock candidates are combined and postprocessed to obtain the final rock extraction results. The shape properties of the rocks (angularity, circularity, width, height, and width-height ratio) have been calculated for subsequent geological studies.
文摘To address the high false-detection rate for nearby interfering point clouds and the high missed-detection rate for sparse distant point clouds in autonomous driving scenarios, an obstacle point cloud detection algorithm based on an improved PointPillars is proposed. First, the point cloud within each pillar is feature-encoded by an aggregation module and a shared multi-layer perceptron (shared MLP), and the salient and detail features of the point cloud are mapped to pillar features by superimposing max pooling and average pooling. Second, to address the insufficient attention to and utilization of pseudo-image features, a coordinate attention (CA) mechanism and a residually connected pseudo-image feature extraction module (attention and residual second block, ARSB) are introduced to fuse deep and shallow feature maps, improve the gradient flow, and strengthen the algorithm's focus on valid targets. The experimental results show that the improved algorithm achieves high detection accuracy on the global point cloud, with an average precision exceeding that of PointPillars, the sparse-to-dense 3D object detector (STD), and other point cloud detection algorithms; its advantage is most pronounced on the car category, and its detection speed meets real-time requirements.
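The pillar encoding above superimposes max pooling (salient features) and average pooling (detail features). The exact superposition is not specified in the abstract; the sketch below assumes an elementwise sum over per-point features in one pillar:

```python
import numpy as np

def encode_pillar(point_feats):
    """Map per-point features inside one pillar to a single pillar
    feature: max pooling captures salient responses, average pooling
    keeps detail; here the two are summed elementwise (an assumption)."""
    f = np.asarray(point_feats, dtype=float)   # shape (n_points, n_channels)
    return f.max(axis=0) + f.mean(axis=0)

# Three points with two feature channels each.
pillar = [[1.0, 0.0], [3.0, 2.0], [2.0, 4.0]]
print(encode_pillar(pillar))  # max [3, 4] + mean [2, 2] -> [5. 6.]
```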
基金supported by the National Natural Science Foundation of China (62173103) and the Fundamental Research Funds for the Central Universities of China (3072022JC0402, 3072022JC0403).
文摘For the first time, this article introduces a LiDAR Point Clouds Dataset of Ships composed of both collected and simulated data to address the scarcity of LiDAR data in maritime applications. The collected data are acquired using specialized maritime LiDAR sensors in both inland waterways and wide-open ocean environments. The simulated data are generated by placing a ship in the LiDAR coordinate system and scanning it with a redeveloped Blensor that emulates the operation of a LiDAR sensor equipped with various laser beams. Furthermore, we also render point clouds for foggy and rainy weather conditions. To describe a realistic shipping environment, a dynamic tail wave is modeled by iterating the wave elevation of each point in a time series. Finally, networks designed for small objects are migrated to ship applications by training them on our dataset. The positive effect of simulated data is described in object detection experiments, and the negative impact of tail waves as noise is verified in single-object tracking experiments. The dataset is available at https://github.com/zqy411470859/ship_dataset.
基金This work was supported by the National Natural Science Foundation of China (Nos. 61811530281 and 61861136009), the Guangdong Regional Joint Foundation (No. 2019B1515120076), and the Fundamental Research Funds for the Central Universities.
文摘In this paper, a novel compression framework based on 3D point cloud data is proposed for telepresence, which consists of two parts. One part removes the spatial redundancy: a robust Bayesian framework is designed to track the human motion, and the 3D point cloud data of the human body is acquired by using the tracked 2D box. The other part removes the temporal redundancy of the 3D point cloud data. The temporal redundancy between point clouds is removed by using motion vectors: for each cluster in the current frame, the most similar cluster in the previous frame is found by comparing cluster features, and the cluster in the current frame is replaced by the motion vector to compress the current frame. First, the B-SHOT (binary signatures of histograms of orientations) descriptor is applied to represent the point feature for matching corresponding points between two frames. Second, the K-means algorithm is used to generate clusters, because there are many unsuccessfully matched points in the current frame. The matching operation is exploited to find the corresponding clusters between the point cloud data of two frames. Finally, the cluster information in the current frame is replaced by the motion vector to compress the current frame, and the unsuccessfully matched clusters in the current frame and the motion vectors are transmitted to the remote end. In order to reduce the calculation time of the B-SHOT descriptor, we introduce an octree structure into the B-SHOT descriptor. In particular, in order to improve the robustness of the matching operation, we design a cluster feature to estimate the similarity between two clusters. Experimental results show the better performance of the proposed method due to its lower calculation time and higher compression ratio. The proposed method achieves a compression ratio of 8.42 and a delay time of 1228 ms, compared with a compression ratio of 5.99 and a delay time of 2163 ms in the octree-based compression method under conditions of similar distortion rate.
文摘This study presents a framework for the semi-automatic detection of rock discontinuities using a three-dimensional (3D) point cloud (PC). The process begins by selecting an appropriate neighborhood size, a critical step for feature extraction from the PC. The effects of different neighborhood sizes (k = 5, 10, 20, 50, and 100) have been evaluated to assess their impact on classification performance. After that, 17 geometric and spatial features were extracted from the PC. Next, the ensemble methods AdaBoost.M2 and random forest, together with a decision tree, have been compared with artificial neural networks to classify the main discontinuity sets. The McNemar test indicates that the classifiers are statistically significant. The random forest classifier consistently achieves the highest performance, with an accuracy exceeding 95% when using a neighborhood size of k = 100, while recall, F-score, and Cohen's kappa also demonstrate high success. SHapley Additive exPlanations (SHAP), an explainable AI technique, has been used to evaluate feature importance and improve the explainability of black-box machine learning models in the context of rock discontinuity classification. The analysis reveals that features such as normal vectors, verticality, and Z-values have the greatest influence on identifying the main discontinuity sets, while linearity, planarity, and eigenvalues contribute less, making the model more transparent and easier to understand. After classification, individual discontinuities were detected from the main discontinuity sets using a revised DBSCAN. Finally, the orientation parameters of the plane fitted to each discontinuity were derived from the plane parameters obtained using Random Sample Consensus (RANSAC). Two real-world datasets (obtained from SfM and LiDAR) and one synthetic dataset were used to validate the proposed method, which successfully identified rock discontinuities and their orientation parameters (dip angle/direction).
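The final step above fits a plane to each discontinuity with RANSAC. A minimal generic RANSAC plane fit (not the paper's tuned implementation; iteration count and threshold are illustrative) can be sketched as:

```python
import numpy as np

def ransac_plane(points, n_iter=200, threshold=0.05, rng=None):
    """Minimal RANSAC plane fit: repeatedly fit a plane to 3 random
    points and keep the model with the most inliers."""
    rng = np.random.default_rng() if rng is None else rng
    pts = np.asarray(points, dtype=float)
    best_inliers, best_model = None, None
    for _ in range(n_iter):
        p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-12:
            continue                      # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        dist = np.abs((pts - p0) @ n)     # point-to-plane distances
        inliers = dist < threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (p0, n)
    return best_model, best_inliers

rng = np.random.default_rng(42)
plane = np.c_[rng.random((100, 2)) * 10, np.zeros(100)]   # 100 points on z = 0
outliers = rng.random((10, 3)) * 10                       # scattered noise
(point, normal), inliers = ransac_plane(np.vstack([plane, outliers]), rng=rng)
print(inliers.sum() >= 100)  # all plane points recovered as inliers
```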
基金supported by Grant (PLN2022-14) of the State Key Laboratory of Oil and Gas Reservoir Geology and Exploitation (Southwest Petroleum University).
文摘Well logging technology has accumulated a large amount of historical data through four generations of technological development, which forms the basis of well logging big data and digital assets. However, the value of these data has not been well stored, managed, and mined. The development of cloud computing technology provides a rare opportunity for a logging big data private cloud. The traditional petrophysical evaluation and interpretation model has encountered great challenges when facing new evaluation objects, and research on solutions that integrate distributed storage, processing, and learning functions in a logging big data private cloud has not yet been carried out. This study establishes a distributed logging big data private cloud platform centered on a unified learning model, which achieves distributed storage and processing of logging big data and facilitates the learning of novel knowledge patterns via a unified logging learning model integrating physical simulation and data models in a large-scale function space, thus resolving the geo-engineering evaluation problem of geothermal fields. Based on the research idea of "logging big data cloud platform - unified logging learning model - large function space - knowledge learning and discovery - application", the theoretical foundation of the unified learning model, the cloud platform architecture, data storage and learning algorithms, computing power allocation and platform monitoring, platform stability, and data security are analyzed. The designed logging big data cloud platform realizes parallel distributed storage and processing of data and learning algorithms. The feasibility of constructing a well logging big data cloud platform based on a unified learning model of physics and data is analyzed in terms of the structure, ecology, management, and security of the cloud platform. The case study shows that the logging big data cloud platform has obvious technical advantages over traditional logging evaluation methods in terms of knowledge discovery method, data, software, and results sharing, accuracy, speed, and complexity.
基金supported by the NTNU Digital project [grant number 81771593].
文摘Building outline extraction from segmented point clouds is a critical step of building footprint generation. Existing methods for this task are often based on the convex hull and the α-shape algorithm. There are also some methods using grids and Delaunay triangulation. The common challenge of these methods is the determination of proper parameters. While deep learning-based methods have shown promise in reducing the impact of and dependence on parameter selection, their reliance on datasets with ground truth information limits their generalization. In this study, a novel unsupervised approach, called PH-shape, is proposed to address the aforementioned challenge. The methods of Persistence Homology (PH) and the Fourier descriptor are introduced into the task of building outline extraction. PH, from the theory of topological data analysis, supports the automatic and adaptive determination of a proper buffer radius, thus enabling the parameter-adaptive extraction of building outlines through buffering and “inverse” buffering. The quantitative and qualitative experimental results on two datasets with different point densities demonstrate the effectiveness of the proposed approach in the face of various building types, interior boundaries, and density variation in the point cloud data of a single building. The PH-supported parameter adaptivity helps the proposed approach overcome the challenge of parameter determination and data variation and achieve reliable extraction of building outlines.
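The Fourier descriptor mentioned above encodes a closed outline compactly in the frequency domain. A generic sketch (not the paper's exact formulation): treat boundary points as complex numbers, drop the DC term for translation invariance, and keep the lowest-frequency magnitudes as a shape code.

```python
import numpy as np

def fourier_descriptor(outline, n_coeffs=8):
    """Translation-invariant Fourier descriptor of a closed 2D outline:
    boundary points become complex numbers, the mean (DC term) is removed,
    and the lowest-frequency coefficient magnitudes form the shape code."""
    z = np.asarray(outline, dtype=float) @ np.array([1.0, 1j])
    coeffs = np.fft.fft(z - z.mean())     # removing the mean drops translation
    return np.abs(coeffs[1:n_coeffs + 1]) / len(z)

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
shifted = [(x + 5, y - 3) for x, y in square]
print(np.allclose(fourier_descriptor(square), fourier_descriptor(shifted)))  # True
```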
基金supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (RS-2024-00399401, Development of Quantum-Safe Infrastructure Migration and Quantum Security Verification Technologies).
文摘With the rise of remote collaboration, the demand for advanced storage and collaboration tools has rapidly increased. However, traditional collaboration tools primarily rely on access control, leaving data stored on cloud servers vulnerable due to insufficient encryption. This paper introduces a novel mechanism that encrypts data in ‘bundle’ units, designed to meet the dual requirements of efficiency and security for frequently updated collaborative data. Each bundle includes updated information, allowing only the updated portions to be re-encrypted when changes occur. The encryption method proposed in this paper addresses the inefficiencies of traditional encryption modes, such as Cipher Block Chaining (CBC) and Counter (CTR), which require decrypting and re-encrypting the entire dataset whenever updates occur. The proposed method leverages update-specific information embedded within data bundles and metadata that maps the relationship between these bundles and the plaintext data. By utilizing this information, the method accurately identifies the modified portions and applies algorithms to selectively re-encrypt only those sections. This approach significantly enhances the efficiency of data updates while maintaining high performance, particularly in large-scale data environments. To validate this approach, we conducted experiments measuring execution time as both the size of the modified data and the total dataset size varied. The results show that the proposed method significantly outperforms the CBC and CTR modes in execution speed, with greater performance gains as data size increases. Additionally, our security evaluation confirms that this method provides robust protection against both passive and active attacks.
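The bundle idea can be illustrated with a deliberately simple toy: give each bundle its own keystream so an edit touches only that bundle's ciphertext. This is NOT the paper's cipher and is not cryptographically secure; it only demonstrates why per-bundle encryption avoids re-encrypting the whole file.

```python
import hashlib

def encrypt_bundle(data, key, index):
    """Toy bundle cipher (illustrative only, NOT secure): XOR with a
    per-bundle keystream derived from the key and bundle index, so an
    update recomputes only the modified bundle's ciphertext. XOR is its
    own inverse, so the same call also decrypts."""
    stream = hashlib.sha256(key + index.to_bytes(4, "big")).digest()
    assert len(data) <= len(stream)          # keep the toy within one block
    return bytes(b ^ s for b, s in zip(data, stream))

key = b"demo-key"
bundles = [b"intro", b"draft body", b"summary"]
ct = [encrypt_bundle(d, key, i) for i, d in enumerate(bundles)]

# A collaborator edits bundle 1: only that ciphertext is recomputed.
old_ct0 = ct[0]
ct[1] = encrypt_bundle(b"final body", key, 1)
print(ct[0] == old_ct0)               # True: bundle 0's ciphertext is untouched
print(encrypt_bundle(ct[1], key, 1))  # applying XOR again decrypts
```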
基金National Natural Science Foundation of China (No. 41801379), the Fundamental Research Funds for the Central Universities (No. 2019B08414), and the National Key R&D Program of China (No. 2016YFC0401801).
文摘Tunnel deformation monitoring is a crucial task to evaluate tunnel stability during the metro operation period. As an innovative technique, Terrestrial Laser Scanning (TLS) can collect high-density and high-accuracy point cloud data in a few minutes, which provides promising applications in tunnel deformation monitoring. Here, an efficient method for extracting tunnel cross-sections and convergence analysis using dense TLS point cloud data is proposed. First, the tunnel orientation is determined using principal component analysis (PCA) in the Euclidean plane. Two control points are introduced to detect and remove the unsuitable points by using point cloud division, and then the ground points are removed by defining an elevation value width of 0.5 m. Next, a z-score method is introduced to detect and remove the outliers. Because the standard shape of the tunnel cross-section is round, circle fitting is implemented using the least-squares method. Afterward, the convergence analysis is made at the angles of 0°, 30°, and 150°. The feasibility of the proposed approach is tested on a TLS point cloud of a Nanjing subway tunnel acquired using a FARO X330 laser scanner. The results indicate that the proposed methodology achieves an overall accuracy of 1.34 mm, which is in agreement with the measurements acquired by a total station instrument. The proposed methodology provides new insights and references for the applications of TLS in tunnel deformation monitoring and can also be extended to other engineering applications.
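The least-squares circle fitting step can be sketched with the algebraic (Kasa) formulation, one common linear least-squares approach (the abstract does not specify which variant the authors use): rewrite (x-a)² + (y-b)² = r² as the linear system x² + y² = 2ax + 2by + c with c = r² - a² - b².

```python
import numpy as np

def fit_circle(xy):
    """Algebraic (Kasa) least-squares circle fit: solve
    x^2 + y^2 = 2*a*x + 2*b*y + c for center (a, b) and radius."""
    xy = np.asarray(xy, dtype=float)
    A = np.c_[2 * xy, np.ones(len(xy))]      # columns: 2x, 2y, 1
    rhs = (xy ** 2).sum(axis=1)              # x^2 + y^2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + a * a + b * b)
    return (a, b), r

# Noise-free points on a circle of radius 2.75 m centered at (1, -2).
t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
pts = np.c_[1 + 2.75 * np.cos(t), -2 + 2.75 * np.sin(t)]
center, radius = fit_circle(pts)
print(np.round(center, 6), round(float(radius), 6))
```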
基金supported in part by the National Key R&D Program of China (Grant No. 2016YFC0401908).
文摘Landslides are one of the most disastrous geological hazards in southwestern China. Once a landslide becomes unstable, it threatens the lives and safety of local residents. However, empirical studies on landslides have predominantly focused on landslides that occur on land. To this end, we aim to investigate ashore and underwater landslide data synchronously. This study proposes an optimized mosaicking method for ashore and underwater landslide data. This method fuses an airborne laser point cloud with multi-beam depth sounder images. Owing to their relatively high efficiency and large coverage area, airborne laser measurement systems are suitable for emergency investigations of landslides. Based on the airborne laser point cloud, the traversal of the point with the lowest elevation value in the point set can be used to perform rapid extraction of the crude channel boundaries. Further meticulous extraction of the channel boundaries is then implemented using the probability mean value optimization method. In addition, synthesis of the integrated ashore and underwater landslide data is realized using the spatial guide line between the channel boundaries and the underwater multibeam sonar images. A landslide located on the right bank of the middle reaches of the Yalong River is selected as a case study to demonstrate that the proposed method has higher precision than traditional methods. The experimental results show that the mosaicking method in this study can meet the basic needs of landslide modeling and provide a basis for qualitative and quantitative analysis and stability prediction of landslides.
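The crude channel-boundary extraction above traverses the point with the lowest elevation in each point set. A crude stand-in for that idea (the binning axis and bin size are our assumptions, not the paper's): bin points along one axis and keep the lowest-elevation point per bin.

```python
from collections import defaultdict

def lowest_points_per_bin(points, bin_size=1.0):
    """Bin (x, y, z) points along x and keep the lowest-elevation point
    in each bin, mimicking the lowest-point traversal used for rapid
    channel-boundary extraction from an airborne laser point cloud."""
    bins = defaultdict(list)
    for x, y, z in points:
        bins[int(x // bin_size)].append((x, y, z))
    return [min(pts, key=lambda p: p[2]) for _, pts in sorted(bins.items())]

cloud = [(0.2, 5.0, 102.0), (0.7, 4.0, 99.5),   # bin 0: riverbed at z = 99.5
         (1.1, 6.0, 101.0), (1.8, 3.0, 98.7)]   # bin 1: riverbed at z = 98.7
channel = lowest_points_per_bin(cloud)
print([p[2] for p in channel])  # [99.5, 98.7]
```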