Airborne LiDAR (Light Detection and Ranging) is an evolving high-tech active remote sensing technology capable of acquiring large-area topographic data and quickly generating DEM (Digital Elevation Model) products. Combined with image data, this technology can further enrich and extract spatial geographic information. In practice, however, due to the limited operating range of airborne LiDAR and the large task areas involved, it is necessary to register and stitch the point clouds of adjacent flight strips. By eliminating gross errors, the systematic errors in the data can be effectively reduced. This paper therefore investigates point cloud registration methods in urban building areas, aiming to improve the accuracy and processing efficiency of airborne LiDAR data. An improved post-ICP (Iterative Closest Point) point cloud registration method is proposed to achieve accurate registration and efficient stitching of point clouds, providing potential technical support for practitioners in related fields.
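The abstract does not spell out the improved post-ICP algorithm itself. As a baseline reference only, the sketch below aligns two overlapping strips with a standard point-to-plane ICP in Open3D; the file names, normal-estimation radius, and 1 m correspondence threshold are illustrative assumptions.

```python
import numpy as np
import open3d as o3d

# Hypothetical inputs: two overlapping flight-strip point clouds.
source = o3d.io.read_point_cloud("strip_a.ply")
target = o3d.io.read_point_cloud("strip_b.ply")

# Point-to-plane ICP needs normals on the target cloud.
target.estimate_normals(
    o3d.geometry.KDTreeSearchParamHybrid(radius=2.0, max_nn=30))

# Align source onto target within a 1 m correspondence threshold.
result = o3d.pipelines.registration.registration_icp(
    source, target, 1.0, np.eye(4),
    o3d.pipelines.registration.TransformationEstimationPointToPlane())

source.transform(result.transformation)  # stitched strip
print("fitness:", result.fitness, "inlier RMSE:", result.inlier_rmse)
```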
Large-scale point cloud datasets form the basis for training various deep learning networks and achieving high-quality network processing tasks. Because of the diversity and robustness constraints of the data, data augmentation (DA) methods are used to expand dataset diversity and scale. However, because LiDAR point cloud data from different platforms (such as missile-borne and vehicular LiDAR) have complex and distinct characteristics, directly applying traditional 2D visual-domain DA methods to 3D data can leave the trained networks unable to perform the corresponding tasks robustly. To address this issue, the present study explores DA for missile-borne LiDAR point clouds using a Monte Carlo (MC) simulation method that closely resembles practical application. First, a model of the multi-sensor imaging system is established, taking into account the joint errors arising from the platform itself and the relative motion during the imaging process. A distortion simulation method based on MC simulation for augmenting missile-borne LiDAR point cloud data is then proposed, underpinned by an analysis of the combined errors between different modal sensors, achieving high-quality augmentation of point cloud data. The effectiveness of the proposed method in addressing imaging system errors and distortion simulation is validated using the imaging scene dataset constructed in this paper. Comparative experiments against current state-of-the-art algorithms on point cloud detection and single-object tracking tasks demonstrate that the proposed method improves the performance of networks trained on unaugmented datasets by over 17.3% and 17.9%, respectively, surpassing the SOTA performance of current point cloud DA algorithms.
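The paper's joint-error model of the missile-borne imaging chain is not given in the abstract. As a minimal stand-in, the sketch below draws rigid attitude and translation perturbations from Gaussian error budgets in a Monte Carlo loop to produce distorted copies of a cloud; the error magnitudes are hypothetical.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

rng = np.random.default_rng(seed=0)

def mc_augment(points, n_draws=8, ang_sigma_deg=0.2, trans_sigma_m=0.05):
    """Return n_draws rigidly perturbed copies of an (N, 3) point cloud."""
    copies = []
    for _ in range(n_draws):
        # Sample attitude error (roll/pitch/yaw) and translation error.
        rot = R.from_euler("xyz", rng.normal(0.0, ang_sigma_deg, 3),
                           degrees=True).as_matrix()
        shift = rng.normal(0.0, trans_sigma_m, 3)
        copies.append(points @ rot.T + shift)
    return copies

cloud = rng.uniform(-10, 10, size=(1000, 3))  # toy cloud
augmented = mc_augment(cloud)
```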
In the task of inspecting underwater suspended pipelines, multi-beam sonar (MBS) can provide two-dimensional water column images (WCIs). However, systematic interferences (e.g., sidelobe effects) may induce misdetection in WCIs. To address this issue and improve detection accuracy, we developed a density-based clustering method for three-dimensional water column point clouds. During the processing of WCIs, sidelobe effects are mitigated using a bilateral filter and brightness transformation. The cross-sectional point cloud of the pipeline is then extracted using the Canny operator. In the detection phase, the target is identified using density-based spatial clustering of applications with noise (DBSCAN). However, the selection of appropriate DBSCAN parameters is obscured by the uneven distribution of the water column point cloud. To overcome this, we propose an improved DBSCAN based on a parameter interval estimation method (PIE-DBSCAN). First, kernel density estimation (KDE) is used to determine the candidate parameter interval, after which the exact cluster number is determined via density peak clustering (DPC). Finally, the optimal parameters are selected by comparing the mean silhouette coefficients. To validate the performance of PIE-DBSCAN, we collected water column point clouds from an anechoic tank and the South China Sea. PIE-DBSCAN successfully detected both the target points of the suspended pipeline and non-target points on the seafloor surface. Compared with the K-Means and Mean-Shift algorithms, PIE-DBSCAN demonstrates superior clustering performance and shows feasibility in practical applications.
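As a rough analogue of the PIE-DBSCAN parameter search (the abstract's KDE and DPC steps are replaced here by a simple k-distance interval), the sketch below scans candidate eps values and keeps the clustering with the highest mean silhouette coefficient.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics import silhouette_score
from sklearn.neighbors import NearestNeighbors

def select_dbscan(points, min_samples=8, n_candidates=10):
    # Candidate eps interval from the k-distance distribution.
    dists, _ = NearestNeighbors(n_neighbors=min_samples).fit(points).kneighbors(points)
    kdist = dists[:, -1]
    eps_grid = np.linspace(np.quantile(kdist, 0.50),
                           np.quantile(kdist, 0.95), n_candidates)
    best_eps, best_score, best_labels = None, -1.0, None
    for eps in eps_grid:
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
        core = labels >= 0  # ignore noise when scoring
        if core.sum() < 2 or len(np.unique(labels[core])) < 2:
            continue
        score = silhouette_score(points[core], labels[core])
        if score > best_score:
            best_eps, best_score, best_labels = eps, score, labels
    return best_eps, best_score, best_labels
```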
The spatial distribution of discontinuities and the size of rock blocks are key indicators for rock mass quality evaluation and rockfall risk assessment. Traditional manual measurement is often dangerous or impractical at high and steep rock slopes. In contrast, unmanned aerial vehicle (UAV) photogrammetry is not limited by terrain conditions and can efficiently collect high-precision three-dimensional (3D) point clouds of rock masses through all-round, multi-angle photography for rock mass characterization. In this paper, a new method based on a 3D point cloud is proposed for discontinuity identification and refined rock block modeling. The method consists of four steps: (1) establish a point cloud spatial topology, and calculate the point cloud normal vectors and average point spacing based on several machine learning algorithms; (2) extract discontinuities using the density-based spatial clustering of applications with noise (DBSCAN) algorithm and fit the discontinuity planes by combining principal component analysis (PCA) with the natural breaks (NB) method; (3) insert points along line segments to generate an embedded discontinuity point cloud; and (4) adopt Poisson reconstruction for refined rock block modeling. The proposed method was applied to an outcrop of an ultrahigh, steep rock slope and compared with the results of previous studies and manual surveys. The results show that the method can eliminate the influence of discontinuity undulations on orientation measurement and describe local concave-convex characteristics in the modeling of rock blocks. The calculation results are accurate and reliable and can meet the practical requirements of engineering.
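For the PCA plane-fitting step, a common sketch fits each discontinuity cluster by taking the eigenvector of the smallest covariance eigenvalue as the plane normal, then converts the normal to dip and dip direction. The axis convention (x east, y north, z up) is an assumption, not stated in the abstract.

```python
import numpy as np

def fit_discontinuity_plane(points):
    """PCA plane fit for one (N, 3) discontinuity cluster."""
    centroid = points.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov((points - centroid).T))
    normal = eigvecs[:, 0]          # smallest-eigenvalue direction
    if normal[2] < 0:
        normal = -normal            # keep the normal pointing upward
    dip = np.degrees(np.arccos(normal[2]))
    dip_direction = np.degrees(np.arctan2(normal[0], normal[1])) % 360.0
    return centroid, normal, dip, dip_direction
```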
A new object-oriented method has been developed for the extraction of Mars rocks from Mars rover data. It is based on a combination of Mars rover imagery and 3D point cloud data. First, Navcam or Pancam images taken by the Mars rovers are segmented into homogeneous objects with a mean-shift algorithm. Then, the objects in the segmented images are classified into small rock candidates, rock shadows, and large objects. Rock shadows and large objects are considered as the regions within which large rocks may exist. In these regions, large rock candidates are extracted through ground-plane fitting with the 3D point cloud data. Small and large rock candidates are combined and postprocessed to obtain the final rock extraction results. The shape properties of the rocks (angularity, circularity, width, height, and width-height ratio) have been calculated for subsequent geological studies.
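The ground-plane fitting step can be sketched as an ordinary least-squares plane z = ax + by + c, with points standing well above the fitted plane kept as large-rock candidates; the 0.1 m height threshold is an illustrative assumption.

```python
import numpy as np

def rock_candidates(xyz, height_threshold=0.1):
    """xyz: (N, 3) points from one candidate region."""
    A = np.c_[xyz[:, 0], xyz[:, 1], np.ones(len(xyz))]
    coeffs, *_ = np.linalg.lstsq(A, xyz[:, 2], rcond=None)  # fit z = ax + by + c
    residuals = xyz[:, 2] - A @ coeffs
    return xyz[residuals > height_threshold]  # points protruding above the ground plane
```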
In this paper, a novel compression framework based on 3D point cloud data is proposed for telepresence, and it consists of two parts. One part removes spatial redundancy: a robust Bayesian framework is designed to track the human motion, and the 3D point cloud data of the human body are acquired using the tracked 2D box. The other part removes the temporal redundancy of the 3D point cloud data. The temporal redundancy between point clouds is removed using motion vectors: for each cluster in the current frame, the most similar cluster in the previous frame is found by comparing cluster features, and the cluster in the current frame is replaced by a motion vector to compress the current frame. First, the B-SHOT (binary signatures of histograms orientation) descriptor is applied to represent point features for matching corresponding points between two frames. Second, the K-means algorithm is used to generate clusters, because many points in the current frame remain unmatched. The matching operation is then used to find corresponding clusters between the point cloud data of the two frames. Finally, the cluster information in the current frame is replaced by motion vectors to compress the current frame, and the unmatched clusters in the current frame together with the motion vectors are transmitted to the remote end. To reduce the calculation time of the B-SHOT descriptor, we introduce an octree structure into it. In particular, to improve the robustness of the matching operation, we design a cluster feature to estimate the similarity between two clusters. Experimental results show the better performance of the proposed method in terms of lower calculation time and higher compression ratio: it achieves a compression ratio of 8.42 and a delay of 1228 ms, compared with a compression ratio of 5.99 and a delay of 2163 ms for the octree-based compression method under similar distortion rates.
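A much-reduced sketch of the temporal part is shown below: both frames are clustered with K-means, each current cluster is matched to the most similar previous cluster, and the match is encoded as an index plus a motion vector. Plain centroid distance stands in here for the paper's B-SHOT-based cluster feature.

```python
import numpy as np
from sklearn.cluster import KMeans

def encode_motion_vectors(prev_pts, curr_pts, k=32):
    prev = KMeans(n_clusters=k, n_init=4, random_state=0).fit(prev_pts)
    curr = KMeans(n_clusters=k, n_init=4, random_state=0).fit(curr_pts)
    encoded = []
    for centroid in curr.cluster_centers_:
        # Most similar cluster in the previous frame (centroid distance here;
        # the paper uses a dedicated cluster feature for this comparison).
        j = int(np.argmin(np.linalg.norm(prev.cluster_centers_ - centroid, axis=1)))
        encoded.append((j, centroid - prev.cluster_centers_[j]))
    return encoded  # (previous-cluster index, motion vector) per current cluster
```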
Well logging technology has accumulated a large amount of historical data through four generations of technological development, forming the basis of well logging big data and digital assets. However, the value of these data has not been well stored, managed, or mined. The development of cloud computing technology provides a rare opportunity for a logging big data private cloud. The traditional petrophysical evaluation and interpretation model has encountered great challenges when facing new evaluation objects, and research on integrating distributed storage, processing, and learning functions into a logging big data private cloud has not yet been carried out. This paper aims to establish a distributed logging big data private cloud platform centered on a unified learning model, which achieves distributed storage and processing of logging big data and facilitates the learning of novel knowledge patterns via a unified logging learning model that integrates physical simulation and data models in a large-scale function space, thereby resolving the geo-engineering evaluation problem of geothermal fields. Following the research idea of "logging big data cloud platform - unified logging learning model - large function space - knowledge learning & discovery - application", the theoretical foundations of the unified learning model, the cloud platform architecture, data storage and learning algorithms, computing power allocation and platform monitoring, platform stability, and data security are analyzed. The designed logging big data cloud platform realizes parallel distributed storage and processing of data and learning algorithms. The feasibility of constructing a well logging big data cloud platform based on a unified learning model of physics and data is analyzed in terms of the structure, ecology, management, and security of the cloud platform. The case study shows that the logging big data cloud platform has obvious technical advantages over traditional logging evaluation methods in terms of knowledge discovery methods, sharing of data, software, and results, accuracy, speed, and complexity.
With the rise of remote collaboration, the demand for advanced storage and collaboration tools has rapidly increased. However, traditional collaboration tools primarily rely on access control, leaving data stored on cloud servers vulnerable due to insufficient encryption. This paper introduces a novel mechanism that encrypts data in 'bundle' units, designed to meet the dual requirements of efficiency and security for frequently updated collaborative data. Each bundle includes update information, allowing only the updated portions to be re-encrypted when changes occur. The encryption method proposed in this paper addresses the inefficiencies of traditional encryption modes, such as Cipher Block Chaining (CBC) and Counter (CTR), which require decrypting and re-encrypting the entire dataset whenever updates occur. The proposed method leverages update-specific information embedded within data bundles and metadata that maps the relationship between these bundles and the plaintext data. By utilizing this information, the method accurately identifies the modified portions and applies algorithms to selectively re-encrypt only those sections. This approach significantly improves the efficiency of data updates while maintaining high performance, particularly in large-scale data environments. To validate the approach, we conducted experiments measuring execution time as both the size of the modified data and the total dataset size varied. The results show that the proposed method significantly outperforms CBC and CTR modes in execution speed, with greater performance gains as data size increases. Additionally, our security evaluation confirms that the method provides robust protection against both passive and active attacks.
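A minimal sketch of per-bundle encryption is shown below, assuming AES-GCM as the cipher (the paper's exact scheme and metadata layout are not specified in the abstract). Each bundle is sealed under its own nonce, so an update re-encrypts only the bundle that changed.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(key)

def encrypt_bundles(bundles):
    """Encrypt each plaintext bundle independently under a fresh nonce."""
    out = []
    for bundle in bundles:
        nonce = os.urandom(12)
        out.append((nonce, aead.encrypt(nonce, bundle, None)))
    return out

def update_bundle(encrypted, index, new_plaintext):
    """Re-encrypt only the modified bundle; all others stay untouched."""
    nonce = os.urandom(12)
    encrypted[index] = (nonce, aead.encrypt(nonce, new_plaintext, None))

enc = encrypt_bundles([b"chunk-0", b"chunk-1", b"chunk-2"])
update_bundle(enc, 1, b"chunk-1-edited")
```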
For the first time, this article introduces a LiDAR Point Clouds Dataset of Ships composed of both collected and simulated data to address the scarcity of LiDAR data in maritime applications. The collected data are acquired using specialized maritime LiDAR sensors in both inland waterways and wide-open ocean environments. The simulated data are generated by placing a ship in the LiDAR coordinate system and scanning it with a redeveloped Blensor that emulates the operation of a LiDAR sensor equipped with various laser beams. Furthermore, we also render point clouds for foggy and rainy weather conditions. To describe a realistic shipping environment, a dynamic tail wave is modeled by iterating the wave elevation of each point in a time series. Finally, networks designed for small objects are migrated to ship applications by feeding them our dataset. The positive effect of the simulated data is demonstrated in object detection experiments, and the negative impact of tail waves as noise is verified in single-object tracking experiments. The dataset is available at https://github.com/zqy411470859/ship_dataset.
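The dynamic tail-wave idea can be illustrated with a single directional plane wave whose elevation is iterated over a time series; the amplitude, wavenumber, frequency, and heading below are illustrative values, not the paper's.

```python
import numpy as np

def wave_elevation(x, y, t, amp=0.3, k=0.5, omega=1.2, heading_rad=0.0):
    """Elevation of a directional plane wave at points (x, y) and time t."""
    phase = k * (x * np.cos(heading_rad) + y * np.sin(heading_rad)) - omega * t
    return amp * np.sin(phase)

# Displace water-surface points frame by frame to animate the tail wave.
xy = np.random.default_rng(1).uniform(-20, 20, size=(5000, 2))
frames = [np.c_[xy, wave_elevation(xy[:, 0], xy[:, 1], t)]
          for t in np.arange(0.0, 5.0, 0.1)]
```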
Tunnel deformation monitoring is a crucial task for evaluating tunnel stability during the metro operation period. As an innovative technique, Terrestrial Laser Scanning (TLS) can collect high-density, high-accuracy point cloud data in a few minutes, which provides promising applications in tunnel deformation monitoring. Here, an efficient method for extracting tunnel cross-sections and performing convergence analysis using dense TLS point cloud data is proposed. First, the tunnel orientation is determined using principal component analysis (PCA) in the Euclidean plane. Two control points are introduced to detect and remove unsuitable points via point cloud division, and the ground points are then removed by defining an elevation band of 0.5 m. Next, a z-score method is introduced to detect and remove outliers. Because the standard shape of a tunnel cross-section is round, circle fitting is implemented using the least-squares method. Afterward, convergence analysis is performed at angles of 0°, 30°, and 150°. The feasibility of the proposed approach is tested on a TLS point cloud of a Nanjing subway tunnel acquired with a FARO X330 laser scanner. The results indicate that the proposed methodology achieves an overall accuracy of 1.34 mm, in agreement with measurements acquired by a total station instrument. The proposed methodology provides new insights and references for applications of TLS in tunnel deformation monitoring and can also be extended to other engineering applications.
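The z-score filter and least-squares circle fit can be sketched as follows; the circle uses the algebraic (Kåsa) formulation x² + y² + Dx + Ey + F = 0, which gives a closed-form center and radius. The 3-sigma threshold is an assumed default.

```python
import numpy as np

def zscore_filter(values, threshold=3.0):
    """Keep samples whose z-score magnitude is below the threshold."""
    z = (values - values.mean()) / values.std()
    return np.abs(z) < threshold

def fit_circle(xy):
    """Algebraic least-squares circle fit to (N, 2) cross-section points."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.c_[x, y, np.ones_like(x)]
    b = -(x**2 + y**2)
    D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = -D / 2.0, -E / 2.0
    radius = np.sqrt(cx**2 + cy**2 - F)
    return (cx, cy), radius
```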
Landslides are among the most disastrous geological hazards in southwestern China. Once a landslide becomes unstable, it threatens the lives and safety of local residents. However, empirical studies have predominantly focused on landslides that occur on land. To this end, we aim to investigate ashore and underwater landslide data synchronously. This study proposes an optimized mosaicking method for ashore and underwater landslide data that fuses an airborne laser point cloud with multi-beam depth sounder images. Owing to their relatively high efficiency and large coverage area, airborne laser measurement systems are suitable for emergency investigations of landslides. Based on the airborne laser point cloud, traversing the lowest-elevation point in each point set enables rapid extraction of the crude channel boundaries. Further meticulous extraction of the channel boundaries is then implemented using a probability mean value optimization method. In addition, synthesis of the integrated ashore and underwater landslide data is realized using the spatial guide line between the channel boundaries and the underwater multibeam sonar images. A landslide located on the right bank of the middle reaches of the Yalong River is selected as a case study to demonstrate that the proposed method achieves higher precision than traditional methods. The experimental results show that the proposed mosaicking method can meet the basic needs of landslide modeling and provide a basis for qualitative and quantitative analysis and stability prediction of landslides.
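The crude channel-boundary extraction can be sketched by binning the cloud along the channel axis and keeping the lowest-elevation point per bin; the code assumes the cloud has already been rotated so the x-axis runs along the channel.

```python
import numpy as np

def crude_channel_line(points, n_bins=200):
    """points: (N, 3) cloud with x along the channel; returns lowest point per bin."""
    edges = np.linspace(points[:, 0].min(), points[:, 0].max(), n_bins + 1)
    bin_ids = np.digitize(points[:, 0], edges)
    lows = []
    for b in np.unique(bin_ids):
        chunk = points[bin_ids == b]
        lows.append(chunk[np.argmin(chunk[:, 2])])  # lowest elevation in this bin
    return np.asarray(lows)
```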
BIM (building information modelling) has gained wide acceptance in the A/E/C (architecture/engineering/construction) industry in the US and internationally. This paper presents current industry approaches for implementing 3D point cloud data in BIM and VDC (virtual design and construction) applications during various stages of a project life cycle, along with the challenges associated with processing the huge amount of 3D point cloud data. Conversion from discrete 3D point cloud raster data to geometric/vector BIM data remains a labor-intensive process. The need for intelligent geometric feature detection/reconstruction algorithms for automated point cloud processing and issues related to data management are discussed. This paper also presents an innovative approach for integrating 3D point cloud data with BIM to efficiently augment built environment design, construction, and management.
With the continuous advancement of the tiered diagnosis and treatment system, the medical consortium model has gained increasing attention as an important approach to promoting the vertical integration of healthcare resources. Within this context, laboratory data, as a key component of healthcare information systems, urgently requires efficient sharing and intelligent analysis. This paper designs and constructs an intelligent early warning system for laboratory data based on a cloud platform tailored to the medical consortium model. Through standardized data formats and unified access interfaces, the system enables the integration and cleaning of laboratory data across multiple healthcare institutions. By combining medical rule sets with machine learning models, the system achieves graded alerts and rapid responses to abnormal key indicators and potential outbreaks of infectious diseases. Practical deployment results demonstrate that the system significantly improves the utilization efficiency of laboratory data, strengthens public health event monitoring, and optimizes inter-institutional collaboration. The paper also discusses challenges encountered during system implementation, such as inconsistent data standards, security and compliance concerns, and model interpretability, and proposes corresponding optimization strategies. These findings provide a reference for the broader application of intelligent medical early warning systems.
Perceptual quality assessment for point clouds is critical for an immersive metaverse experience and is a challenging task. First, a point cloud is formed by unstructured 3D points, which makes its topology complex. Second, quality impairment generally involves both geometric attributes and color properties, which makes the measurement of geometric distortion more complex. We propose a perceptual point cloud quality assessment model that follows the perceptual features of the Human Visual System (HVS) and the intrinsic characteristics of the point cloud. The point cloud is first pre-processed to extract geometric skeleton keypoints with graph filtering-based re-sampling, and local neighboring regions around the geometric skeleton keypoints are constructed by K-Nearest Neighbors (KNN) clustering. For geometric distortion, the Point Feature Histogram (PFH) is extracted as the feature descriptor, and the Earth Mover's Distance (EMD) between the PFHs of corresponding local neighboring regions in the reference and distorted point clouds is calculated as the geometric quality measurement. For color distortion, the statistical moments between corresponding local neighboring regions are computed as the color quality measurement. Finally, the global perceptual quality assessment model is obtained as the linear weighted aggregation of the geometric and color quality measurements. Experimental results on extensive datasets show that the proposed method achieves leading performance compared with state-of-the-art methods with less computing time. The experimental results also demonstrate the robustness of the proposed method across various distortion types. The source codes are available at https://github.com/llsurreal919/Point Cloud Quality Assessment.
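With per-region descriptors in hand, the geometric term reduces to averaging an Earth Mover's Distance over matched regions. The sketch below compares simple per-region feature histograms (a stand-in for full PFHs, which are not reproduced here) using SciPy's 1-D Wasserstein distance.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def geometric_quality(ref_hists, dist_hists, bin_edges):
    """ref_hists, dist_hists: (m, b) non-empty histograms for m matched regions."""
    centers = 0.5 * (bin_edges[:-1] + bin_edges[1:])
    emds = [wasserstein_distance(centers, centers, r, d)
            for r, d in zip(ref_hists, dist_hists)]
    return float(np.mean(emds))  # lower means less geometric distortion
```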
Mapping and analyzing rock mass discontinuities based on a 3D (three-dimensional) point cloud (3DPC) is one of the most important tasks in engineering geomechanical surveys. To efficiently analyze the distribution of discontinuities, a self-developed code termed the cloud-group-cluster (CGC) method, implemented in MATLAB for mapping and detecting discontinuities from 3DPC, is introduced. The identification and optimization of discontinuity groups are performed using three key parameters, i.e., K, θ, and f, and a sensitivity analysis approach for identifying the optimal key parameters is introduced. The results show that comprehensive analysis of the main discontinuity groups, mean orientations, and densities can be achieved automatically. The accuracy of the CGC method was validated using tetrahedral and hexahedral models. The 3D point cloud data were divided into three levels (point cloud, group, and cluster) for analysis, and this three-level distribution recognition was applied to natural rock surfaces. The densities and spacing information of the principal discontinuities were automatically detected using the CGC method. Five engineering case studies were conducted to validate the CGC method, showing its applicability in detecting rock discontinuities from 3DPC models.
The goal of point cloud completion is to reconstruct raw scanned point clouds acquired from incomplete observations due to occlusion and restricted viewpoints. Numerous methods use a partial-to-complete framework, directly predicting missing components via global characteristics extracted from incomplete inputs. However, this makes detail recovery challenging, as global characteristics fail to provide complete specifics of the missing components. A new point cloud completion method named Point-PC is proposed. A memory network and a causal inference model are separately designed to introduce shape priors and to select absent shape information as supplementary geometric factors for aiding completion. Concretely, a memory mechanism is proposed to store complete shape features and their associated shapes in a key-value format. The authors design a pre-training strategy that uses contrastive learning to map incomplete shape features into the complete shape feature domain, enabling retrieval of analogous shapes from incomplete inputs. In addition, the authors employ backdoor adjustment to eliminate confounders, which are shape prior components sharing identical semantic structures with the incomplete inputs. Experiments conducted on three datasets show that the method achieves superior performance compared with state-of-the-art approaches. The code for Point-PC can be accessed at https://github.com/bizbard/Point-PC.git.
Existing reverse-engineering methods struggle to directly generate editable, parametric CAD models from scanned data. To address this limitation, this paper proposes a reverse-modeling approach that reconstructs parametric CAD models from multi-view RGB-D point clouds. Multi-frame point cloud registration and fusion are first employed to obtain a complete 3D point cloud of the target object. A region-growing algorithm that jointly exploits color and geometric information segments the cloud, while RANSAC robustly detects and fits basic geometric primitives. These primitives serve as nodes in a graph whose edge features are inferred by a graph neural network to capture spatial constraints. From the detected primitives and their constraints, a high-accuracy, fully editable parametric CAD model is finally exported. Experiments show an average parameter error of 0.3 mm for key dimensions and an overall geometric reconstruction accuracy of 0.35 mm. The work offers an effective technical route toward automated, intelligent 3D reverse modeling.
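The RANSAC primitive-detection step can be illustrated with Open3D's built-in plane segmentation, peeling off one dominant plane at a time; the input file, thresholds, and stopping criterion are illustrative assumptions, and the paper's method also handles non-planar primitives.

```python
import open3d as o3d

pcd = o3d.io.read_point_cloud("fused_scan.ply")  # hypothetical fused cloud

planes = []
rest = pcd
for _ in range(4):  # peel off up to four dominant planes
    model, inliers = rest.segment_plane(distance_threshold=0.002,
                                        ransac_n=3, num_iterations=1000)
    planes.append(model)  # [a, b, c, d] of ax + by + cz + d = 0
    rest = rest.select_by_index(inliers, invert=True)
    if len(rest.points) < 100:
        break
```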
Understanding the conformational characteristics of polymers is key to elucidating their physical properties. Cyclic polymers, defined by their closed-loop structures, inherently differ from linear polymers, which possess distinct chain ends. Despite these structural differences, both types of polymers exhibit locally random-walk-like conformations, making it challenging to detect subtle spatial variations using conventional methods. In this study, we address this challenge by integrating molecular dynamics simulations with point cloud neural networks to analyze the spatial conformations of cyclic and linear polymers. Using the Dynamic Graph CNN (DGCNN) model, we classify polymer conformations based on the 3D coordinates of the monomers, capturing local and global topological differences without considering the sequentiality of chain connectivity. Our findings reveal that the optimal size of the local structural feature unit scales linearly with molecular weight, in line with theoretical predictions. Additionally, interpretability techniques such as Grad-CAM and SHAP identify significant conformational differences: cyclic polymers tend to form prolate ellipsoidal shapes with pronounced elongation along the major axis, while linear polymers show elongated ends with more spherical centers. These findings reveal subtle yet critical differences in local conformation between cyclic and linear polymers that were previously difficult to discern, providing deeper insights into polymer structure-property relationships and offering guidance for future advances in polymer science.
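The prolate-versus-spherical distinction surfaced by the interpretability analysis can also be quantified directly with the standard gyration tensor of a conformation; a positive asphericity indicates elongation along the major axis. This descriptor is offered here as a complement to the network-based analysis, not as part of the paper's pipeline.

```python
import numpy as np

def shape_descriptors(coords):
    """coords: (N, 3) monomer positions of one conformation."""
    centered = coords - coords.mean(axis=0)
    S = centered.T @ centered / len(centered)       # gyration tensor
    l3, l2, l1 = np.sort(np.linalg.eigvalsh(S))     # l1 >= l2 >= l3
    rg_squared = l1 + l2 + l3                       # squared radius of gyration
    asphericity = l1 - 0.5 * (l2 + l3)              # > 0 for prolate shapes
    return rg_squared, asphericity
```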
Few-shot point cloud 3D object detection (FS3D) aims to identify and locate objects of novel classes within point clouds using knowledge acquired from annotated base classes and a minimal number of samples from the novel classes. Because of imbalanced training data, existing FS3D methods based on fully supervised learning can overfit toward the base classes, which impairs the network's ability to generalize knowledge learned from base classes to novel classes and also prevents the network from extracting distinctive foreground and background representations for novel-class objects. To address these issues, this thesis proposes a category-agnostic contrastive learning approach, enhancing generalization and identification abilities for nearly unseen categories through the construction of pseudo-labels and positive-negative sample pairs unrelated to specific classes. First, this thesis designs a proposal-wise context contrastive module (CCM). By reducing the distance between foreground point features and increasing the distance between foreground and background point features within a region proposal, CCM helps the network extract more discriminative foreground and background feature representations without relying on categorical annotations. Second, this thesis utilizes a geometric contrastive module (GCM), which enhances the network's geometric perception by applying contrastive learning to the foreground point features associated with various basic geometric components, such as edges, corners, and surfaces, thereby enabling these components to exhibit more distinguishable representations. This thesis also combines category-aware contrastive learning with the former modules to maintain categorical distinctiveness. Extensive experimental results on the FS-SUNRGBD and FS-ScanNet datasets demonstrate the effectiveness of the method, with average precision exceeding the baseline by up to 8%.
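The contrastive objective used in both modules can be summarized by an InfoNCE-style loss over one anchor, one positive, and a set of negatives; the sketch below is a generic formulation of that loss, not the thesis's exact objective.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.07):
    """anchor, positive: (d,); negatives: (n, d); all L2-normalized."""
    pos_logit = anchor @ positive / tau
    neg_logits = negatives @ anchor / tau
    # -log softmax of the positive against the negatives
    logits = np.concatenate(([pos_logit], neg_logits))
    return -(pos_logit - np.log(np.exp(logits).sum()))
```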
基金Guangxi Key Laboratory of Spatial Information and Geomatics(21-238-21-12)Guangxi Young and Middle-aged Teachers’Research Fundamental Ability Enhancement Project(2023KY1196).
文摘Airborne LiDAR(Light Detection and Ranging)is an evolving high-tech active remote sensing technology that has the capability to acquire large-area topographic data and can quickly generate DEM(Digital Elevation Model)products.Combined with image data,this technology can further enrich and extract spatial geographic information.However,practically,due to the limited operating range of airborne LiDAR and the large area of task,it would be necessary to perform registration and stitching process on point clouds of adjacent flight strips.By eliminating grow errors,the systematic errors in the data need to be effectively reduced.Thus,this paper conducts research on point cloud registration methods in urban building areas,aiming to improve the accuracy and processing efficiency of airborne LiDAR data.Meanwhile,an improved post-ICP(Iterative Closest Point)point cloud registration method was proposed in this study to determine the accurate registration and efficient stitching of point clouds,which capable to provide a potential technical support for applicants in related field.
基金Postgraduate Innovation Top notch Talent Training Project of Hunan Province,Grant/Award Number:CX20220045Scientific Research Project of National University of Defense Technology,Grant/Award Number:22-ZZCX-07+2 种基金New Era Education Quality Project of Anhui Province,Grant/Award Number:2023cxcysj194National Natural Science Foundation of China,Grant/Award Numbers:62201597,62205372,1210456foundation of Hefei Comprehensive National Science Center,Grant/Award Number:KY23C502。
文摘Large-scale point cloud datasets form the basis for training various deep learning networks and achieving high-quality network processing tasks.Due to the diversity and robustness constraints of the data,data augmentation(DA)methods are utilised to expand dataset diversity and scale.However,due to the complex and distinct characteristics of LiDAR point cloud data from different platforms(such as missile-borne and vehicular LiDAR data),directly applying traditional 2D visual domain DA methods to 3D data can lead to networks trained using this approach not robustly achieving the corresponding tasks.To address this issue,the present study explores DA for missile-borne LiDAR point cloud using a Monte Carlo(MC)simulation method that closely resembles practical application.Firstly,the model of multi-sensor imaging system is established,taking into account the joint errors arising from the platform itself and the relative motion during the imaging process.A distortion simulation method based on MC simulation for augmenting missile-borne LiDAR point cloud data is proposed,underpinned by an analysis of combined errors between different modal sensors,achieving high-quality augmentation of point cloud data.The effectiveness of the proposed method in addressing imaging system errors and distortion simulation is validated using the imaging scene dataset constructed in this paper.Comparative experiments between the proposed point cloud DA algorithm and the current state-of-the-art algorithms in point cloud detection and single object tracking tasks demonstrate that the proposed method can improve the network performance obtained from unaugmented datasets by over 17.3%and 17.9%,surpassing SOTA performance of current point cloud DA algorithms.
基金the National Natural Science Foundation of China(Nos.42176188,42176192)the Hainan Provincial Natural Science Foundation of China(No.421CXTD442)+2 种基金the Stable Supporting Fund of Acoustic Science and Technology Laboratory(No.JCKYS2024604SSJS007)the Fundamental Research Funds for the Central Universities(No.3072024CFJ0504)the Harbin Engineering University Doctoral Research and Innovation Fund(No.XK2050021034)。
文摘In the task of inspecting underwater suspended pipelines,multi-beam sonar(MBS)can provide two-dimensional water column images(WCIs).However,systematic interferences(e.g.,sidelobe effects)may induce misdetection in WCIs.To address this issue and improve the accuracy of detection,we developed a density-based clustering method for three-dimensional water column point clouds.During the processing of WCIs,sidelobe effects are mitigated using a bilateral filter and brightness transformation.The cross-sectional point cloud of the pipeline is then extracted by using the Canny operator.In the detection phase,the target is identified by using density-based spatial clustering of applications with noise(DBSCAN).However,the selection of appropriate DBSCAN parameters is obscured by the uneven distribution of the water column point cloud.To overcome this,we propose an improved DBSCAN based on a parameter interval estimation method(PIE-DBSCAN).First,kernel density estimation(KDE)is used to determine the candidate interval of parameters,after which the exact cluster number is determined via density peak clustering(DPC).Finally,the optimal parameters are selected by comparing the mean silhouette coefficients.To validate the performance of PIE-DBSCAN,we collected water column point clouds from an anechoic tank and the South China Sea.PIE-DBSCAN successfully detected both the target points of the suspended pipeline and non-target points on the seafloor surface.Compared to the K-Means and Mean-Shift algorithms,PIE-DBSCAN demonstrates superior clustering performance and shows feasibility in practical applications.
基金supported by the National Natural Science Foundation of China(Grant Nos.41941017 and 42177139)Graduate Innovation Fund of Jilin University(Grant No.2024CX099)。
文摘The spatial distribution of discontinuities and the size of rock blocks are the key indicators for rock mass quality evaluation and rockfall risk assessment.Traditional manual measurement is often dangerous or unreachable at some high-steep rock slopes.In contrast,unmanned aerial vehicle(UAV)photogrammetry is not limited by terrain conditions,and can efficiently collect high-precision three-dimensional(3D)point clouds of rock masses through all-round and multiangle photography for rock mass characterization.In this paper,a new method based on a 3D point cloud is proposed for discontinuity identification and refined rock block modeling.The method is based on four steps:(1)Establish a point cloud spatial topology,and calculate the point cloud normal vector and average point spacing based on several machine learning algorithms;(2)Extract discontinuities using the density-based spatial clustering of applications with noise(DBSCAN)algorithm and fit the discontinuity plane by combining principal component analysis(PCA)with the natural breaks(NB)method;(3)Propose a method of inserting points in the line segment to generate an embedded discontinuity point cloud;and(4)Adopt a Poisson reconstruction method for refined rock block modeling.The proposed method was applied to an outcrop of an ultrahigh steep rock slope and compared with the results of previous studies and manual surveys.The results show that the method can eliminate the influence of discontinuity undulations on the orientation measurement and describe the local concave-convex characteristics on the modeling of rock blocks.The calculation results are accurate and reliable,which can meet the practical requirements of engineering.
基金supported by the National Natural Science Foundation of China(Nos.41171355and41002120)
文摘A new object-oriented method has been developed for the extraction of Mars rocks from Mars rover data. It is based on a combination of Mars rover imagery and 3D point cloud data. First, Navcam or Pancam images taken by the Mars rovers are segmented into homogeneous objects with a mean-shift algorithm. Then, the objects in the segmented images are classified into small rock candidates, rock shadows, and large objects. Rock shadows and large objects are considered as the regions within which large rocks may exist. In these regions, large rock candidates are extracted through ground-plane fitting with the 3D point cloud data. Small and large rock candidates are combined and postprocessed to obtain the final rock extraction results. The shape properties of the rocks (angularity, circularity, width, height, and width-height ratio) have been calculated for subsequent ~eological studies.
基金This work was supported by National Nature Science Foundation of China(No.61811530281 and 61861136009)Guangdong Regional Joint Foundation(No.2019B1515120076)the Fundamental Research for the Central Universities.
文摘In this paper,a novel compression framework based on 3D point cloud data is proposed for telepresence,which consists of two parts.One is implemented to remove the spatial redundancy,i.e.,a robust Bayesian framework is designed to track the human motion and the 3D point cloud data of the human body is acquired by using the tracking 2D box.The other part is applied to remove the temporal redundancy of the 3D point cloud data.The temporal redundancy between point clouds is removed by using the motion vector,i.e.,the most similar cluster in the previous frame is found for the cluster in the current frame by comparing the cluster feature and the cluster in the current frame is replaced by the motion vector for compressing the current frame.The hrst,the B-SHOT(binary signatures of histograms orientation)descriptor is applied to represent the point feature for matching the corresponding point between two frames.The second,the K-mean algorithm is used to generate the cluster because there are a lot of unsuccessfully matched points in the current frame.The matching operation is exploited to find the corresponding clusters between the point cloud data of two frames.Finally,the cluster information in the current frame is replaced by the motion vector for compressing the current frame and the unsuccessfully matched clusters in the curren t and the motion vectors are transmit ted into the rem ote end.In order to reduce calculation time of the B-SHOT descriptor,we introduce an octree structure into the B-SHOT descriptor.In particular,in order to improve the robustness of the matching operation,we design the cluster feature to estimate the similarity bet ween two clusters.Experimen tai results have shown the bet ter performance of the proposed method due to the lower calculation time and the higher compression ratio.The proposed met hod achieves the compression ratio of 8.42 and the delay time of 1228 ms compared with the compression ratio of 5.99 and the delay time of 2163 ms in the octree-based compression method under conditions of similar distortion rate.
基金supported By Grant (PLN2022-14) of State Key Laboratory of Oil and Gas Reservoir Geology and Exploitation (Southwest Petroleum University)。
文摘Well logging technology has accumulated a large amount of historical data through four generations of technological development,which forms the basis of well logging big data and digital assets.However,the value of these data has not been well stored,managed and mined.With the development of cloud computing technology,it provides a rare development opportunity for logging big data private cloud.The traditional petrophysical evaluation and interpretation model has encountered great challenges in the face of new evaluation objects.The solution research of logging big data distributed storage,processing and learning functions integrated in logging big data private cloud has not been carried out yet.To establish a distributed logging big-data private cloud platform centered on a unifi ed learning model,which achieves the distributed storage and processing of logging big data and facilitates the learning of novel knowledge patterns via the unifi ed logging learning model integrating physical simulation and data models in a large-scale functional space,thus resolving the geo-engineering evaluation problem of geothermal fi elds.Based on the research idea of“logging big data cloud platform-unifi ed logging learning model-large function space-knowledge learning&discovery-application”,the theoretical foundation of unified learning model,cloud platform architecture,data storage and learning algorithm,arithmetic power allocation and platform monitoring,platform stability,data security,etc.have been carried on analysis.The designed logging big data cloud platform realizes parallel distributed storage and processing of data and learning algorithms.The feasibility of constructing a well logging big data cloud platform based on a unifi ed learning model of physics and data is analyzed in terms of the structure,ecology,management and security of the cloud platform.The case study shows that the logging big data cloud platform has obvious technical advantages over traditional logging evaluation methods in terms of knowledge discovery method,data software and results sharing,accuracy,speed and complexity.
基金supported by the Institute of Information&communications Technology Planning&Evaluation(IITP)grant funded by the Korea government(MSIT)(RS-2024-00399401,Development of Quantum-Safe Infrastructure Migration and Quantum Security Verification Technologies).
文摘With the rise of remote collaboration,the demand for advanced storage and collaboration tools has rapidly increased.However,traditional collaboration tools primarily rely on access control,leaving data stored on cloud servers vulnerable due to insufficient encryption.This paper introduces a novel mechanism that encrypts data in‘bundle’units,designed to meet the dual requirements of efficiency and security for frequently updated collaborative data.Each bundle includes updated information,allowing only the updated portions to be reencrypted when changes occur.The encryption method proposed in this paper addresses the inefficiencies of traditional encryption modes,such as Cipher Block Chaining(CBC)and Counter(CTR),which require decrypting and re-encrypting the entire dataset whenever updates occur.The proposed method leverages update-specific information embedded within data bundles and metadata that maps the relationship between these bundles and the plaintext data.By utilizing this information,the method accurately identifies the modified portions and applies algorithms to selectively re-encrypt only those sections.This approach significantly enhances the efficiency of data updates while maintaining high performance,particularly in large-scale data environments.To validate this approach,we conducted experiments measuring execution time as both the size of the modified data and the total dataset size varied.Results show that the proposed method significantly outperforms CBC and CTR modes in execution speed,with greater performance gains as data size increases.Additionally,our security evaluation confirms that this method provides robust protection against both passive and active attacks.
基金supported by the National Natural Science Foundation of China (62173103)the Fundamental Research Funds for the Central Universities of China (3072022JC0402,3072022JC0403)。
文摘For the first time, this article introduces a LiDAR Point Clouds Dataset of Ships composed of both collected and simulated data to address the scarcity of LiDAR data in maritime applications. The collected data are acquired using specialized maritime LiDAR sensors in both inland waterways and wide-open ocean environments. The simulated data is generated by placing a ship in the LiDAR coordinate system and scanning it with a redeveloped Blensor that emulates the operation of a LiDAR sensor equipped with various laser beams. Furthermore,we also render point clouds for foggy and rainy weather conditions. To describe a realistic shipping environment, a dynamic tail wave is modeled by iterating the wave elevation of each point in a time series. Finally, networks serving small objects are migrated to ship applications by feeding our dataset. The positive effect of simulated data is described in object detection experiments, and the negative impact of tail waves as noise is verified in single-object tracking experiments. The Dataset is available at https://github.com/zqy411470859/ship_dataset.
基金National Natural Science Foundation of China(No.41801379)Fundamental Research Funds for the Central Universities(No.2019B08414)National Key R&D Program of China(No.2016YFC0401801)。
文摘Tunnel deformation monitoring is a crucial task to evaluate tunnel stability during the metro operation period.Terrestrial Laser Scanning(TLS)can collect high density and high accuracy point cloud data in a few minutes as an innovation technique,which provides promising applications in tunnel deformation monitoring.Here,an efficient method for extracting tunnel cross-sections and convergence analysis using dense TLS point cloud data is proposed.First,the tunnel orientation is determined using principal component analysis(PCA)in the Euclidean plane.Two control points are introduced to detect and remove the unsuitable points by using point cloud division and then the ground points are removed by defining an elevation value width of 0.5 m.Next,a z-score method is introduced to detect and remove the outlies.Because the tunnel cross-section’s standard shape is round,the circle fitting is implemented using the least-squares method.Afterward,the convergence analysis is made at the angles of 0°,30°and 150°.The proposed approach’s feasibility is tested on a TLS point cloud of a Nanjing subway tunnel acquired using a FARO X330 laser scanner.The results indicate that the proposed methodology achieves an overall accuracy of 1.34 mm,which is also in agreement with the measurements acquired by a total station instrument.The proposed methodology provides new insights and references for the applications of TLS in tunnel deformation monitoring,which can also be extended to other engineering applications.
基金supported in part by the National Key R&D Program of China(Grant no.2016YFC0401908)。
文摘Landslides are one of the most disastrous geological hazards in southwestern China.Once a landslide becomes unstable,it threatens the lives and safety of local residents.However,empirical studies on landslides have predominantly focused on landslides that occur on land.To this end,we aim to investigate ashore and underwater landslide data synchronously.This study proposes an optimized mosaicking method for ashore and underwater landslide data.This method fuses an airborne laser point cloud with multi-beam depth sounder images.Owing to their relatively high efficiency and large coverage area,airborne laser measurement systems are suitable for emergency investigations of landslides.Based on the airborne laser point cloud,the traversal of the point with the lowest elevation value in the point set can be used to perform rapid extraction of the crude channel boundaries.Further meticulous extraction of the channel boundaries is then implemented using the probability mean value optimization method.In addition,synthesis of the integrated ashore and underwater landslide data angle is realized using the spatial guide line between the channel boundaries and the underwater multibeam sonar images.A landslide located on the right bank of the middle reaches of the Yalong River is selected as a case study to demonstrate that the proposed method has higher precision thantraditional methods.The experimental results show that the mosaicking method in this study can meet the basic needs of landslide modeling and provide a basis for qualitative and quantitative analysis and stability prediction of landslides.
文摘BIM(building information modelling)has gained wider acceptance in the A/E/C(architecture/engineering/construction)industry in the US and internationally.This paper presents current industry approaches of implementing 3D point cloud data in BIM and VDC(virtual design and construction)applications during various stages of a project life cycle and the challenges associated with processing the huge amount of 3D point cloud data.Conversion from discrete 3D point cloud raster data to geometric/vector BIM data remains to be a labor-intensive process.The needs for intelligent geometric feature detection/reconstruction algorithms for automated point cloud processing and issues related to data management are discussed.This paper also presents an innovative approach for integrating 3D point cloud data with BIM to efficiently augment built environment design,construction and management.
文摘With the continuous advancement of the tiered diagnosis and treatment system,the medical consortium model has gained increasing attention as an important approach to promoting the vertical integration of healthcare resources.Within this context,laboratory data,as a key component of healthcare information systems,urgently requires efficient sharing and intelligent analysis.This paper designs and constructs an intelligent early warning system for laboratory data based on a cloud platform tailored to the medical consortium model.Through standardized data formats and unified access interfaces,the system enables the integration and cleaning of laboratory data across multiple healthcare institutions.By combining medical rule sets with machine learning models,the system achieves graded alerts and rapid responses to abnormal key indicators and potential outbreaks of infectious diseases.Practical deployment results demonstrate that the system significantly improves the utilization efficiency of laboratory data,strengthens public health event monitoring,and optimizes inter-institutional collaboration.The paper also discusses challenges encountered during system implementation,such as inconsistent data standards,security and compliance concerns,and model interpretability,and proposes corresponding optimization strategies.These findings provide a reference for the broader application of intelligent medical early warning systems.
基金supported in part by the National Natural Science Foundation of China under Grant(62171257,U22B2001,U19A2052,62020106011,62061015)in part by the Natural Science Foundation of Chongqing under Grant(2023NSCQMSX2930)+1 种基金in part by the Youth Innovation Group Support Program of ICE Discipline of CQUPT under Grant(SCIE-QN-2022-05)in part by the Graduate Scientifc Research and Innovation Project of Chongqing under Grant(CYS22469).
文摘Perceptual quality assessment for point cloud is critical for immersive metaverse experience and is a challenging task.Firstly,because point cloud is formed by unstructured 3D points that makes the topology more complex.Secondly,the quality impairment generally involves both geometric attributes and color properties,where the measurement of the geometric distortion becomes more complex.We propose a perceptual point cloud quality assessment model that follows the perceptual features of Human Visual System(HVS)and the intrinsic characteristics of the point cloud.The point cloud is first pre-processed to extract the geometric skeleton keypoints with graph filtering-based re-sampling,and local neighboring regions around the geometric skeleton keypoints are constructed by K-Nearest Neighbors(KNN)clustering.For geometric distortion,the Point Feature Histogram(PFH)is extracted as the feature descriptor,and the Earth Mover’s Distance(EMD)between the PFHs of the corresponding local neighboring regions in the reference and the distorted point clouds is calculated as the geometric quality measurement.For color distortion,the statistical moments between the corresponding local neighboring regions are computed as the color quality measurement.Finally,the global perceptual quality assessment model is obtained as the linear weighting aggregation of the geometric and color quality measurement.The experimental results on extensive datasets show that the proposed method achieves the leading performance as compared to the state-of-the-art methods with less computing time.Meanwhile,the experimental results also demonstrate the robustness of the proposed method across various distortion types.The source codes are available at https://github.com/llsurreal919/Point Cloud Quality Assessment.
Funding: Supported by the National Key Research and Development Program of China (Grant Nos. 2023YFC2907400 and 2021YFC2900500) and the National Natural Science Foundation of China (Grant No. 52074020).
Abstract: Mapping and analyzing rock mass discontinuities from 3D (three-dimensional) point clouds (3DPC) is one of the most important tasks in engineering geomechanical surveys. To efficiently analyze the distribution of discontinuities, a self-developed MATLAB code, termed the cloud-group-cluster (CGC) method, for mapping and detecting discontinuities in 3DPC is introduced. Discontinuity groups are identified and optimized using three key parameters, i.e., K, θ, and f, and a sensitivity analysis approach for selecting their optimal values is presented. The results show that a comprehensive analysis of the main discontinuity groups, their mean orientations, and their densities can be performed automatically. The accuracy of the CGC method is validated using tetrahedral and hexahedral models. The 3D point cloud data are analyzed at three levels (point cloud, group, and cluster), and this three-level recognition is applied to natural rock surfaces, where the densities and spacing of the principal discontinuities are detected automatically. Five engineering case studies validate the CGC method and demonstrate its applicability to detecting rock discontinuities in 3DPC models.
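The abstract does not specify how K, θ, and f enter the CGC pipeline, but the general idea of orientation-based discontinuity grouping can be sketched as follows: estimate a normal at each point by PCA over its K nearest neighbors, then greedily group points whose normals lie within an angular tolerance θ of a group's mean orientation. This is an assumed, generic reconstruction of the first two levels, not the authors' code.

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, K=20):
    """Per-point unit normals from PCA over the K nearest neighbours."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=K)
    normals = np.empty_like(points)
    for i, nb in enumerate(idx):
        nbrs = points[nb] - points[nb].mean(axis=0)
        # Eigenvector of the smallest eigenvalue approximates the normal
        _, vecs = np.linalg.eigh(nbrs.T @ nbrs)
        normals[i] = vecs[:, 0]
    return normals

def group_by_orientation(normals, theta_deg=15.0):
    """Greedy grouping: a point joins a group if its normal lies within
    theta of the group's running mean orientation (sign-invariant)."""
    groups, means = [], []
    cos_t = np.cos(np.radians(theta_deg))
    for i, n in enumerate(normals):
        for g, m in enumerate(means):
            if abs(n @ m) >= cos_t:      # sign-invariant angular test
                groups[g].append(i)
                m_new = m + np.sign(n @ m) * n
                means[g] = m_new / np.linalg.norm(m_new)
                break
        else:
            groups.append([i])
            means.append(n)
    return groups
```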
Funding: National Key Research and Development Program of China, Grant/Award Number: 2020YFB1711704.
Abstract: The goal of point cloud completion is to reconstruct raw scanned point clouds acquired from incomplete observations caused by occlusion and restricted viewpoints. Numerous methods use a partial-to-complete framework, directly predicting the missing components from global characteristics extracted from the incomplete input. This makes detail recovery challenging, however, because global characteristics cannot fully specify the missing components. A new point cloud completion method named Point-PC is proposed, in which a memory network and a causal inference model are designed to introduce shape priors and to select the absent shape information as supplementary geometric cues for completion. Concretely, a memory mechanism stores complete shape features and their associated shapes in a key-value format. The authors design a pre-training strategy that uses contrastive learning to map incomplete shape features into the complete shape feature domain, enabling retrieval of analogous shapes from incomplete inputs. In addition, the authors employ backdoor adjustment to eliminate confounders, namely shape prior components that share identical semantic structures with the incomplete input. Experiments on three datasets show that the method outperforms state-of-the-art approaches. The code for Point-PC is available at https://github.com/bizbard/Point-PC.git.
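The key-value memory at the heart of Point-PC can be pictured as a nearest-neighbor lookup in feature space. The sketch below is a minimal illustration under assumptions: the encoder, the feature dimension, and the choice of cosine similarity for retrieval are all placeholders, since the abstract does not specify them.

```python
import numpy as np

def retrieve_shape_priors(query_feat, memory_keys, memory_values, top_k=3):
    """Return the top-k stored complete shapes whose key features are most
    similar (cosine) to the feature encoded from the incomplete input.
    memory_keys: (M, D) complete-shape features, memory_values: M shapes."""
    q = query_feat / np.linalg.norm(query_feat)
    k = memory_keys / np.linalg.norm(memory_keys, axis=1, keepdims=True)
    sims = k @ q                          # cosine similarity to every key
    best = np.argsort(-sims)[:top_k]
    return [memory_values[i] for i in best], sims[best]
```

The contrastive pre-training described in the abstract would be what makes this lookup meaningful: it aligns incomplete-shape features with the complete-shape key space, so a cosine search over the keys retrieves plausible priors.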
Abstract: Existing reverse-engineering methods struggle to generate editable, parametric CAD models directly from scanned data. To address this limitation, this paper proposes a reverse-modeling approach that reconstructs parametric CAD models from multi-view RGB-D point clouds. Multi-frame point cloud registration and fusion are first employed to obtain a complete 3D point cloud of the target object. The cloud is then segmented by a region-growing algorithm that jointly exploits color and geometric information, while RANSAC robustly detects and fits basic geometric primitives. These primitives serve as nodes in a graph whose edge features are inferred by a graph neural network to capture spatial constraints. From the detected primitives and their constraints, a highly accurate, fully editable parametric CAD model is finally exported. Experiments show an average parameter error of 0.3 mm for key dimensions and an overall geometric reconstruction accuracy of 0.35 mm. The work offers an effective technical route toward automated, intelligent 3D reverse modeling.
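RANSAC primitive fitting, the step that turns segmented points into parametric geometry, follows a standard hypothesize-and-verify loop. Below is a minimal plane-fitting sketch; the iteration count and inlier threshold are assumed values, and the paper's pipeline would run analogous fits for cylinders, spheres, and other primitives.

```python
import numpy as np

def ransac_plane(points, n_iters=500, threshold=0.5e-3, rng=None):
    """Fit a dominant plane (n, d) with n.p + d = 0 by RANSAC.
    `threshold` is the inlier distance in the cloud's units (here metres)."""
    rng = rng or np.random.default_rng(0)
    best_inliers, best_model = None, None
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:                 # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ p0
        dist = np.abs(points @ n + d)    # point-to-plane distances
        inliers = dist < threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers
```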
基金the National Key R&D Program of China(No.2022YFB3707303)National Natural Science Foundation of China(No.52293471)。
Abstract: Understanding the conformational characteristics of polymers is key to elucidating their physical properties. Cyclic polymers, defined by their closed-loop structures, inherently differ from linear polymers, which possess distinct chain ends. Despite these structural differences, both types of polymers exhibit locally random-walk-like conformations, making it challenging to detect subtle spatial variations with conventional methods. In this study, we address this challenge by integrating molecular dynamics simulations with point cloud neural networks to analyze the spatial conformations of cyclic and linear polymers. Using the Dynamic Graph CNN (DGCNN) model, we classify polymer conformations from the 3D coordinates of the monomers, capturing local and global topological differences without relying on the sequential connectivity of the chain. Our findings reveal that the optimal size of the local structural feature unit scales linearly with molecular weight, in line with theoretical predictions. Furthermore, interpretability techniques such as Grad-CAM and SHAP identify significant conformational differences: cyclic polymers tend to form prolate ellipsoids with pronounced elongation along the major axis, whereas linear polymers show elongated ends with more spherical centers. These subtle yet critical differences in local conformation were previously difficult to discern; uncovering them provides deeper insight into polymer structure-property relationships and offers guidance for future advances in polymer science.
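The core operation of DGCNN is the edge convolution, which treats the monomer coordinates as an unordered point set and rebuilds a k-NN graph at every layer, which is exactly what lets the model ignore chain sequence. The sketch below shows one such layer in PyTorch; the layer widths, k, and the final pooling-to-classifier step are assumptions for illustration.

```python
import torch
import torch.nn as nn

class EdgeConv(nn.Module):
    """One DGCNN edge-convolution layer: for each point, gather its k
    nearest neighbours in feature space, form edges [x_i, x_j - x_i],
    apply a shared MLP, and max-pool over the neighbourhood."""
    def __init__(self, in_dim, out_dim, k=20):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Linear(2 * in_dim, out_dim), nn.ReLU())

    def forward(self, x):                # x: (N, in_dim) monomer features
        idx = torch.cdist(x, x).topk(self.k + 1, largest=False).indices[:, 1:]
        nbrs = x[idx]                    # (N, k, in_dim) neighbour features
        ctr = x.unsqueeze(1).expand_as(nbrs)
        edges = torch.cat([ctr, nbrs - ctr], dim=-1)
        return self.mlp(edges).max(dim=1).values    # (N, out_dim)

# Hypothetical usage on one conformation of 200 monomers:
coords = torch.randn(200, 3)             # 3D monomer coordinates
feats = EdgeConv(3, 64, k=20)(coords)    # per-monomer local features
graph_feat = feats.max(dim=0).values     # global descriptor for a classifier
```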
Abstract: Few-shot point cloud 3D object detection (FS3D) aims to identify and locate objects of novel classes in point clouds using knowledge acquired from annotated base classes together with a minimal number of samples of the novel classes. Because the training data are imbalanced, existing FS3D methods based on fully supervised learning tend to overfit to the base classes, which impairs the network's ability to generalize knowledge learned from base classes to novel classes and prevents it from extracting distinctive foreground and background representations for novel-class objects. To address these issues, this thesis proposes a category-agnostic contrastive learning approach that enhances generalization and identification for nearly unseen categories by constructing pseudo-labels and positive-negative sample pairs unrelated to specific classes. First, a proposal-wise context contrastive module (CCM) is designed: by reducing the distance between foreground point features and increasing the distance between foreground and background point features within a region proposal, CCM helps the network extract more discriminative foreground and background representations without relying on categorical annotations. Second, a geometric contrastive module (GCM) enhances the network's geometric perception by applying contrastive learning to the foreground point features of basic geometric components, such as edges, corners, and surfaces, making these components more distinguishable. Category-aware contrastive learning is further combined with the above modules to maintain categorical distinctiveness. Extensive experiments on the FS-SUNRGBD and FS-ScanNet datasets demonstrate the effectiveness of the method, with average precision exceeding the baseline by up to 8%.
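The CCM objective described above — pulling foreground features together while pushing them away from background features inside one proposal — is naturally expressed as a supervised-contrastive (InfoNCE-style) loss. The sketch below is an assumed formulation of that idea, not the thesis's exact loss; the temperature tau and the feature normalization are placeholders.

```python
import torch
import torch.nn.functional as F

def context_contrastive_loss(fg, bg, tau=0.1):
    """CCM-style loss for one region proposal: foreground points are mutual
    positives, background points are negatives.
    fg: (Nf, D) foreground features, bg: (Nb, D) background features."""
    fg = F.normalize(fg, dim=1)
    bg = F.normalize(bg, dim=1)
    pos = fg @ fg.T / tau                # fg-fg similarities (positives)
    neg = fg @ bg.T / tau                # fg-bg similarities (negatives)
    # Exclude each anchor's self-similarity from the positives
    mask = ~torch.eye(len(fg), dtype=torch.bool, device=fg.device)
    logits = torch.cat([pos.masked_fill(~mask, float('-inf')), neg], dim=1)
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    # Average log-likelihood of the positive pairs (supervised-contrastive form)
    return -(log_prob[:, :len(fg)][mask].view(len(fg), -1)).mean()
```

Minimizing this loss drives foreground features toward one another in embedding space and away from background features, which is precisely the class-agnostic separation CCM is meant to provide.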