Funding: Supported by the National Natural Science Foundation of China (Grant No. 42407232) and the Sichuan Science and Technology Program (Grant No. 2024NSFSC0826).
Abstract: Recognizing discontinuities within rock masses is a critical aspect of rock engineering. The development of remote sensing technologies has significantly enhanced the quality and quantity of the point clouds collected from rock outcrops. In response, we propose a workflow that balances accuracy and efficiency to extract discontinuities from massive point clouds. The proposed method employs voxel filtering to downsample point clouds, constructs a point cloud topology using k-d trees, utilizes principal component analysis to calculate the point cloud normals, and applies the pointwise clustering (PWC) algorithm to extract discontinuities from rock outcrop point clouds. This method provides information on the location and orientation (dip direction and dip angle) of the discontinuities, and the modified whale optimization algorithm (MWOA) is used to identify major discontinuity sets and their average orientations. Performance evaluations on three real cases demonstrate that the proposed method significantly reduces computational time without sacrificing accuracy. In particular, the method yields more reasonable extraction results for discontinuities with certain undulations. The presented approach offers a novel tool for efficiently extracting discontinuities from large-scale point clouds.
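The preprocessing chain summarized above (voxel filtering, k-d tree topology, PCA normals) and the conversion of a unit normal into dip direction/dip angle can be illustrated with a short Python sketch. This is a minimal illustration using NumPy and SciPy under assumed axes (x = east, y = north, z = up), not the authors' PWC or MWOA code; the voxel size and neighbour count are placeholder values.

```python
import numpy as np
from scipy.spatial import cKDTree

def voxel_downsample(points, voxel_size=0.05):
    """Keep one representative point (the voxel mean) per occupied voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    sums = np.zeros((inverse.max() + 1, 3))
    counts = np.zeros(inverse.max() + 1)
    np.add.at(sums, inverse, points)
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]

def pca_normals(points, k=30):
    """Per-point normals: smallest-eigenvalue eigenvector of each k-NN patch's covariance."""
    tree = cKDTree(points)                      # k-d tree topology
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        patch = points[nbrs] - points[nbrs].mean(axis=0)
        _, vecs = np.linalg.eigh(patch.T @ patch)
        normals[i] = vecs[:, 0]                 # direction of least variance = local normal
    return normals

def normal_to_orientation(n):
    """Dip direction and dip angle (degrees) from a unit normal, assuming x=E, y=N, z=up."""
    nx, ny, nz = n if n[2] >= 0 else -n         # force the normal to point upward
    dip = np.degrees(np.arccos(nz))
    dip_direction = np.degrees(np.arctan2(nx, ny)) % 360.0
    return dip_direction, dip
```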
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 41941017 and 42177139) and the Graduate Innovation Fund of Jilin University (Grant No. 2024CX099).
Abstract: The spatial distribution of discontinuities and the size of rock blocks are key indicators for rock mass quality evaluation and rockfall risk assessment. Traditional manual measurement is often dangerous or impossible on high, steep rock slopes. In contrast, unmanned aerial vehicle (UAV) photogrammetry is not limited by terrain conditions and can efficiently collect high-precision three-dimensional (3D) point clouds of rock masses through all-round, multi-angle photography for rock mass characterization. In this paper, a new method based on a 3D point cloud is proposed for discontinuity identification and refined rock block modeling. The method consists of four steps: (1) establish a point cloud spatial topology, and calculate the point cloud normal vector and average point spacing based on several machine learning algorithms; (2) extract discontinuities using the density-based spatial clustering of applications with noise (DBSCAN) algorithm and fit the discontinuity plane by combining principal component analysis (PCA) with the natural breaks (NB) method; (3) insert points along line segments to generate an embedded discontinuity point cloud; and (4) adopt a Poisson reconstruction method for refined rock block modeling. The proposed method was applied to an outcrop of an ultra-high, steep rock slope and compared with the results of previous studies and manual surveys. The results show that the method can eliminate the influence of discontinuity undulations on orientation measurement and capture the local concave-convex characteristics in the modeling of rock blocks. The calculation results are accurate and reliable and can meet the practical requirements of engineering.
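Step (2) of the pipeline above can be pictured with a compact sketch: DBSCAN separates points belonging to individual discontinuities and an SVD-based PCA fit recovers each best-fit plane. The eps/min_samples values are placeholders, and the natural-breaks refinement and Poisson reconstruction steps are omitted, so this is only an illustration of the idea rather than the published method.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def extract_discontinuity_planes(points, eps=0.1, min_samples=50):
    """Cluster points into individual discontinuities and fit a plane to each cluster."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    planes = {}
    for lab in set(labels) - {-1}:              # label -1 is DBSCAN noise
        cluster = points[labels == lab]
        centroid = cluster.mean(axis=0)
        # PCA via SVD: the right-singular vector with the smallest singular value
        # is the normal of the least-squares plane through the cluster
        _, _, vt = np.linalg.svd(cluster - centroid, full_matrices=False)
        planes[lab] = (centroid, vt[-1])
    return labels, planes
```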
Funding: Supported in part by the National Natural Science Foundation of China under Grant Nos. U20A20197 and 62306187, and by the Foundation of Ministry of Industry and Information Technology TC220H05X-04.
Abstract: In recent years, semantic segmentation of 3D point cloud data has attracted much attention. Unlike 2D images, where pixels are distributed regularly in the image domain, 3D point clouds in non-Euclidean space are irregular and inherently sparse. Therefore, it is very difficult to extract long-range contexts and effectively aggregate local features for semantic segmentation in 3D point cloud space. Most current methods focus on either local feature aggregation or long-range context dependency, but fail to directly establish a global-local feature extractor for point cloud semantic segmentation tasks. In this paper, we propose a Transformer-based stratified graph convolutional network (SGT-Net), which enlarges the effective receptive field and builds direct long-range dependency. Specifically, we first propose a novel dense-sparse sampling strategy that provides dense local vertices and sparse long-distance vertices for the subsequent graph convolutional network (GCN). Secondly, we propose a multi-key self-attention mechanism based on the Transformer to further augment the weights of crucial neighboring relationships and enlarge the effective receptive field. In addition, to further improve the efficiency of the network, we propose a similarity measurement module to determine whether the neighborhood near the center point is effective. We demonstrate the validity and superiority of our method on the S3DIS and ShapeNet datasets. Through ablation experiments and segmentation visualization, we verify that the SGT-Net model improves the performance of point cloud semantic segmentation.
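The dense-sparse sampling strategy (dense local vertices plus sparse long-distance vertices feeding the GCN) can be sketched independently of the network itself. The snippet below is one interpretation, not the SGT-Net code: each centre point receives its k nearest neighbours densely and a random subset of farther points sparsely; it assumes the cloud is much larger than k_dense + k_sparse.

```python
import numpy as np
from scipy.spatial import cKDTree

def dense_sparse_neighbors(points, k_dense=16, k_sparse=16, seed=0):
    """For each point, return k_dense nearest neighbours (dense, local) plus
    k_sparse randomly chosen vertices outside that neighbourhood (sparse, long-range)."""
    rng = np.random.default_rng(seed)
    n = len(points)
    _, dense_idx = cKDTree(points).query(points, k=k_dense + 1)
    dense_idx = dense_idx[:, 1:]                          # drop the point itself
    sparse_idx = np.empty((n, k_sparse), dtype=np.int64)
    for i in range(n):
        far = np.setdiff1d(np.arange(n), np.append(dense_idx[i], i))
        sparse_idx[i] = rng.choice(far, size=k_sparse, replace=False)
    return dense_idx, sparse_idx
```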
Abstract: In view of the limitations of traditional measurement methods in the field of building information, such as complex operation, low timeliness, and poor accuracy, a new way of combining three-dimensional scanning technology with BIM (Building Information Modeling) models was discussed. Focused on the efficient acquisition of building geometric information using fast-developing 3D point cloud technology, an improved deep learning-based 3D point cloud recognition method was proposed. The method optimised the network structure based on RandLA-Net to adapt to large-scale point cloud processing requirements, while the semantic and instance features of the point cloud were integrated to significantly improve recognition accuracy and provide a precise basis for BIM model remodeling. In addition, a visual BIM model generation system was developed, which systematically transformed the point cloud recognition results into BIM component parameters, automatically constructed BIM models, and promoted the open sharing and secondary development of models. The research results not only effectively promote the automation of converting 3D point cloud data to refined BIM models, but also provide important technical support for promoting building informatisation and accelerating the construction of smart cities, showing wide application potential and practical value.
Funding: This work was supported by the National Natural Science Foundation of China (Nos. 61811530281 and 61861136009), the Guangdong Regional Joint Foundation (No. 2019B1515120076), and the Fundamental Research Funds for the Central Universities.
Abstract: In this paper, a novel compression framework based on 3D point cloud data is proposed for telepresence, which consists of two parts. One part removes spatial redundancy: a robust Bayesian framework is designed to track the human motion, and the 3D point cloud data of the human body is acquired using the tracked 2D box. The other part removes the temporal redundancy of the 3D point cloud data. The temporal redundancy between point clouds is removed using motion vectors: for each cluster in the current frame, the most similar cluster in the previous frame is found by comparing cluster features, and the cluster in the current frame is replaced by a motion vector to compress the current frame. First, the B-SHOT (binary signatures of histograms of orientations) descriptor is applied to represent the point feature for matching corresponding points between two frames. Second, the K-means algorithm is used to generate the clusters, because there are many unsuccessfully matched points in the current frame. The matching operation is then exploited to find the corresponding clusters between the point cloud data of the two frames. Finally, the cluster information in the current frame is replaced by the motion vector to compress the current frame, and the unsuccessfully matched clusters in the current frame and the motion vectors are transmitted to the remote end. In order to reduce the calculation time of the B-SHOT descriptor, we introduce an octree structure into the B-SHOT descriptor. In particular, in order to improve the robustness of the matching operation, we design a cluster feature to estimate the similarity between two clusters. Experimental results show the better performance of the proposed method due to its lower calculation time and higher compression ratio. The proposed method achieves a compression ratio of 8.42 and a delay time of 1228 ms, compared with a compression ratio of 5.99 and a delay time of 2163 ms for the octree-based compression method under similar distortion rates.
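The temporal de-redundancy idea can be pictured with a much-simplified sketch: K-means clusters the current frame, each cluster is matched to the closest cluster of the previous frame by a plain centroid distance (standing in for the paper's B-SHOT-based cluster feature), and matched clusters are encoded as motion vectors only. This is illustrative, not the published codec, and the cluster count and matching tolerance are arbitrary.

```python
import numpy as np
from sklearn.cluster import KMeans

def encode_frame(prev_points, curr_points, n_clusters=32, match_tol=0.2):
    """Toy inter-frame encoder: motion vectors for matched clusters, raw points otherwise."""
    prev_centroids = KMeans(n_clusters=n_clusters, n_init=10).fit(prev_points).cluster_centers_
    curr_km = KMeans(n_clusters=n_clusters, n_init=10).fit(curr_points)
    motion_vectors, raw_clusters = [], []
    for c, centroid in enumerate(curr_km.cluster_centers_):
        dists = np.linalg.norm(prev_centroids - centroid, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < match_tol:
            motion_vectors.append((j, centroid - prev_centroids[j]))   # transmit vector only
        else:
            raw_clusters.append(curr_points[curr_km.labels_ == c])     # transmit points as-is
    return motion_vectors, raw_clusters
```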
Funding: Supported by the National Natural Science Foundation of China (Nos. 41171355 and 41002120).
Abstract: A new object-oriented method has been developed for the extraction of Mars rocks from Mars rover data. It is based on a combination of Mars rover imagery and 3D point cloud data. First, Navcam or Pancam images taken by the Mars rovers are segmented into homogeneous objects with a mean-shift algorithm. Then, the objects in the segmented images are classified into small rock candidates, rock shadows, and large objects. Rock shadows and large objects are considered as the regions within which large rocks may exist. In these regions, large rock candidates are extracted through ground-plane fitting with the 3D point cloud data. Small and large rock candidates are combined and postprocessed to obtain the final rock extraction results. The shape properties of the rocks (angularity, circularity, width, height, and width-height ratio) have been calculated for subsequent geological studies.
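The ground-plane fitting step used to pick out large rock candidates can be sketched as follows, assuming the 3D points for a candidate region are already in hand: a RANSAC-style plane fit finds the local ground, and points standing well above it are kept. The thresholds are illustrative and this is not the authors' implementation.

```python
import numpy as np

def fit_ground_plane(points, n_iter=200, inlier_tol=0.03, seed=0):
    """RANSAC plane fit: returns (point_on_plane, unit_normal) of the plane with most inliers."""
    rng = np.random.default_rng(seed)
    best, best_count = None, -1
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:
            continue                                   # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        count = np.sum(np.abs((points - p0) @ n) < inlier_tol)
        if count > best_count:
            best, best_count = (p0, n), count
    return best

def large_rock_candidates(points, min_height=0.10):
    """Points higher than min_height above the fitted ground plane (z-up assumed)."""
    p0, n = fit_ground_plane(points)
    if n[2] < 0:
        n = -n
    return points[(points - p0) @ n > min_height]
```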
Abstract: This paper presents a method for hand gesture recognition based on 3D point clouds. Digital image processing technology is used in this research. Based on the 3D points from a depth camera, the system first extracts raw data of the hand. After data segmentation and preprocessing, three kinds of appearance features are extracted, including the number of stretched fingers, the angles between fingers, and the area distribution of the gesture region. Based on these features, the system identifies gestures using a decision tree method. The experimental results demonstrate that the proposed method recognizes common gestures efficiently and with high accuracy.
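A minimal sketch of the final classification stage, assuming the three kinds of appearance features (stretched-finger count, inter-finger angles, area-distribution descriptor) have already been extracted into a fixed-length vector per sample. scikit-learn's decision tree stands in for the decision procedure; the feature layout and file names are hypothetical.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

# Hypothetical layout per sample:
# [finger_count, angle_1..angle_4, area_distribution_1..area_distribution_8]
X = np.load("gesture_features.npy")   # placeholder feature file, shape (n_samples, 13)
y = np.load("gesture_labels.npy")     # placeholder gesture class ids

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=6, random_state=0).fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```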
Funding: Supported by the Leading Goose Research and Development Program of Zhejiang Province of China under Grant No. 2024C01103.
Abstract: Generating selfie images on the surface of a celestial body poses several challenges, including the position of the robotic arm, the camera field of view, and limited shooting time. To address these challenges, the PCMIS (3D Point Cloud Matching Based Image Stitching) algorithm is designed, along with a corresponding shooting plan. This algorithm establishes a correspondence between depth and color information, enabling the generation of stitched views under any given view parameters. Furthermore, the algorithm is accelerated using GPU processing, resulting in a significant reduction in stitching time. The algorithm has been successfully applied to generate selfie images for the Chang'e-5 mission.
Funding: Supported by the National Natural Science Foundation of China (No. 62372062).
Abstract: Three-dimensional (3D) point cloud information hiding algorithms are mainly concentrated in the spatial domain. Existing spatial domain steganalysis algorithms are subject to many disturbing factors during analysis and detection, and can only be applied to 3D mesh objects, so steganalysis algorithms for 3D point cloud objects are lacking. To move steganalysis beyond 3D meshes and eliminate the redundant features in the 3D mesh steganalysis feature set, we propose a 3D point cloud steganalysis algorithm based on composite operator feature enhancement. First, the 3D point cloud is normalized and smoothed. Second, the feature points that may contain secret information in the 3D point cloud and their neighboring points are extracted as the feature enhancement region by an improved 3D Harris-ISS composite operator. Feature enhancement is performed in this region to form a feature-enhanced 3D point cloud, which highlights the feature points while suppressing the interference created by the remaining vertices. Third, the existing 3D mesh feature set is screened to reduce the data redundancy of closely related features, and the newly proposed local neighborhood feature set is added to the screened feature set to form the 3D point cloud steganography feature set POINT72. Finally, the steganographic features are extracted from the enhanced 3D point cloud using the POINT72 feature set, and steganalysis experiments are carried out. Experimental analysis shows that the algorithm can accurately analyze spatial steganography in 3D point clouds and determine whether a 3D point cloud contains hidden information, so the accuracy of 3D point cloud steganalysis, even in the absence of edge and face information, is close to that of existing 3D mesh steganalysis algorithms.
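The first step (normalizing and smoothing the point cloud before feature extraction) is generic enough to sketch: centre the cloud, scale it into the unit sphere, and apply one pass of k-NN averaging as a simple smoother. The exact normalization and smoothing used ahead of the POINT72 features are not specified here, so treat this only as an illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def normalize(points):
    """Translate the cloud to its centroid and scale it into the unit sphere."""
    centered = points - points.mean(axis=0)
    return centered / np.linalg.norm(centered, axis=1).max()

def knn_smooth(points, k=10):
    """Replace each point by the mean of its k nearest neighbours (one smoothing pass)."""
    _, idx = cKDTree(points).query(points, k=k)
    return points[idx].mean(axis=1)
```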
Funding: Supported by the National Innovation Research Group Science Fund (No. 41521002) and the National Key Research and Development Program of China (No. 2018YFC1505202).
Abstract: This paper introduces the use of point cloud processing for extracting 3D rock structure and the 3DEC-based reconstruction of slope failure, based on a case study of the 2019 Pinglu rockfall. The basic processing procedure involves: (1) computing point normals for HSV rendering of the point cloud; (2) automatically clustering the discontinuity sets; (3) extracting the set-based point clouds; (4) estimating set-based mean orientation, spacing, and persistence; and (5) identifying the block-forming arrays of discontinuity sets for stability assessment. The effectiveness of our rock structure processing has been proved by 3D distinct element back analysis. The results show that SfM modelling and rock structure computing provide enormous cost, time, and safety incentives in standard engineering practice.
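Steps (1) and (2) of the procedure can be illustrated with a short sketch: each point normal is converted to dip direction/dip angle, mapped to an HSV colour (here hue encodes dip direction and value encodes dip angle, which is only one possible mapping), and orientations are grouped into sets with K-means on upward-flipped normals. The colour mapping and the choice of K-means are assumptions for illustration, not necessarily what the paper uses.

```python
import numpy as np
from colorsys import hsv_to_rgb
from sklearn.cluster import KMeans

def orientations(normals):
    """Dip direction / dip angle (degrees) from unit normals, assuming x=E, y=N, z=up."""
    n = np.where(normals[:, 2:3] < 0, -normals, normals)     # flip to upper hemisphere
    dip = np.degrees(np.arccos(np.clip(n[:, 2], -1.0, 1.0)))
    dip_dir = np.degrees(np.arctan2(n[:, 0], n[:, 1])) % 360.0
    return dip_dir, dip

def hsv_render(normals):
    """RGB colour per point: hue from dip direction, value from dip angle."""
    dip_dir, dip = orientations(normals)
    return np.array([hsv_to_rgb(dd / 360.0, 1.0, 0.3 + 0.7 * d / 90.0)
                     for dd, d in zip(dip_dir, dip)])

def cluster_sets(normals, n_sets=3):
    """Group points into discontinuity sets by K-means on upward-flipped normals."""
    n = np.where(normals[:, 2:3] < 0, -normals, normals)
    return KMeans(n_clusters=n_sets, n_init=10).fit_predict(n)
```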
Funding: Supported by the National Key R&D Program of China (No. 2023YFC3081200), the National Natural Science Foundation of China (No. 42077264), and the Scientific Research Project of PowerChina Huadong Engineering Corporation Limited (HDEC-2022-0301).
Abstract: Rock discontinuities control rock mechanical behaviors and significantly influence the stability of rock masses. However, existing discontinuity mapping algorithms are susceptible to noise, and the calculation results cannot be fed back to users in a timely manner. To address this issue, we propose a human-machine interaction (HMI) method for discontinuity mapping. Users can help the algorithm identify the noise and make real-time judgments on the results and parameter adjustments. A regular cube was selected to illustrate the workflow: (1) the point cloud was acquired using remote sensing; (2) the HMI method was employed to select reference points and angle thresholds to detect group discontinuities; (3) individual discontinuities were extracted from the group discontinuity using a density-based clustering algorithm; and (4) the orientation of each discontinuity was measured based on a plane-fitting algorithm. The method was applied to a well-studied highway road cut and a complex natural slope. The consistency of the computational results with field measurements demonstrates its good accuracy, and the average error in dip direction and dip angle for both cases was less than 3°. Finally, the computational time of the proposed method was compared with two other popular algorithms, and a reduction in computational time by tens of times proves its high computational efficiency. This method provides geologists and geological engineers with a new way to rapidly and accurately map rock structures in the presence of heavy noise or unclear features.
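The interactive step in (2), where the user picks a reference point and an angle threshold to detect a group discontinuity, reduces to a simple normal-angle test once normals are available. The sketch below shows only that test; the tool's interactive feedback loop and the subsequent density-based splitting and plane fitting are not reproduced here.

```python
import numpy as np

def group_discontinuity(points, normals, ref_index, angle_threshold_deg=15.0):
    """Select every point whose normal deviates from the user-picked reference normal
    by less than the chosen angle threshold (sign of the normal is ignored)."""
    ref = normals[ref_index] / np.linalg.norm(normals[ref_index])
    unit = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    mask = np.abs(unit @ ref) > np.cos(np.radians(angle_threshold_deg))
    return points[mask], mask
```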
Funding: Funded in part by the Key Project of Nature Science Research for Universities of Anhui Province of China (No. 2022AH051720), in part by the Science and Technology Development Fund, Macao SAR (Grant Nos. 0093/2022/A2, 0076/2022/A2 and 0008/2022/AGJ), and in part by the China University Industry-University-Research Collaborative Innovation Fund (No. 2021FNA04017).
Abstract: This paper focuses on the effective utilization of data augmentation techniques for 3D lidar point clouds to enhance the performance of neural network models. These point clouds, which represent spatial information through a collection of 3D coordinates, have found wide-ranging applications. Data augmentation has emerged as a potent solution to the challenges posed by limited labeled data and the need to enhance model generalization capabilities. Much of the existing research is devoted to crafting novel data augmentation methods specifically for 3D lidar point clouds; however, there has been a lack of focus on making the most of the numerous existing augmentation techniques. Addressing this deficiency, this research investigates the possibility of combining two fundamental data augmentation strategies. The paper introduces PolarMix and Mix3D, two commonly employed augmentation techniques, and presents a new approach named RandomFusion. Instead of using a fixed or predetermined combination of augmentation methods, RandomFusion randomly chooses one method from a pool of options for each instance or sample, augmenting each point in the point cloud with either PolarMix or Mix3D. The results of the experiments conducted validate the efficacy of the RandomFusion strategy in enhancing the performance of neural network models for 3D lidar point cloud semantic segmentation tasks, without compromising computational efficiency. By examining the potential of merging different augmentation techniques, the research contributes to a more comprehensive understanding of how to utilize existing augmentation methods for 3D lidar point clouds. The RandomFusion data augmentation technique offers a simple yet effective way to leverage the diversity of augmentation techniques and boost the robustness of models. The insights gained from this research can pave the way for future work aimed at developing more advanced and efficient data augmentation strategies for 3D lidar point cloud analysis.
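As described, the core of RandomFusion is a random per-sample (or per-point) choice between two existing augmentations. A minimal sketch of that control flow is given below; polarmix and mix3d are crude stubs standing in for the real PolarMix and Mix3D implementations, and the per-sample granularity is a simplification of the per-point variant the abstract also mentions.

```python
import random
import numpy as np

def random_fusion(scan_a, labels_a, scan_b, labels_b, augmentations):
    """RandomFusion control flow: pick one augmentation at random and apply it to this sample."""
    aug = random.choice(augmentations)
    return aug(scan_a, labels_a, scan_b, labels_b)

# Stubs standing in for the real techniques (both mix two scans and their per-point labels):
def polarmix(scan_a, labels_a, scan_b, labels_b):
    # real PolarMix swaps azimuth sectors and rotate-pastes instances; this stub only concatenates
    return np.concatenate([scan_a, scan_b]), np.concatenate([labels_a, labels_b])

def mix3d(scan_a, labels_a, scan_b, labels_b):
    # real Mix3D performs out-of-context scene mixing; this stub only concatenates
    return np.concatenate([scan_a, scan_b]), np.concatenate([labels_a, labels_b])

# usage: mixed_scan, mixed_labels = random_fusion(a, la, b, lb, [polarmix, mix3d])
```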
Funding: Supported by the National Natural Science Foundation of China (62173103) and the Fundamental Research Funds for the Central Universities of China (3072022JC0402, 3072022JC0403).
Abstract: For the first time, this article introduces a LiDAR Point Clouds Dataset of Ships composed of both collected and simulated data to address the scarcity of LiDAR data in maritime applications. The collected data are acquired using specialized maritime LiDAR sensors in both inland waterways and wide-open ocean environments. The simulated data are generated by placing a ship in the LiDAR coordinate system and scanning it with a redeveloped Blensor that emulates the operation of a LiDAR sensor equipped with various laser beams. Furthermore, we also render point clouds for foggy and rainy weather conditions. To describe a realistic shipping environment, a dynamic tail wave is modeled by iterating the wave elevation of each point in a time series. Finally, networks designed for small objects are migrated to ship applications by feeding them our dataset. The positive effect of the simulated data is demonstrated in object detection experiments, and the negative impact of tail waves as noise is verified in single-object tracking experiments. The dataset is available at https://github.com/zqy411470859/ship_dataset.
Funding: This work was supported by the National Natural Science Foundation of China (Grant No. 61673033).
Abstract: In this paper, we propose a novel and effective approach, namely GridNet, to hierarchically learn deep representations of 3D point clouds. It incorporates the ability of regular holistic description and fast data processing in a single framework, which is able to abstract powerful features progressively in an efficient way. Moreover, to capture more accurate internal geometry attributes, anchors are inferred within local neighborhoods, in contrast to the fixed or sampled ones used in existing methods, and the learned features are thus more representative and discriminative with respect to the local point distribution. GridNet delivers very competitive results compared with state-of-the-art methods in both object classification and segmentation tasks.
Abstract: LiDAR devices are capable of acquiring clouds of 3D points reflecting any object around them, and of adding attributes to each point such as color, position, time, etc. LiDAR datasets are usually large, and compressed data formats (e.g., LAZ) have been proposed over the years. These formats are capable of transparently decompressing portions of the data, but they are not designed to solve general queries over the data. In contrast to that traditional approach, a recent research line focuses on designing data structures that combine compression and indexation, allowing the compressed data to be queried directly. Compression is used to fit the data structure in main memory at all times, thus getting rid of disk accesses, and indexation is used to query the compressed data as fast as querying the uncompressed data. In this paper, we present the first data structure capable of losslessly compressing point clouds that have attributes and jointly indexing all three dimensions of space as well as the attribute values. Our method is able to run range queries and attribute queries up to 100 times faster than previous methods.
Funding: Supported by the National Natural Science Foundation of China (No. 52176122).
Abstract: The centroid coordinate serves as a critical control parameter in motion systems, including aircraft, missiles, rockets, and drones, directly influencing their motion dynamics and control performance. Traditional methods for centroid measurement often necessitate custom equipment and specialized positioning devices, leading to high costs and limited accuracy. Here, we present a centroid measurement method that integrates 3D scanning technology, enabling accurate measurement of the centroid across various types of objects without the need for specialized positioning fixtures. A theoretical framework for centroid measurement was established, combining the principle of the multi-point weighing method with 3D scanning technology. The measurement accuracy was evaluated using a designed standard component. Experimental results demonstrate that the discrepancies between the theoretical and measured centroids of a standard component with various materials and complex shapes in the X, Y, and Z directions are 0.003 mm, 0.009 mm, and 0.105 mm, respectively, yielding a spatial deviation of 0.106 mm. Qualitative verification was conducted through experimental validation on three distinct types of objects, which confirmed the reliability of the proposed method and allowed accurate centroid measurements of various products without requiring positioning fixtures. This advancement significantly broadens the applicability and scope of centroid measurement devices, offering new theoretical insights and methodologies for the measurement of complex parts and systems.
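The multi-point weighing principle referenced above has a simple closed form: with the part resting on several weighing points whose coordinates are recovered from the 3D scan in a common frame, the in-plane centroid is the load-weighted average of the support coordinates. The sketch below shows that formula with made-up readings; it is not the paper's data, and the out-of-plane coordinate additionally requires weighing the part in a second pose.

```python
import numpy as np

def centroid_from_weighing(support_xy, loads):
    """Load-weighted average of support coordinates (multi-point weighing method).
    support_xy: (n, 2) support positions from the 3D scan; loads: (n,) scale readings."""
    loads = np.asarray(loads, dtype=float)
    return (loads[:, None] * np.asarray(support_xy, dtype=float)).sum(axis=0) / loads.sum()

# Illustrative example: three load cells at scanned positions (mm) with readings (N)
supports = [(0.0, 0.0), (300.0, 0.0), (150.0, 260.0)]
readings = [12.0, 9.0, 15.0]
print(centroid_from_weighing(supports, readings))   # -> approximately [137.5, 108.33]
```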