Journal Literature
22,874 articles found
1. Research on Airborne Point Cloud Data Registration Using Urban Buildings as an Example
Authors: Yajun Fan, Yujun Shi, Chengjie Su, Kai Wang. Journal of World Architecture, 2025, Issue 4, pp. 35-42 (8 pages)
Abstract: Airborne LiDAR (Light Detection and Ranging) is an evolving high-tech active remote sensing technology that can acquire large-area topographic data and quickly generate DEM (Digital Elevation Model) products. Combined with image data, this technology can further enrich and extract spatial geographic information. In practice, however, due to the limited operating range of airborne LiDAR and the large area of a survey task, it is necessary to register and stitch the point clouds of adjacent flight strips. Gross errors must be eliminated and the systematic errors in the data effectively reduced. This paper therefore studies point cloud registration methods in urban building areas, aiming to improve the accuracy and processing efficiency of airborne LiDAR data. An improved post-ICP (Iterative Closest Point) registration method is proposed to achieve accurate registration and efficient stitching of point clouds, providing potential technical support for applications in related fields.
Keywords: Airborne LiDAR; point cloud registration; point cloud data processing; systematic error
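As a rough illustration of the registration step this abstract refers to, the sketch below implements a minimal classical point-to-point ICP loop (nearest-neighbour correspondences plus an SVD-based rigid fit). It is not the paper's improved post-ICP method; the function names, the SciPy nearest-neighbour search, and the convergence tolerance are assumptions made for the example.

```python
# Minimal point-to-point ICP sketch (illustrative, not the paper's improved post-ICP method).
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(src, dst):
    """Least-squares rigid transform (Kabsch/SVD) mapping src onto dst."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # correct an improper (reflected) rotation
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(source, target, max_iter=50, tol=1e-6):
    """Align one flight-strip point cloud (source) to its neighbour (target)."""
    tree = cKDTree(target)
    src = source.copy()
    prev_err = np.inf
    for _ in range(max_iter):
        dist, idx = tree.query(src)      # closest-point correspondences
        R, t = best_fit_transform(src, target[idx])
        src = src @ R.T + t
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return src
```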
2. Multi-sensor missile-borne LiDAR point cloud data augmentation based on Monte Carlo distortion simulation
Authors: Luda Zhao, Yihua Hu, Fei Han, Zhenglei Dou, Shanshan Li, Yan Zhang, Qilong Wu. CAAI Transactions on Intelligence Technology, 2025, Issue 1, pp. 300-316 (17 pages)
Abstract: Large-scale point cloud datasets form the basis for training various deep learning networks and achieving high-quality network processing tasks. Due to the diversity and robustness constraints of the data, data augmentation (DA) methods are utilised to expand dataset diversity and scale. However, because LiDAR point cloud data from different platforms (such as missile-borne and vehicular LiDAR) have complex and distinct characteristics, directly applying traditional 2D visual-domain DA methods to 3D data can leave the trained networks unable to perform the corresponding tasks robustly. To address this issue, the present study explores DA for missile-borne LiDAR point clouds using a Monte Carlo (MC) simulation method that closely resembles practical application. Firstly, a model of the multi-sensor imaging system is established, taking into account the joint errors arising from the platform itself and the relative motion during the imaging process. A distortion simulation method based on MC simulation for augmenting missile-borne LiDAR point cloud data is then proposed, underpinned by an analysis of the combined errors between different modal sensors, achieving high-quality augmentation of point cloud data. The effectiveness of the proposed method in addressing imaging system errors and distortion simulation is validated using the imaging scene dataset constructed in this paper. Comparative experiments between the proposed point cloud DA algorithm and current state-of-the-art algorithms on point cloud detection and single-object tracking tasks demonstrate that the proposed method can improve the performance of networks trained on unaugmented datasets by over 17.3% and 17.9%, respectively, surpassing the SOTA performance of current point cloud DA algorithms.
Keywords: data augmentation; LiDAR; missile-borne imaging; Monte Carlo simulation; point cloud
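To make the Monte Carlo idea concrete, here is a heavily simplified sketch that draws random rigid perturbations (small rotation and translation errors) and applies them to a point cloud to produce distorted copies. The error model, magnitudes, and function names are illustrative assumptions; the paper's method models the joint multi-sensor and platform-motion errors in far more detail.

```python
# Hedged sketch: Monte Carlo perturbation of a point cloud with sampled
# boresight/motion errors; error magnitudes below are assumed, not the paper's.
import numpy as np

def rotation_from_euler(rx, ry, rz):
    cx, sx, cy, sy, cz, sz = np.cos(rx), np.sin(rx), np.cos(ry), np.sin(ry), np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def mc_augment(points, n_samples=10, sigma_rot_deg=0.5, sigma_trans=0.05, seed=None):
    """Draw n_samples distorted copies of `points` (N x 3)."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_samples):
        angles = np.deg2rad(rng.normal(0.0, sigma_rot_deg, size=3))  # attitude jitter
        R = rotation_from_euler(*angles)
        t = rng.normal(0.0, sigma_trans, size=3)                     # position jitter
        out.append(points @ R.T + t)
    return out
```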
3. Automatic identification of discontinuities and refined modeling of rock blocks from 3D point cloud data of rock surfaces
Authors: Yaopeng Ji, Shengyuan Song, Jianping Chen, Jingyu Xue, Jianhua Yan, Yansong Zhang, Di Sun, Qing Wang. Journal of Rock Mechanics and Geotechnical Engineering, 2025, Issue 5, pp. 3093-3106 (14 pages)
Abstract: The spatial distribution of discontinuities and the size of rock blocks are key indicators for rock mass quality evaluation and rockfall risk assessment. Traditional manual measurement is often dangerous or infeasible on high and steep rock slopes. In contrast, unmanned aerial vehicle (UAV) photogrammetry is not limited by terrain conditions and can efficiently collect high-precision three-dimensional (3D) point clouds of rock masses through all-round, multi-angle photography for rock mass characterization. In this paper, a new method based on a 3D point cloud is proposed for discontinuity identification and refined rock block modeling. The method comprises four steps: (1) establish a point cloud spatial topology, and calculate the point cloud normal vectors and average point spacing based on several machine learning algorithms; (2) extract discontinuities using the density-based spatial clustering of applications with noise (DBSCAN) algorithm and fit the discontinuity planes by combining principal component analysis (PCA) with the natural breaks (NB) method; (3) insert points along line segments to generate an embedded discontinuity point cloud; and (4) adopt a Poisson reconstruction method for refined rock block modeling. The proposed method was applied to an outcrop of an ultrahigh, steep rock slope and compared with the results of previous studies and manual surveys. The results show that the method can eliminate the influence of discontinuity undulations on orientation measurement and capture local concave-convex characteristics in the modeling of rock blocks. The calculation results are accurate and reliable and can meet the practical requirements of engineering.
Keywords: Three-dimensional (3D) point cloud; Rock mass; Automatic identification; Refined modeling; Unmanned aerial vehicle (UAV)
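A minimal sketch of step (2), clustering candidate discontinuity points with DBSCAN and fitting a plane to each cluster with PCA, is given below. The eps and min_samples values are assumed, and the natural breaks refinement described in the abstract is omitted.

```python
# Hedged sketch of step (2): DBSCAN clustering of discontinuity points plus
# PCA plane fitting per cluster; parameter values are assumed.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.decomposition import PCA

def extract_discontinuity_planes(points, eps=0.1, min_samples=30):
    """points: (N, 3). Returns a list of (centroid, unit normal) per cluster."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    planes = []
    for lab in set(labels) - {-1}:               # -1 is DBSCAN noise
        cluster = points[labels == lab]
        pca = PCA(n_components=3).fit(cluster)
        normal = pca.components_[-1]             # smallest-variance direction
        centroid = cluster.mean(axis=0)
        planes.append((centroid, normal / np.linalg.norm(normal)))
    return planes
```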
4. A New Encryption Mechanism Supporting the Update of Encrypted Data for Secure and Efficient Collaboration in the Cloud Environment
Authors: Chanhyeong Cho, Byeori Kim, Haehyun Cho, Taek-Young Youn. Computer Modeling in Engineering & Sciences (SCIE, EI), 2025, Issue 1, pp. 813-834 (22 pages)
Abstract: With the rise of remote collaboration, the demand for advanced storage and collaboration tools has rapidly increased. However, traditional collaboration tools primarily rely on access control, leaving data stored on cloud servers vulnerable due to insufficient encryption. This paper introduces a novel mechanism that encrypts data in 'bundle' units, designed to meet the dual requirements of efficiency and security for frequently updated collaborative data. Each bundle includes update information, allowing only the updated portions to be re-encrypted when changes occur. The proposed encryption method addresses the inefficiencies of traditional encryption modes, such as Cipher Block Chaining (CBC) and Counter (CTR), which require decrypting and re-encrypting the entire dataset whenever updates occur. The proposed method leverages update-specific information embedded within data bundles and metadata that maps the relationship between these bundles and the plaintext data. Using this information, the method accurately identifies the modified portions and selectively re-encrypts only those sections. This approach significantly enhances the efficiency of data updates while maintaining high performance, particularly in large-scale data environments. To validate the approach, we conducted experiments measuring execution time as both the size of the modified data and the total dataset size varied. Results show that the proposed method significantly outperforms CBC and CTR modes in execution speed, with greater performance gains as data size increases. Additionally, our security evaluation confirms that the method provides robust protection against both passive and active attacks.
Keywords: cloud collaboration; mode of operation; data update; efficiency
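The bundle idea can be illustrated with a short sketch: the plaintext is split into fixed-size bundles, each encrypted independently, so an update touches only the affected bundle. AES-GCM via the Python cryptography package, the 4 KB bundle size, and the in-memory list of (nonce, ciphertext) pairs are assumptions made for the example, not the paper's scheme.

```python
# Hedged sketch of bundle-wise encryption with selective re-encryption on update.
# Bundle size, AES-GCM, and the metadata layout are illustrative assumptions.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

BUNDLE = 4096  # bytes per bundle (assumed)

def encrypt_bundles(key, data):
    aead = AESGCM(key)
    bundles = [data[i:i + BUNDLE] for i in range(0, len(data), BUNDLE)]
    out = []
    for b in bundles:
        nonce = os.urandom(12)                       # fresh nonce per bundle
        out.append((nonce, aead.encrypt(nonce, b, None)))
    return out

def update_bundle(key, encrypted, index, new_plaintext):
    """Re-encrypt only the modified bundle instead of the whole dataset."""
    aead = AESGCM(key)
    nonce = os.urandom(12)
    encrypted[index] = (nonce, aead.encrypt(nonce, new_plaintext, None))
    return encrypted

# Usage sketch:
# key = AESGCM.generate_key(bit_length=256)
# ct = encrypt_bundles(key, b"..." )
# ct = update_bundle(key, ct, 2, b"edited bundle contents")
```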
5. Construction of an Intelligent Early Warning System for a Cloud-Based Laboratory Data Platform under the Medical Consortium Model
Authors: Jibiao Zhou. Journal of Electronic Research and Application, 2025, Issue 4, pp. 268-275 (8 pages)
Abstract: With the continuous advancement of the tiered diagnosis and treatment system, the medical consortium model has gained increasing attention as an important approach to promoting the vertical integration of healthcare resources. Within this context, laboratory data, as a key component of healthcare information systems, urgently requires efficient sharing and intelligent analysis. This paper designs and constructs an intelligent early warning system for laboratory data based on a cloud platform tailored to the medical consortium model. Through standardized data formats and unified access interfaces, the system enables the integration and cleaning of laboratory data across multiple healthcare institutions. By combining medical rule sets with machine learning models, the system achieves graded alerts and rapid responses to abnormal key indicators and potential outbreaks of infectious diseases. Practical deployment results demonstrate that the system significantly improves the utilization efficiency of laboratory data, strengthens public health event monitoring, and optimizes inter-institutional collaboration. The paper also discusses challenges encountered during system implementation, such as inconsistent data standards, security and compliance concerns, and model interpretability, and proposes corresponding optimization strategies. These findings provide a reference for the broader application of intelligent medical early warning systems.
Keywords: Medical consortium; Laboratory data; cloud platform; Intelligent early warning; data standardization; Machine learning
6. Perceptual point cloud quality assessment for immersive metaverse experience
Authors: Baoping Cheng, Lei Luo, Ziyang He, Ce Zhu, Xiaoming Tao. Digital Communications and Networks, 2025, Issue 3, pp. 806-817 (12 pages)
Abstract: Perceptual quality assessment for point clouds is critical for an immersive metaverse experience and is a challenging task. First, a point cloud is formed by unstructured 3D points, which makes its topology more complex. Second, the quality impairment generally involves both geometric attributes and color properties, which makes the measurement of geometric distortion more complex. We propose a perceptual point cloud quality assessment model that follows the perceptual features of the Human Visual System (HVS) and the intrinsic characteristics of the point cloud. The point cloud is first pre-processed to extract geometric skeleton keypoints with graph filtering-based re-sampling, and local neighboring regions around the geometric skeleton keypoints are constructed by K-Nearest Neighbors (KNN) clustering. For geometric distortion, the Point Feature Histogram (PFH) is extracted as the feature descriptor, and the Earth Mover's Distance (EMD) between the PFHs of the corresponding local neighboring regions in the reference and distorted point clouds is calculated as the geometric quality measurement. For color distortion, the statistical moments between the corresponding local neighboring regions are computed as the color quality measurement. Finally, the global perceptual quality assessment model is obtained as the linear weighted aggregation of the geometric and color quality measurements. Experimental results on extensive datasets show that the proposed method achieves leading performance compared to state-of-the-art methods with less computing time. The experimental results also demonstrate the robustness of the proposed method across various distortion types. The source codes are available at https://github.com/llsurreal919/PointCloudQualityAssessment.
Keywords: Metaverse; point cloud; Quality assessment; point feature histogram; Earth mover's distance
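A hedged sketch of the final aggregation step: given per-region geometric histograms (standing in for PFH descriptors) and per-region colour samples for the reference and distorted clouds, compute a 1D Earth Mover's Distance per region and a moment difference, then combine them with a linear weight. SciPy's wasserstein_distance, the chosen moments, and the weight value are assumptions rather than the paper's exact formulation.

```python
# Illustrative aggregation of geometric (EMD over histograms) and colour (moment
# difference) measures into one quality score; weights/statistics are assumed.
import numpy as np
from scipy.stats import wasserstein_distance

def geometric_score(ref_hists, dist_hists, bin_centers):
    """Mean 1D EMD between corresponding per-region feature histograms."""
    emds = [wasserstein_distance(bin_centers, bin_centers,
                                 u_weights=h_ref, v_weights=h_dist)
            for h_ref, h_dist in zip(ref_hists, dist_hists)]
    return float(np.mean(emds))

def color_score(ref_colors, dist_colors):
    """Mean absolute difference of per-region colour mean and std (moments)."""
    diffs = [abs(r.mean() - d.mean()) + abs(r.std() - d.std())
             for r, d in zip(ref_colors, dist_colors)]
    return float(np.mean(diffs))

def perceptual_quality(ref_hists, dist_hists, bin_centers,
                       ref_colors, dist_colors, w_geo=0.7):
    """Linear weighted aggregation; lower score means less distortion."""
    return (w_geo * geometric_score(ref_hists, dist_hists, bin_centers)
            + (1.0 - w_geo) * color_score(ref_colors, dist_colors))
```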
7. Identification and automatic recognition of discontinuities from 3D point clouds of rock mass exposure
Authors: Peitao Wang, Boran Huang, Yijun Gao, Meifeng Cai. Journal of Rock Mechanics and Geotechnical Engineering, 2025, Issue 8, pp. 4982-5000 (19 pages)
Abstract: Mapping and analyzing rock mass discontinuities based on three-dimensional (3D) point clouds (3DPC) is one of the most important tasks in engineering geomechanical surveys. To efficiently analyze the distribution of discontinuities, a self-developed MATLAB code, termed the cloud-group-cluster (CGC) method, for mapping and detecting discontinuities from 3DPC is introduced. The identification and optimization of discontinuity groups are performed using three key parameters, i.e., K, θ, and f, and a sensitivity analysis approach for identifying the optimal key parameters is introduced. The results show that a comprehensive analysis of the main discontinuity groups, mean orientations, and densities can be achieved automatically. The accuracy of the CGC method was validated using tetrahedral and hexahedral models. The 3D point cloud data were divided into three levels (point cloud, group, and cluster) for analysis, and this three-level distribution recognition was applied to natural rock surfaces. The densities and spacing information of the principal discontinuities were automatically detected using the CGC method. Five engineering case studies were conducted to validate the CGC method, demonstrating its applicability for detecting rock discontinuities based on 3DPC models.
Keywords: Rock mass; point cloud; Rock discontinuities; Semi-automatic detection
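The mean orientations reported by methods of this kind are commonly expressed as dip direction and dip angle derived from a unit normal vector. The conversion below is a standard sketch (x = east, y = north, z = up); it is not taken from the paper's MATLAB code.

```python
# Convert a plane's normal vector to dip direction / dip angle (degrees),
# assuming x = east, y = north, z = up. Standard conversion, not the CGC code.
import numpy as np

def normal_to_orientation(n):
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    if n[2] < 0:                       # use the upward-pointing normal
        n = -n
    dip_angle = np.degrees(np.arccos(n[2]))
    dip_direction = np.degrees(np.arctan2(n[0], n[1])) % 360.0
    return dip_direction, dip_angle

# Example: a plane dipping 45 degrees toward the east
# print(normal_to_orientation([1.0, 0.0, 1.0]))  # -> (90.0, 45.0)
```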
8. Point-PC: Point cloud completion guided by prior knowledge via causal inference
Authors: Xuesong Gao, Chuanqi Jiao, Ruidong Chen, Weijie Wang, Weizhi Nie. CAAI Transactions on Intelligence Technology, 2025, Issue 4, pp. 1007-1018 (12 pages)
Abstract: The goal of point cloud completion is to reconstruct raw scanned point clouds acquired from incomplete observations due to occlusion and restricted viewpoints. Numerous methods use a partial-to-complete framework, directly predicting missing components via global characteristics extracted from incomplete inputs. However, this makes detail recovery challenging, as global characteristics fail to provide complete specifics of the missing components. A new point cloud completion method named Point-PC is proposed. A memory network and a causal inference model are separately designed to introduce shape priors and select absent shape information as supplementary geometric factors for aiding completion. Concretely, a memory mechanism is proposed to store complete shape features and their associated shapes in a key-value format. The authors design a pre-training strategy that uses contrastive learning to map incomplete shape features into the complete shape feature domain, enabling retrieval of analogous shapes from incomplete inputs. In addition, the authors employ backdoor adjustment to eliminate confounders, which are shape prior components sharing identical semantic structures with incomplete inputs. Experiments conducted on three datasets show that the method achieves superior performance compared to state-of-the-art approaches. The code for Point-PC can be accessed at https://github.com/bizbard/Point-PC.git.
Keywords: causal inference; contrastive alignment; memory network; point cloud completion
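To illustrate the key-value memory idea, the sketch below retrieves the stored complete-shape entries whose keys are most similar (by cosine similarity) to an incomplete-shape feature. The feature dimensionality, similarity measure, and top-k retrieval are assumptions; the contrastive pre-training and backdoor adjustment described in the abstract are not shown.

```python
# Minimal key-value shape memory: keys are complete-shape features, values are
# the associated shape priors; retrieval is by cosine similarity (assumed).
import numpy as np

class ShapeMemory:
    def __init__(self):
        self.keys = []    # feature vectors of complete shapes
        self.values = []  # associated shape priors (e.g. point sets)

    def add(self, key, value):
        self.keys.append(np.asarray(key, dtype=float))
        self.values.append(value)

    def retrieve(self, query, top_k=3):
        """Return the top_k stored shape priors most similar to `query`."""
        K = np.stack(self.keys)
        q = np.asarray(query, dtype=float)
        sims = (K @ q) / (np.linalg.norm(K, axis=1) * np.linalg.norm(q) + 1e-12)
        order = np.argsort(-sims)[:top_k]
        return [self.values[i] for i in order]
```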
9. Understanding Local Conformation in Cyclic and Linear Polymers Using Molecular Dynamics and Point Cloud Neural Network
Authors: Wan-Chen Zhao, Hai-Yang Huo, Zhong-Yuan Lu, Zhao-Yan Sun. Chinese Journal of Polymer Science, 2025, Issue 5, pp. 695-710 (16 pages)
Abstract: Understanding the conformational characteristics of polymers is key to elucidating their physical properties. Cyclic polymers, defined by their closed-loop structures, inherently differ from linear polymers, which possess distinct chain ends. Despite these structural differences, both types of polymers exhibit locally random-walk-like conformations, making it challenging to detect subtle spatial variations using conventional methods. In this study, we address this challenge by integrating molecular dynamics simulations with point cloud neural networks to analyze the spatial conformations of cyclic and linear polymers. Using the Dynamic Graph CNN (DGCNN) model, we classify polymer conformations based on the 3D coordinates of monomers, capturing local and global topological differences without considering the sequentiality of chain connectivity. Our findings reveal that the optimal local structural feature unit size scales linearly with molecular weight, aligning with theoretical predictions. Additionally, interpretability techniques such as Grad-CAM and SHAP identify significant conformational differences: cyclic polymers tend to form prolate ellipsoidal shapes with pronounced elongation along the major axis, while linear polymers show elongated ends with more spherical centers. These findings reveal subtle yet critical differences in local conformations between cyclic and linear polymers that were previously difficult to discern, providing deeper insights into polymer structure-property relationships and offering guidance for future advances in polymer science.
Keywords: Molecular dynamics simulation; point cloud; Interpretable deep learning; Conformational recognition
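The prolate-versus-spherical distinction mentioned in the findings is often quantified with the gyration tensor of the monomer coordinates. The sketch below computes its eigenvalues and the asphericity, a standard shape diagnostic that is not specific to this paper's DGCNN pipeline.

```python
# Gyration tensor eigenvalues and asphericity of a set of monomer coordinates
# (N x 3). A standard shape diagnostic, used here only as an illustration.
import numpy as np

def gyration_shape(coords):
    r = np.asarray(coords, dtype=float)
    r = r - r.mean(axis=0)                 # centre of mass at the origin
    S = r.T @ r / len(r)                   # gyration tensor (3 x 3)
    lam = np.sort(np.linalg.eigvalsh(S))   # eigenvalues, ascending
    rg2 = lam.sum()                        # squared radius of gyration
    asphericity = lam[2] - 0.5 * (lam[0] + lam[1])
    return lam, rg2, asphericity           # large asphericity -> prolate shape
```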
10. A Category-Agnostic Hybrid Contrastive Learning Method for Few-Shot Point Cloud Object Detection
Authors: Xuejing Li. Computers, Materials & Continua, 2025, Issue 5, pp. 1667-1681 (15 pages)
Abstract: Few-shot point cloud 3D object detection (FS3D) aims to identify and locate objects of novel classes within point clouds using knowledge acquired from annotated base classes and a minimal number of samples from the novel classes. Due to imbalanced training data, existing FS3D methods based on fully supervised learning can overfit toward base classes, which impairs the network's ability to generalize knowledge learned from base classes to novel classes and also prevents the network from extracting distinctive foreground and background representations for novel class objects. To address these issues, this thesis proposes a category-agnostic contrastive learning approach, enhancing the generalization and identification abilities for almost unseen categories through the construction of pseudo-labels and positive-negative sample pairs unrelated to specific classes. First, this thesis designs a proposal-wise context contrastive module (CCM). By reducing the distance between foreground point features and increasing the distance between foreground and background point features within a region proposal, CCM aids the network in extracting more discriminative foreground and background feature representations without reliance on categorical annotations. Second, this thesis utilizes a geometric contrastive module (GCM), which enhances the network's geometric perception capability by employing contrastive learning on the foreground point features associated with various basic geometric components, such as edges, corners, and surfaces, thereby enabling these geometric components to exhibit more distinguishable representations. This thesis also combines category-aware contrastive learning with the former modules to maintain categorical distinctiveness. Extensive experimental results on the FS-SUNRGBD and FS-ScanNet datasets demonstrate the effectiveness of this method, with average precision exceeding the baseline by up to 8%.
Keywords: Contrastive learning; few-shot learning; point cloud object detection
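The contrastive objectives used by modules such as CCM and GCM are typically variants of the familiar InfoNCE-style loss; a generic NumPy version is sketched below for one anchor with one positive and several negatives. The temperature and the cosine-similarity choice are assumptions, and the module-specific sampling of foreground/background points or geometric components is not reproduced.

```python
# Generic InfoNCE-style contrastive loss for a single anchor (illustrative only).
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def info_nce(anchor, positive, negatives, temperature=0.1):
    """Pulls the anchor toward the positive and away from the negatives."""
    logits = np.array([cosine(anchor, positive)] +
                      [cosine(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()                      # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                    # positive sits at index 0
```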
11. Adaptive Attribute-Based Honey Encryption: A Novel Solution for Cloud Data Security
Authors: Reshma Siyal, Muhammad Asim, Long Jun, Mohammed Elaffendi, Sundas Iftikhar, Rana Alnashwan, Samia Allaoua Chelloug. Computers, Materials & Continua, 2025, Issue 2, pp. 2637-2664 (28 pages)
Abstract: Encryption, the basic procedure of transforming readable data into encoded forms, ensures security when the right decryption keys are used. Hadoop is susceptible to cyber-attacks because it lacks built-in security measures, even though it can effectively handle and store enormous datasets using the Hadoop Distributed File System (HDFS). The increasing number of data breaches emphasizes how urgently creative encryption techniques are needed in cloud-based big data settings. This paper presents Adaptive Attribute-Based Honey Encryption (AABHE), a state-of-the-art technique that combines honey encryption with Ciphertext-Policy Attribute-Based Encryption (CP-ABE) to provide improved data security. Even if intercepted, data protected by AABHE cannot be accessed by unauthorized parties. With a focus on protecting huge files in HDFS, the suggested approach achieves 98% security robustness and 95% encryption efficiency, outperforming other encryption methods including Ciphertext-Policy Attribute-Based Encryption (CP-ABE), Key-Policy Attribute-Based Encryption (KP-ABE), and the Advanced Encryption Standard combined with Attribute-Based Encryption (AES+ABE). By fixing Hadoop's security flaws, AABHE fortifies its protections against data breaches and enhances Hadoop's dependability as a platform for processing and storing massive amounts of data.
Keywords: Cybersecurity; data security; cloud storage; Hadoop; encryption and decryption; privacy protection; attribute-based honey encryption
12. A human-machine interaction method for rock discontinuities mapping by three-dimensional point clouds with noises
Authors: Qian Chen, Yunfeng Ge, Changdong Li, Huiming Tang, Geng Liu, Weixiang Chen. Journal of Rock Mechanics and Geotechnical Engineering, 2025, Issue 3, pp. 1646-1663 (18 pages)
Abstract: Rock discontinuities control rock mechanical behaviors and significantly influence the stability of rock masses. However, existing discontinuity mapping algorithms are susceptible to noise, and the calculation results cannot be fed back to users in a timely manner. To address this issue, we propose a human-machine interaction (HMI) method for discontinuity mapping in which users help the algorithm identify noise and make real-time judgments of the results and parameter adjustments. A regular cube was selected to illustrate the workflow: (1) the point cloud is acquired using remote sensing; (2) the HMI method is employed to select reference points and angle thresholds to detect group discontinuities; (3) individual discontinuities are extracted from the group discontinuities using a density-based clustering algorithm; and (4) the orientation of each discontinuity is measured based on a plane fitting algorithm. The method was applied to a well-studied highway road cut and a complex natural slope. The consistency of the computational results with field measurements demonstrates its good accuracy, and the average error in the dip direction and dip angle for both cases was less than 3°. Finally, the computational time of the proposed method was compared with two other popular algorithms, and the reduction in computational time by tens of times proves its high computational efficiency. This method provides geologists and geological engineers with a new way to rapidly and accurately map rock structures in the presence of heavy noise or unclear features.
Keywords: Rock discontinuities; Three-dimensional (3D) point clouds; Discontinuity identification; Orientation measurement; Human-machine interaction
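Step (2) of the workflow, grouping points whose normals fall within a user-chosen angle threshold of a user-picked reference point, can be sketched as below. Precomputed unit normals are assumed, and the interactive feedback loop the paper emphasizes is reduced here to plain function arguments.

```python
# Group-discontinuity detection sketch: keep points whose normals are within
# `angle_thresh_deg` of the normal at a user-selected reference point.
# Normals are assumed to be precomputed (N x 3, unit length).
import numpy as np

def detect_group(points, normals, ref_index, angle_thresh_deg=10.0):
    ref_n = normals[ref_index]
    cosang = np.clip(np.abs(normals @ ref_n), -1.0, 1.0)  # ignore normal sign
    angles = np.degrees(np.arccos(cosang))
    mask = angles <= angle_thresh_deg
    return points[mask], mask
```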
13. Enhancing Healthcare Data Privacy in Cloud IoT Networks Using Anomaly Detection and Optimization with Explainable AI (ExAI)
Authors: Jitendra Kumar Samriya, Virendra Singh, Gourav Bathla, Meena Malik, Varsha Arya, Wadee Alhalabi, Brij B. Gupta. Computers, Materials & Continua, 2025, Issue 8, pp. 3893-3910 (18 pages)
Abstract: The integration of the Internet of Things (IoT) into healthcare systems improves patient care, boosts operational efficiency, and contributes to cost-effective healthcare delivery. However, overcoming several associated challenges, such as data security, interoperability, and ethical concerns, is crucial to realizing the full potential of IoT in healthcare. Real-time anomaly detection plays a key role in protecting patient data and maintaining device integrity amid the additional security risks posed by interconnected systems. In this context, this paper presents a novel method for healthcare data privacy analysis. The technique is based on the identification of anomalies in cloud-based IoT networks and is optimized using explainable artificial intelligence. For anomaly detection, the Radial Boltzmann Gaussian Temporal Fuzzy Network (RBGTFN) is used in the privacy analysis of healthcare data, and Remora Colony Swarm Optimization is then used to optimize the network. The model's performance in identifying anomalies across a variety of healthcare data is evaluated in an experimental study that measures accuracy, precision, latency, Quality of Service (QoS), and scalability. The proposed model obtained a remarkable 95% precision, 93% latency, 89% quality of service, 98% detection accuracy, and 96% scalability.
Keywords: Healthcare data privacy analysis; anomaly detection; cloud IoT network; explainable artificial intelligence; temporal fuzzy network
14. PH-shape: an adaptive persistent homology-based approach for building outline extraction from ALS point cloud data
Authors: Gefei Kong, Hongchao Fan. Geo-Spatial Information Science (CSCD), 2024, Issue 4, pp. 1107-1117 (11 pages)
Abstract: Building outline extraction from segmented point clouds is a critical step in building footprint generation. Existing methods for this task are often based on the convex hull and the α-shape algorithm; there are also methods using grids and Delaunay triangulation. The common challenge of these methods is the determination of proper parameters. While deep learning-based methods have shown promise in reducing the impact of and dependence on parameter selection, their reliance on datasets with ground truth information limits their generalization. In this study, a novel unsupervised approach, called PH-shape, is proposed to address this challenge. The methods of persistent homology (PH) and the Fourier descriptor are introduced into the task of building outline extraction. PH, from the theory of topological data analysis, supports the automatic and adaptive determination of a proper buffer radius, thus enabling parameter-adaptive extraction of building outlines through buffering and "inverse" buffering. Quantitative and qualitative experimental results on two datasets with different point densities demonstrate the effectiveness of the proposed approach in the face of various building types, interior boundaries, and density variation within the point cloud data of a single building. The PH-supported parameter adaptivity helps the proposed approach overcome the challenges of parameter determination and data variation and achieve reliable extraction of building outlines.
Keywords: Building outline extraction; point cloud data; persistent homology; boundary tracing
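The buffering and "inverse" buffering step can be mimicked with a short Shapely sketch: buffer the 2D footprint points by a radius, union the discs, then shrink by the same radius and trace the exterior. Here the radius is a fixed assumed parameter; deriving it adaptively with persistent homology is the contribution of PH-shape and is not reproduced.

```python
# Buffering / inverse-buffering outline sketch (Shapely); `radius` is assumed,
# whereas PH-shape selects it adaptively via persistent homology.
import numpy as np
from shapely.geometry import Point
from shapely.ops import unary_union

def outline_by_buffering(xy, radius):
    """xy: (N, 2) building points projected to the ground plane."""
    xy = np.asarray(xy, dtype=float)
    discs = [Point(p[0], p[1]).buffer(radius) for p in xy]
    merged = unary_union(discs).buffer(-radius)      # union, then shrink back
    if merged.geom_type == "MultiPolygon":           # keep the largest piece
        merged = max(merged.geoms, key=lambda g: g.area)
    return np.asarray(merged.exterior.coords)
```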
15. Design of a Private Cloud Platform for Distributed Logging Big Data Based on a Unified Learning Model of Physics and Data
Authors: Cheng Xi, Fu Haicheng, Tursyngazy Mahabbat. Applied Geophysics, 2025, Issue 2, pp. 499-510, 560 (13 pages)
Abstract: Well logging technology has accumulated a large amount of historical data through four generations of technological development, which forms the basis of well logging big data and digital assets. However, these data have not been well stored, managed, and mined, and their value remains underexploited. The development of cloud computing technology provides a rare opportunity for a logging big-data private cloud. The traditional petrophysical evaluation and interpretation model has encountered great challenges when facing new evaluation objects, and research on integrating distributed storage, processing, and learning functions for logging big data into a logging big-data private cloud has not yet been carried out. This paper establishes a distributed logging big-data private cloud platform centered on a unified learning model, which achieves distributed storage and processing of logging big data and facilitates the learning of novel knowledge patterns via a unified logging learning model that integrates physical simulation and data models in a large-scale function space, thus addressing the geo-engineering evaluation problem of geothermal fields. Based on the research idea of "logging big data cloud platform - unified logging learning model - large function space - knowledge learning and discovery - application", the theoretical foundation of the unified learning model, the cloud platform architecture, data storage and learning algorithms, computing power allocation and platform monitoring, platform stability, and data security are analyzed. The designed logging big data cloud platform realizes parallel distributed storage and processing of data and learning algorithms. The feasibility of constructing a well logging big data cloud platform based on a unified learning model of physics and data is analyzed in terms of the structure, ecology, management, and security of the cloud platform. The case study shows that the logging big data cloud platform has obvious technical advantages over traditional logging evaluation methods in terms of knowledge discovery methods, data, software, and results sharing, accuracy, speed, and complexity.
Keywords: Unified logging learning model; logging big data; private cloud; machine learning
16. Data Aggregation Point Placement and Subnetwork Optimization for Smart Grids
Authors: Tien-Wen Sung, Wei Li, Chao-Yang Lee, Yuzhen Chen, Qingjun Fang. Computers, Materials & Continua, 2025, Issue 4, pp. 407-434 (28 pages)
Abstract: To transmit customer power data collected by smart meters (SMs) to utility companies, the data must first be transmitted to the data aggregation point (DAP) corresponding to each SM. The number of DAPs installed and their installation locations greatly impact the whole network. In traditional DAP placement algorithms, the number of DAPs must be set in advance, but determining the best number of DAPs is difficult, which reduces the overall performance of the network. Moreover, an excessive gap between the loads of different DAPs is another important factor affecting network quality. To address these problems, this paper proposes a DAP placement algorithm, APSSA, based on an improved affinity propagation (AP) algorithm and the sparrow search algorithm (SSA), which can select an appropriate number of DAPs and the corresponding installation locations according to the number of SMs and their distribution in different environments. The algorithm adds an allocation mechanism to the SSA to optimize the subnetworks. APSSA was evaluated in three different areas and compared with other DAP placement algorithms. The experimental results validate that the proposed method can reduce the network cost, shorten the average transmission distance, and reduce the load gap.
Keywords: Smart grid; data aggregation point placement; network cost; average transmission distance; load gap
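The appeal of affinity propagation here is that it does not need the number of clusters (DAPs) in advance; a minimal sketch using scikit-learn on smart-meter coordinates is shown below. The damping value and the default preference setting are assumptions, and the sparrow-search refinement and load-balancing allocation mechanism of APSSA are not included.

```python
# Choose candidate DAP count and locations from smart-meter coordinates with
# affinity propagation (no preset cluster number). Parameter values are assumed.
import numpy as np
from sklearn.cluster import AffinityPropagation

def place_daps(sm_xy, damping=0.9, random_state=0):
    """sm_xy: (N, 2) smart-meter positions. Returns DAP positions, labels, loads."""
    ap = AffinityPropagation(damping=damping, random_state=random_state).fit(sm_xy)
    dap_positions = ap.cluster_centers_          # exemplar meters act as DAP sites
    labels = ap.labels_                          # which DAP each meter reports to
    loads = np.bincount(labels)                  # meters per DAP (load indicator)
    return dap_positions, labels, loads
```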
17. Dynamic Multi-Objective Gannet Optimization (DMGO): An Adaptive Algorithm for Efficient Data Replication in Cloud Systems
Authors: P. William, Ved Prakash Mishra, Osamah Ibrahim Khalaf, Arvind Mukundan, Yogeesh N, Riya Karmakar. Computers, Materials & Continua, 2025, Issue 9, pp. 5133-5156 (24 pages)
Abstract: Cloud computing has become an essential technology for the management and processing of large datasets, offering scalability, high availability, and fault tolerance. However, optimizing data replication across multiple data centers poses a significant challenge, especially when balancing competing goals such as latency, storage costs, energy consumption, and network efficiency. This study introduces a novel dynamic optimization algorithm, Dynamic Multi-Objective Gannet Optimization (DMGO), designed to enhance data replication efficiency in cloud environments. Unlike traditional static replication systems, DMGO adapts dynamically to variations in network conditions, system demand, and resource availability. The approach utilizes multi-objective optimization techniques to efficiently balance data access latency, storage efficiency, and operational costs. DMGO continuously evaluates data center performance and adjusts replication strategies in real time to guarantee optimal system efficiency. Experimental evaluations conducted in a simulated cloud environment demonstrate that DMGO significantly outperforms conventional static algorithms, achieving faster data access, lower storage overhead, reduced energy consumption, and improved scalability. The proposed methodology offers a robust and adaptable solution for modern cloud systems, ensuring efficient resource consumption while maintaining high performance.
Keywords: cloud computing; data replication; dynamic optimization; multi-objective optimization; gannet optimization algorithm; adaptive algorithms; resource efficiency; scalability; latency reduction; energy-efficient computing
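A toy scalarization of the competing objectives gives a feel for what a replication plan is scored on; the weights, objective formulas, and data structures below are invented for illustration and are not the DMGO algorithm itself.

```python
# Toy multi-objective score for a replication plan: weighted sum of average
# access latency, storage cost, and energy. All weights and models are assumed.
def replication_cost(plan, latency_ms, storage_gb_cost, energy_kwh,
                     w_lat=0.5, w_sto=0.3, w_en=0.2):
    """plan: dict mapping data item -> list of data-center ids holding a replica."""
    lat = sum(min(latency_ms[dc] for dc in replicas) for replicas in plan.values())
    sto = sum(storage_gb_cost[dc] for replicas in plan.values() for dc in replicas)
    en = sum(energy_kwh[dc] for replicas in plan.values() for dc in replicas)
    n = max(len(plan), 1)
    return w_lat * lat / n + w_sto * sto + w_en * en
```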
18. Accuracy assessment of cloud removal methods for Moderate-resolution Imaging Spectroradiometer (MODIS) snow data in the Tianshan Mountains, China
Authors: WANG Qingxue, MA Yonggang, XU Zhonglin, LI Junli. Journal of Arid Land, 2025, Issue 4, pp. 457-480 (24 pages)
Abstract: Snow cover plays a critical role in global climate regulation and hydrological processes. Accurate monitoring is essential for understanding snow distribution patterns, managing water resources, and assessing the impacts of climate change. Remote sensing has become a vital tool for snow monitoring, particularly the widely used Moderate-resolution Imaging Spectroradiometer (MODIS) snow products from the Terra and Aqua satellites. However, cloud cover often interferes with snow detection, making cloud removal techniques crucial for generating reliable snow products. This study evaluated the accuracy of four MODIS snow cover datasets generated through different cloud removal algorithms. Using real-time field camera observations from four stations in the Tianshan Mountains, China, this study assessed the performance of these datasets during three distinct snow periods: the snow accumulation period (September-November), the snowmelt period (March-June), and the stable snow period (December-February of the following year). The findings show that cloud-free snow products generated using the Hidden Markov Random Field (HMRF) algorithm consistently outperformed the others, particularly under cloud cover, while cloud-free snow products based on near-day synthesis and the spatiotemporal adaptive fusion method with error correction (STAR) demonstrated varying performance depending on terrain complexity and cloud conditions. This study highlights the importance of considering terrain features, land cover types, and snow dynamics when selecting cloud removal methods, particularly in areas with rapid snow accumulation and melting. The results suggest that future research should focus on improving cloud removal algorithms through the integration of machine learning, multi-source data fusion, and advanced remote sensing technologies. By expanding validation efforts and refining cloud removal strategies, more accurate and reliable snow products can be developed, contributing to enhanced snow monitoring and better management of water resources in alpine and arid areas.
Keywords: real-time camera; cloud removal algorithm; snow cover; Moderate-resolution Imaging Spectroradiometer (MODIS) snow data; snow monitoring
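Of the compared approaches, near-day synthesis is the simplest to sketch: a cloud-obscured pixel on a given day is filled with the temporally closest clear observation within a small window. The array layout, class code, and window length below are assumptions; the HMRF and STAR methods are considerably more involved and are not shown.

```python
# Near-day compositing sketch: fill cloud pixels with the nearest clear-sky
# observation within +/- max_offset days. Class codes and layout are assumed.
import numpy as np

CLOUD = 250   # assumed cloud code in the daily snow-cover maps

def near_day_fill(stack, max_offset=3):
    """stack: (days, rows, cols) integer snow/land/cloud maps."""
    filled = stack.copy()
    days = stack.shape[0]
    for t in range(days):
        for off in range(1, max_offset + 1):      # search nearest days first
            for s in (t - off, t + off):
                if 0 <= s < days:
                    mask = (filled[t] == CLOUD) & (stack[s] != CLOUD)
                    filled[t][mask] = stack[s][mask]
    return filled
```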
19. A LiDAR Point Clouds Dataset of Ships in a Maritime Environment
Authors: Qiuyu Zhang, Lipeng Wang, Hao Meng, Wen Zhang, Genghua Huang. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, Issue 7, pp. 1681-1694 (14 pages)
Abstract: For the first time, this article introduces a LiDAR point clouds dataset of ships, composed of both collected and simulated data, to address the scarcity of LiDAR data in maritime applications. The collected data are acquired using specialized maritime LiDAR sensors in both inland waterways and wide-open ocean environments. The simulated data are generated by placing a ship in the LiDAR coordinate system and scanning it with a redeveloped Blensor that emulates the operation of a LiDAR sensor equipped with various laser beams. Furthermore, we also render point clouds for foggy and rainy weather conditions. To describe a realistic shipping environment, a dynamic tail wave is modeled by iterating the wave elevation of each point in a time series. Finally, networks designed for small objects are migrated to ship applications by training them on our dataset. The positive effect of the simulated data is demonstrated in object detection experiments, and the negative impact of tail waves as noise is verified in single-object tracking experiments. The dataset is available at https://github.com/zqy411470859/ship_dataset.
Keywords: 3D point clouds dataset; dynamic tail wave; fog simulation; rainy simulation; simulated data
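The dynamic tail wave described above amounts to updating a wave-elevation field over time; a crude stand-in using a superposition of travelling sinusoids is sketched below. The wave parameters and the sinusoidal form are assumptions made for the example rather than the dataset's actual wake model.

```python
# Crude wave-elevation field as a sum of travelling sinusoids, evaluated over a
# time series; amplitudes, wavenumbers, and frequencies are assumed.
import numpy as np

def wave_elevation(xy, t, components=((0.3, 0.8, 0.0, 1.2),
                                      (0.15, 1.7, 0.6, 2.0))):
    """xy: (N, 2) horizontal point positions; t: time in seconds.
    Each component is (amplitude, wavenumber, direction_rad, angular_freq)."""
    x, y = xy[:, 0], xy[:, 1]
    z = np.zeros(len(xy))
    for amp, k, theta, omega in components:
        phase = k * (x * np.cos(theta) + y * np.sin(theta)) - omega * t
        z += amp * np.sin(phase)
    return z  # iterate over t to animate the tail wave under the ship points
```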
20. Rock discontinuity extraction from 3D point clouds using pointwise clustering algorithm
Authors: Xiaoyu Yi, Wenxuan Wu, Wenkai Feng, Yongjian Zhou, Jiachen Zhao. Journal of Rock Mechanics and Geotechnical Engineering, 2025, Issue 7, pp. 4429-4444 (16 pages)
Abstract: Recognizing discontinuities within rock masses is a critical aspect of rock engineering. The development of remote sensing technologies has significantly enhanced the quality and quantity of point clouds collected from rock outcrops. In response, we propose a workflow that balances accuracy and efficiency to extract discontinuities from massive point clouds. The proposed method employs voxel filtering to downsample the point clouds, constructs a point cloud topology using K-d trees, utilizes principal component analysis to calculate the point cloud normals, and employs the pointwise clustering (PWC) algorithm to extract discontinuities from rock outcrop point clouds. This method provides information on the location and orientation (dip direction and dip angle) of the discontinuities, and the modified whale optimization algorithm (MWOA) is utilized to identify major discontinuity sets and their average orientations. Performance evaluations based on three real cases demonstrate that the proposed method significantly reduces computational time without sacrificing accuracy. In particular, the method yields more reasonable extraction results for discontinuities with certain undulations. The presented approach offers a novel tool for efficiently extracting discontinuities from large-scale point clouds.
Keywords: Rock mass discontinuity; 3D point clouds; pointwise clustering (PWC) algorithm; Modified whale optimization algorithm (MWOA)
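The first three stages of this workflow (voxel downsampling, K-d tree neighbourhood search, PCA normal estimation) can be sketched compactly as below; the voxel size and neighbourhood size are assumed values, and the PWC clustering and MWOA stages are not reproduced.

```python
# Voxel downsampling + k-nearest-neighbour PCA normal estimation (illustrative
# parameter values; the PWC clustering and MWOA stages are omitted).
import numpy as np
from scipy.spatial import cKDTree

def voxel_downsample(points, voxel=0.05):
    """Keep one representative point (the centroid) per occupied voxel."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    sums = np.zeros((inverse.max() + 1, 3))
    counts = np.bincount(inverse)
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]

def estimate_normals(points, k=20):
    """Normal of each point = least-variance PCA axis of its k neighbours."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, nb in enumerate(idx):
        nbrs = points[nb] - points[nb].mean(axis=0)
        cov = nbrs.T @ nbrs
        w, v = np.linalg.eigh(cov)          # ascending eigenvalues
        normals[i] = v[:, 0]                # smallest-variance direction
    return normals
```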