To comprehensively utilize the valuable geological map, exploration profile, borehole, and geochemical logging data, together with knowledge of the formation of the Jinshan Ag-Au deposit, for forecasting exploration targets of concealed ore bodies, three-dimensional Mineral Prospectivity Modeling (MPM) of the deposit has been conducted using the weights-of-evidence (WofE) method. Conditional independence between evidence layers was tested, and the outlining results of the prediction-volume (P-V) and Student's t-statistic methods for delineating favorable mineralization areas from the continuous posterior probability map were critically compared. Four exploration targets ultimately delineated by the Student's t-statistic method, and the prospects for discovering minable ore bodies in each target area, were discussed in detail. The main conclusions include: (1) three-dimensional modeling of a deposit using multi-source reconnaissance data is useful in MPM for interpreting the relationships of these data with known ore bodies; (2) WofE modeling can be used as a straightforward tool for integrating the deposit model and reconnaissance data in MPM; (3) the Student's t-statistic method is more applicable than the P-V approach for binarizing the continuous prospectivity map for exploration targeting; and (4) two target areas with high potential for finding undiscovered ore bodies were identified to guide future near-mine exploration activities at the Jinshan deposit.
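For readers unfamiliar with WofE scoring, the following Python sketch shows how layer weights, a studentized contrast, and a voxel posterior probability can be computed from simple presence/absence counts; all counts, the prior, and the two evidence layers are hypothetical and are not taken from the Jinshan model.

```python
# Hypothetical sketch of voxel-based weights-of-evidence (WofE) scoring.
# Counts and layer definitions are illustrative, not from the Jinshan study.
import numpy as np

def wofe_weights(n_total, n_deposit, n_evidence, n_overlap):
    """W+, W- and the studentized contrast for one binary evidence layer.

    n_total    : total number of voxels in the model
    n_deposit  : voxels containing known mineralization
    n_evidence : voxels where the evidence layer is present
    n_overlap  : voxels where evidence and mineralization coincide
    """
    d, nd = n_deposit, n_total - n_deposit
    b_d, b_nd = n_overlap, n_evidence - n_overlap
    nb_d, nb_nd = d - b_d, nd - b_nd
    w_plus = np.log((b_d / d) / (b_nd / nd))
    w_minus = np.log((nb_d / d) / (nb_nd / nd))
    contrast = w_plus - w_minus
    # Approximate variance of the contrast from the four cell counts
    var_c = 1 / b_d + 1 / b_nd + 1 / nb_d + 1 / nb_nd
    return w_plus, w_minus, contrast / np.sqrt(var_c)  # studentized contrast

# Example: combine two evidence layers into a posterior probability for a voxel
prior = 200 / 1_000_000                      # prior probability of mineralization
logit = np.log(prior / (1 - prior))
for counts in [(1_000_000, 200, 50_000, 80), (1_000_000, 200, 120_000, 120)]:
    w_plus, w_minus, t = wofe_weights(*counts)
    logit += w_plus                          # evidence present in this voxel
    print(f"W+={w_plus:.2f}, W-={w_minus:.2f}, studentized contrast={t:.2f}")
posterior = 1 / (1 + np.exp(-logit))
print(f"posterior probability = {posterior:.4f}")
```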
Satellite Component Layout Optimization (SCLO) is crucial in satellite system design. This paper proposes a novel Satellite Three-Dimensional Component Assignment and Layout Optimization (3D-SCALO) problem tailored to engineering requirements, aiming to optimize satellite heat dissipation while considering constraints on static stability, 3D geometric relationships between components, and special component positions. The 3D-SCALO problem is a challenging bilevel combinatorial optimization task, involving the optimization of discrete component assignment variables in the outer layer and continuous component position variables in the inner layer, with the two levels influencing each other. To address this issue, a Mixed Integer Programming (MIP) model is first proposed, which reformulates the original bilevel problem into a single-level optimization problem, enabling the exploration of a more comprehensive optimization space while avoiding iterative nested optimization. Then, to model the 3D geometric relationships between components within the MIP framework, a linearized 3D Phi-function method is proposed, which handles non-overlapping and safety-distance constraints between cuboid components in an explicit and effective way. Subsequently, the Finite-Rectangle Method (FRM) is proposed to manage 3D geometric constraints for complex-shaped components by approximating them with a finite set of cuboids, extending the applicability of the geometric modeling approach. Finally, the feasibility and effectiveness of the proposed MIP model are demonstrated through two numerical examples and a real-world engineering case, confirming its suitability for complex-shaped components and real engineering applications.
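The linearization idea behind such non-overlap constraints can be pictured with a toy big-M disjunction for two rectangular footprints. The sketch below assumes the PuLP package and uses made-up dimensions and a made-up objective; it is only a schematic stand-in for the paper's linearized 3D Phi-function formulation.

```python
# Minimal sketch of the big-M disjunctive trick used to linearize
# non-overlap between two rectangular footprints in a plane.
# Sizes, bounds and the objective are illustrative only. Requires PuLP.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum

M = 100.0                                   # big-M constant
plate, w, h = 10.0, [3.0, 4.0], [2.0, 3.0]  # container size, widths, heights

prob = LpProblem("toy_non_overlap_layout", LpMinimize)
x = [LpVariable(f"x{i}", w[i] / 2, plate - w[i] / 2) for i in range(2)]
y = [LpVariable(f"y{i}", h[i] / 2, plate - h[i] / 2) for i in range(2)]
z = [LpVariable(f"z{k}", cat="Binary") for k in range(4)]  # active separation side

# Component 0 left of 1, right of 1, below 1, or above 1 (at least one holds)
prob += x[0] + w[0] / 2 <= x[1] - w[1] / 2 + M * (1 - z[0])
prob += x[1] + w[1] / 2 <= x[0] - w[0] / 2 + M * (1 - z[1])
prob += y[0] + h[0] / 2 <= y[1] - h[1] / 2 + M * (1 - z[2])
prob += y[1] + h[1] / 2 <= y[0] - h[0] / 2 + M * (1 - z[3])
prob += lpSum(z) >= 1

# Toy objective: pull both components toward the lower-left corner
prob += lpSum(x) + lpSum(y)
prob.solve()
print([(xi.value(), yi.value()) for xi, yi in zip(x, y)])
```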
Currently, there are a limited number of dynamic models available for braided composite plates with large overall motions, despite the incorporation of three-dimensional (3D) braided composites into rotating blade components. In this paper, a dynamic model of 3D 4-directional braided composite thin plates considering braiding directions is established. Based on Kirchhoff's plate assumptions, the displacement variables of the plate are expressed. By incorporating the braiding directions into the constitutive equation of the braided composites, the dynamic model of the plate considering braiding directions is obtained. The effects of the speeds, braiding directions, and braided angles on the responses of the plate with fixed-axis rotation and translational motion, respectively, are investigated. This paper presents a dynamic theory for calculating the deformation of 3D braided composite structures undergoing both translational and rotational motions. It also provides a simulation method for investigating the dynamic behavior of non-isotropic material plates in various applications.
The development of digital twins for geotechnical structures necessitates real-time updates of three-dimensional (3D) virtual models (e.g., numerical finite element method (FEM) models) to accurately predict time-varying geotechnical responses (e.g., consolidation settlement) in a 3D spatial domain. However, traditional 3D numerical model updating approaches are computationally prohibitive, making it difficult to update the 3D responses in real time. To address these challenges, this study proposes a novel machine learning framework called sparse dictionary learning (T-3D-SDL) for real-time updating of time-varying 3D geotechnical responses. In T-3D-SDL, a concerned dataset (e.g., time-varying 3D settlement) is approximated as a linear superposition of dictionary atoms generated from 3D random FEM analyses. Field monitoring data are then used to identify non-trivial atoms and estimate their weights within a Bayesian framework for model updating and prediction. The proposed approach enables the real-time update of temporally varying settlements with high 3D spatial resolution and quantified uncertainty as field monitoring data evolve. The approach is illustrated using an embankment construction project. The results show that it effectively improves settlement predictions along the temporal and 3D spatial dimensions, with minimal latency (e.g., within minutes), as monitoring data appear. In addition, the proposed approach requires only a reasonably small number of 3D FEM model evaluations, avoids the use of widely adopted yet often criticized surrogate models, and effectively addresses the limitations (e.g., computational inefficiency) of existing 3D model updating approaches.
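The dictionary-superposition idea can be illustrated with a small NumPy sketch: a synthetic field is expressed as a weighted sum of atoms, and the weights are re-estimated from a sparse set of noisy "monitoring" observations. The ridge-regularized estimate below is only a stand-in for the Bayesian weight estimation described in the paper, and all sizes and noise levels are made up.

```python
# Schematic illustration of the dictionary idea behind T-3D-SDL: a response
# field is approximated as a linear superposition of atoms from random FEM
# runs, with atom weights estimated from sparse monitoring data.
import numpy as np

rng = np.random.default_rng(0)
n_points, n_atoms = 500, 20                        # spatial points x atoms
D = rng.standard_normal((n_points, n_atoms))       # dictionary from "FEM runs"
w_true = np.zeros(n_atoms); w_true[[2, 7, 11]] = [1.5, -0.8, 0.6]
field_true = D @ w_true                            # "true" settlement field

obs_idx = rng.choice(n_points, size=40, replace=False)   # monitored locations
y = field_true[obs_idx] + 0.05 * rng.standard_normal(obs_idx.size)

# Ridge-regularized weight estimate from the observed subset (a simple
# stand-in for the posterior mean over atom weights)
A = D[obs_idx]
lam = 0.1
w_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_atoms), A.T @ y)

field_hat = D @ w_hat                              # full-field update
rel_err = np.linalg.norm(field_hat - field_true) / np.linalg.norm(field_true)
print(f"relative reconstruction error: {rel_err:.3f}")
```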
Earth’s internal core and crustal magnetic fields, as measured by geomagnetic satellites like MSS-1 (Macao Science Satellite-1) and Swarm, are vital for understanding core dynamics and tectonic evolution. To model these internal magnetic fields accurately, data selection based on specific criteria is often employed to minimize the influence of rapidly changing current systems in the ionosphere and magnetosphere. However, the quantitative impact of various data selection criteria on internal geomagnetic field modeling is not well understood. This study aims to address this issue and provide a reference for constructing and applying geomagnetic field models. First, we collect the latest MSS-1 and Swarm satellite magnetic data and summarize widely used data selection criteria in geomagnetic field modeling. Second, we briefly describe the method to co-estimate the core, crustal, and large-scale magnetospheric fields using satellite magnetic data. Finally, we conduct a series of field modeling experiments with different data selection criteria to quantitatively estimate their influence. Our numerical experiments confirm that without selecting data from dark regions and geomagnetically quiet times, the resulting internal field differences at the Earth’s surface can range from tens to hundreds of nanotesla (nT). Additionally, we find that the uncertainties introduced into field models by different data selection criteria are significantly larger than the measurement accuracy of modern geomagnetic satellites. These uncertainties should be considered when utilizing constructed magnetic field models for scientific research and applications.
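A minimal sketch of what such dark-region, quiet-time selection looks like in practice is given below; the column names and thresholds (Kp, solar zenith angle, |dDst/dt|) are generic examples often seen in internal-field modeling, not the exact criteria tested in this study.

```python
# Minimal sketch of dark-region, quiet-time data selection. Column names and
# thresholds are generic illustrations, not the MSS-1/Swarm criteria used here.
import pandas as pd

def select_quiet_dark(df, kp_max=2.0, sza_min_deg=100.0, dst_rate_max=2.0):
    """Keep records from dark regions during geomagnetically quiet times.

    Assumed columns: 'kp' (3-h Kp index), 'sza' (solar zenith angle, deg),
    'ddst' (|dDst/dt| in nT/h).
    """
    mask = (df["kp"] <= kp_max) & (df["sza"] >= sza_min_deg) & (df["ddst"] <= dst_rate_max)
    return df[mask]

# Toy usage
data = pd.DataFrame({
    "kp":   [0.7, 3.0, 1.3, 2.7],
    "sza":  [120.0, 110.0, 80.0, 130.0],
    "ddst": [0.5, 1.0, 0.3, 5.0],
})
print(select_quiet_dark(data))   # only the first record survives all criteria
```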
This paper addresses urban sustainability challenges amid global urbanization, emphasizing the need for innovative approaches aligned with the Sustainable Development Goals. While traditional tools and linear models offer insights, they fall short in presenting a holistic view of complex urban challenges. System dynamics (SD) models, which are often utilized to provide a holistic, systematic understanding of a research subject such as the urban system, emerge as valuable tools, but data scarcity and theoretical inadequacy pose challenges. The research reviews relevant papers on recent SD model applications in urban sustainability since 2018, categorizing them based on nine key indicators. Among the reviewed papers, data limitations and model assumptions were identified as major challenges in applying SD models to urban sustainability. This led to exploring the transformative potential of big data analytics, a rare approach in this field as identified by this study, to enhance SD models’ empirical foundation. Integrating big data could provide data-driven calibration, potentially improving predictive accuracy and reducing reliance on simplified assumptions. The paper concludes by advocating for new approaches that reduce assumptions and promote real-time applicable models, contributing to a comprehensive understanding of urban sustainability through the synergy of big data and SD models.
Hypoxia is a typical feature of the tumor microenvironment, one of the most critical factors affecting cell behavior and tumor progression. However, the lack of tumor models able to precisely emulate natural brain tumor tissue has impeded the study of the effects of hypoxia on the progression and growth of tumor cells. This study reports a three-dimensional (3D) brain tumor model obtained by encapsulating U87MG (U87) cells in a hydrogel containing type I collagen. It also documents the effect of various oxygen concentrations (1%, 7%, and 21%) in the culture environment on U87 cell morphology, proliferation, viability, cell cycle, apoptosis rate, and migration. Finally, it compares two-dimensional (2D) and 3D cultures. For comparison purposes, cells cultured in flat culture dishes were used as the control (2D model). Cells cultured in the 3D model proliferated more slowly but had a higher apoptosis rate and proportion of cells in the resting phase (G0 phase)/gap I phase (G1 phase) than those cultured in the 2D model. Besides, the two models yielded significantly different cell morphologies. Finally, hypoxia (e.g., 1% O2) affected cell morphology, slowed cell growth, reduced cell viability, and increased the apoptosis rate in the 3D model. These results indicate that the constructed 3D model is effective for investigating the effects of biological and chemical factors on cell morphology and function, and can be more representative of the tumor microenvironment than 2D culture systems. The developed 3D glioblastoma tumor model is equally applicable to other studies in pharmacology and pathology.
The study aimed to develop a customized Data Governance Maturity Model (DGMM) for the Ministry of Defence (MoD) in Kenya to address data governance challenges in military settings. Current frameworks lack specific requirements for the defence industry. The model uses Key Performance Indicators (KPIs) to enhance data governance procedures. Design Science Research guided the study, using qualitative and quantitative methods to gather data from MoD personnel. Major deficiencies were found in data integration, quality control, and adherence to data security regulations. The DGMM helps the MoD improve personnel, procedures, technology, and organizational elements related to data management. The model was tested against ISO/IEC 38500 and recommended for use in other government sectors with similar data governance issues. The DGMM has the potential to enhance data management efficiency, security, and compliance in the MoD and guide further research in military data governance.
DNA microarray technology is an extremely effective technique for studying gene expression patterns in cells, and the main challenge currently faced by this technology is how to analyze the large amount of gene expression data generated. To address this, this paper employs a mixed-effects model to analyze gene expression data. In terms of data selection, 1176 genes from the white mouse gene expression dataset under two experimental conditions were chosen, setting up two conditions: pneumococcal infection and no infection, and constructing a mixed-effects model. After preprocessing the gene chip information, the data were imported into the model, preliminary results were calculated, and permutation tests were performed to biologically validate the preliminary results using GSEA. The final dataset consists of 20 groups of gene expression data from pneumococcal infection, which categorizes functionally related genes based on the similarity of their expression profiles, facilitating the study of genes with unknown functions.
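A mixed-effects fit of this general form (a fixed effect for infection status and a random effect per gene) can be sketched with statsmodels as below; the toy data, formula, and grouping are illustrative and do not reproduce the 1176-gene analysis.

```python
# Minimal mixed-effects sketch: fixed effect for the infection condition,
# random intercept per gene. Toy data only, not the actual microarray analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for g in (f"g{i}" for i in range(30)):
    gene_offset = rng.normal(0, 0.5)                 # random gene effect
    for condition in ("control", "infected"):
        for _ in range(3):                           # 3 replicate arrays
            effect = 0.8 if condition == "infected" else 0.0
            rows.append({"gene": g, "condition": condition,
                         "expression": 5.0 + gene_offset + effect + rng.normal(0, 0.3)})
data = pd.DataFrame(rows)

# Fixed effect: condition; random intercept: gene
model = smf.mixedlm("expression ~ condition", data, groups=data["gene"])
result = model.fit()
print(result.params)      # the condition coefficient should be close to 0.8
```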
Internal multiples are commonly present in seismic data due to variations in the velocity or density of subsurface media. They can reduce the signal-to-noise ratio of seismic data and degrade the quality of the image. As seismic exploration advances toward deep and ultra-deep targets, especially complex structures in the western region of China, internal multiple elimination becomes increasingly challenging. Currently, three-dimensional (3D) seismic data are primarily used for oil and gas target recognition and drilling. Effectively eliminating internal multiples in 3D seismic data of complex structures and mitigating their adverse effects is crucial for enhancing the success rate of drilling. In this study, we propose an internal multiple prediction algorithm for 3D seismic data in complex structures based on the Marchenko autofocusing theory. This method can predict internal multiples with accurate traveltime differences without an accurate velocity model, and its implementation mainly consists of the following steps. Firstly, simulate direct waves with a 3D macroscopic velocity model. Secondly, use the direct waves and the 3D full seismic acquisition records to obtain the up-going and down-going Green's functions between the virtual source point and the surface. Thirdly, construct internal multiples of the relevant layers from the up-going and down-going Green's functions. Finally, use the adaptive matching subtraction method to remove the predicted internal multiples from the original data and obtain seismic records without multiples. Compared with the two-dimensional (2D) Marchenko algorithm, the performance of the 3D Marchenko algorithm for internal multiple prediction is significantly enhanced, resulting in higher computational accuracy. Numerical simulation tests indicate that the proposed method can effectively eliminate internal multiples in 3D seismic data, demonstrating important theoretical and industrial application value.
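Of the steps above, the adaptive matching subtraction is the easiest to sketch in isolation: a short least-squares filter shapes the predicted multiples to the recorded data before subtraction. The NumPy example below uses synthetic 1D traces and a made-up prediction; the 3D Marchenko construction of the multiples themselves is not shown.

```python
# Sketch of the adaptive matching subtraction step only: estimate a short
# least-squares matching filter that shapes predicted multiples to the data,
# then subtract. Traces are synthetic; the Marchenko prediction is not shown.
import numpy as np

rng = np.random.default_rng(2)
n, flen = 400, 11                          # trace length, matching-filter length

primaries = np.zeros(n); primaries[[50, 180]] = [1.0, 0.6]
multiple_true = np.zeros(n); multiple_true[[230, 310]] = [0.4, 0.25]
data = primaries + multiple_true + 0.01 * rng.standard_normal(n)

# Predicted multiples: right events, but wrong amplitude/wavelet
predicted = np.convolve(multiple_true * 1.7, np.hanning(5), mode="same")

# Convolution matrix of the prediction; solve for filter f minimizing |d - Pf|
P = np.column_stack([np.roll(predicted, k) for k in range(flen)])
f, *_ = np.linalg.lstsq(P, data, rcond=None)
matched = P @ f                            # multiples shaped to the data
demultipled = data - matched

print("energy at multiple positions before/after:",
      float(np.sum(data[[230, 310]] ** 2)),
      float(np.sum(demultipled[[230, 310]] ** 2)))
```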
Imputation of missing data has long been an important topic and an essential application for intelligent transportation systems (ITS) in the real world. As a state-of-the-art generative model, the diffusion model has proven highly successful in image generation, speech generation, time series modelling, etc., and now opens a new avenue for traffic data imputation. In this paper, we propose a conditional diffusion model, called the implicit-explicit diffusion model, for traffic data imputation. This model exploits both the implicit and explicit features of the data simultaneously. More specifically, we design two types of feature extraction modules, one to capture the implicit dependencies hidden in the raw data at multiple time scales and the other to obtain the long-term temporal dependencies of the time series. This approach not only inherits the advantages of the diffusion model for estimating missing data, but also takes into account the multiscale correlation inherent in traffic data. To illustrate the performance of the model, extensive experiments are conducted on three real-world time series datasets using different missing rates. The experimental results demonstrate that the model improves imputation accuracy and generalization capability.
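For context, the conditional corruption step that imputation-oriented diffusion models build on can be sketched in a few lines: observed entries are kept as the condition while missing entries are diffused toward noise. The schedule, toy series, and masking below are generic assumptions and do not represent the implicit-explicit feature extraction modules proposed here.

```python
# Generic sketch of the conditional corruption step used by imputation-style
# diffusion models: keep observed entries, diffuse only the missing part.
import numpy as np

rng = np.random.default_rng(3)
T = 100
betas = np.linspace(1e-4, 0.02, T)               # noise schedule
alpha_bar = np.cumprod(1.0 - betas)              # cumulative alpha_bar_t

x0 = np.sin(np.linspace(0, 6 * np.pi, 288))      # one day of 5-min traffic (toy)
observed = rng.random(x0.size) > 0.3             # about 30% missing at random

def q_sample(x0, t, mask):
    """Diffuse the missing entries to step t, keep observed entries intact."""
    eps = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return np.where(mask, x0, x_t)               # condition on observed values

x_t = q_sample(x0, t=80, mask=observed)
print("missing-entry variance after diffusion:", float(x_t[~observed].var()))
```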
With the widespread application of Internet of Things (IoT) technology, the processing of massive real-time streaming data poses significant challenges to the computational and data-processing capabilities of systems. Although distributed streaming data processing frameworks such as Apache Flink and Apache Spark Streaming provide solutions, meeting stringent response time requirements while ensuring high throughput and resource utilization remains an urgent problem. To address this, the study proposes a formal modeling approach based on Performance Evaluation Process Algebra (PEPA), which abstracts the core components and interactions of cloud-based distributed streaming data processing systems. Additionally, a generic service flow generation algorithm is introduced, enabling the automatic extraction of service flows from the PEPA model and the computation of key performance metrics, including response time, throughput, and resource utilization. The novelty of this work lies in the integration of PEPA-based formal modeling with the service flow generation algorithm, bridging the gap between formal modeling and practical performance evaluation for IoT systems. Simulation experiments demonstrate that optimizing the execution efficiency of components can significantly improve system performance. For instance, increasing the task execution rate from 10 to 100 improves system performance by 9.53%, while further increasing it to 200 results in a 21.58% improvement. However, diminishing returns are observed when the execution rate reaches 500, with only a 0.42% gain. Similarly, increasing the number of TaskManagers from 10 to 20 improves response time by 18.49%, but the improvement slows to 6.06% when increasing from 20 to 50, highlighting the importance of co-optimizing component efficiency and resource management to achieve substantial performance gains. This study provides a systematic framework for analyzing and optimizing the performance of IoT systems for large-scale real-time streaming data processing. The proposed approach not only identifies performance bottlenecks but also offers insights into improving system efficiency under different configurations and workloads.
To solve the problems of short network lifetime and high data transmission delay in data gathering for wireless sensor networks (WSNs) caused by uneven energy consumption among nodes, a hybrid energy-efficient clustering routing algorithm based on the firefly and pigeon-inspired algorithms (FF-PIA) is proposed to optimise the data transmission path. After the optimal number of cluster head (CH) nodes is obtained, the result is taken as the basis for producing the initial population of the FF-PIA algorithm. The Lévy flight mechanism and adaptive inertia weighting are employed in the algorithm iteration to balance the contradiction between the global search and the local search. Moreover, a Gaussian perturbation strategy is applied to update the optimal solution, ensuring that the algorithm can jump out of local optima. In addition, for WSN data gathering, a one-dimensional signal reconstruction algorithm model is developed using dilated convolution and residual neural networks (DCRNN). We conducted experiments on the National Oceanic and Atmospheric Administration (NOAA) dataset. The results show that the DCRNN model-driven data reconstruction algorithm improves the reconstruction accuracy as well as the reconstruction time performance. Co-simulation of FF-PIA clustering routing with DCRNN reveals that the proposed algorithm can effectively improve performance in extending the network lifetime and reducing data transmission delay.
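As background, a Lévy-flight step of the kind such hybrids use for global search is typically drawn with Mantegna's algorithm; the sketch below, including the toy position update and Gaussian perturbation, is illustrative and is not the exact FF-PIA update rule.

```python
# Sketch of a Lévy-flight step generator (Mantegna, 1994) plus a toy position
# update and Gaussian perturbation of the best solution. Parameters are
# illustrative only, not the FF-PIA settings.
import math
import numpy as np

def levy_step(dim, beta=1.5, rng=np.random.default_rng(4)):
    """Draw one Lévy-distributed step of dimension `dim`."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

rng = np.random.default_rng(4)
position = rng.random(10)          # candidate cluster-head configuration (toy)
best = rng.random(10)
position += 0.01 * levy_step(10) * (position - best)      # Lévy-flight move
best_perturbed = best + rng.normal(0.0, 0.05, 10)          # Gaussian perturbation
print(position.round(3), best_perturbed.round(3))
```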
The El Niño-Southern Oscillation (ENSO) is a naturally recurring interannual climate fluctuation that affects the global climate system. The advent of deep learning-based approaches has led to transformative changes in ENSO forecasts, resulting in significant progress. However, most deep learning-based ENSO prediction models rely solely on reanalysis data, which can lead to intensity underestimation in long-term forecasts and reduce forecasting skill. To this end, we propose a deep residual-coupled model prediction (Res-CMP) model, which integrates historical reanalysis data and coupled model forecast data for multiyear ENSO prediction. The Res-CMP model is designed as a lightweight model that leverages only short-term reanalysis data and nudging assimilation prediction results of the Community Earth System Model (CESM) for effective prediction of the Niño 3.4 index. We also developed a transfer learning strategy for this model to overcome the limitation of inadequate forecast data. After determining the optimal configuration, which included selecting a suitable transfer learning rate during training, along with the input variables and CESM forecast lengths, Res-CMP demonstrated a high correlation ability for 19-month lead time predictions (correlation coefficients exceeding 0.5). The Res-CMP model also alleviated the spring predictability barrier (SPB). When validated against actual ENSO events, Res-CMP successfully captured the temporal evolution of the Niño 3.4 index during La Niña events (1998/99 and 2020/21) and El Niño events (2009/10 and 2015/16). Our proposed model has the potential to further enhance ENSO prediction performance by using coupled models to assist deep learning methods.
Heterogeneous federated learning (HtFL) has gained significant attention due to its ability to accommodate diverse models and data from distributed combat units. Prototype-based HtFL methods were proposed to reduce the high communication cost of transmitting model parameters. These methods allow the sharing of only class representatives between heterogeneous clients while maintaining privacy. However, existing prototype learning approaches fail to take the data distribution of clients into consideration, which results in suboptimal global prototype learning and insufficient client model personalization capabilities. To address these issues, we propose a fair trainable prototype federated learning (FedFTP) algorithm, which employs a fair sampling training prototype (FSTP) mechanism and a hyperbolic space constraints (HSC) mechanism to enhance the fairness and effectiveness of prototype learning on the server in heterogeneous environments. Furthermore, a local prototype stable update (LPSU) mechanism is proposed as a means of maintaining personalization while promoting global consistency, based on contrastive learning. Comprehensive experimental results demonstrate that FedFTP achieves state-of-the-art performance in HtFL scenarios.
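The basic prototype-sharing mechanism that FedFTP builds on can be sketched as follows: each client sends per-class mean embeddings and the server aggregates them. The count-weighted average below is a plain baseline for illustration and does not implement the FSTP, HSC, or LPSU mechanisms.

```python
# Minimal sketch of prototype sharing in prototype-based HtFL: clients send
# per-class mean embeddings; the server aggregates them into global prototypes.
import numpy as np

rng = np.random.default_rng(5)
n_classes, dim = 3, 8

def local_prototypes(features, labels):
    """Per-class mean embeddings and class counts for one client."""
    protos, counts = {}, {}
    for c in np.unique(labels):
        protos[int(c)] = features[labels == c].mean(axis=0)
        counts[int(c)] = int((labels == c).sum())
    return protos, counts

# Two heterogeneous clients with different (non-IID) class distributions
clients = []
for sizes in ([40, 5, 0], [2, 30, 25]):
    feats, labs = [], []
    for c, n in enumerate(sizes):
        feats.append(rng.normal(loc=c, scale=0.5, size=(n, dim)))
        labs.append(np.full(n, c))
    clients.append(local_prototypes(np.vstack(feats), np.concatenate(labs)))

# Server: count-weighted aggregation of the class prototypes
global_protos = {}
for c in range(n_classes):
    pairs = [(p[c], n[c]) for p, n in clients if c in p]
    total = sum(n for _, n in pairs)
    global_protos[c] = sum(p * n for p, n in pairs) / total
print({c: v.round(2)[:3] for c, v in global_protos.items()})
```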
This paper proposes a multivariate data fusion based quality evaluation model for software talent cultivation. The model constructs a comprehensive ability and quality evaluation index system for college students from the perspective of engineering education, especially software engineering. As for the evaluation method, relying on the behavioral data of students during their school years, we aim to construct an evaluation model that is as objective as possible, effectively weakening the negative impact of personal subjective assumptions on the evaluation results.
In the realm of subsurface flow simulations, deep-learning-based surrogate models have emerged as a promising alternative to traditional simulation methods, especially in addressing complex optimization problems. However, a significant challenge lies in the necessity of numerous high-fidelity training simulations to construct these deep-learning models, which limits their application to field-scale problems. To overcome this limitation, we introduce a training procedure that leverages transfer learning with multi-fidelity training data to construct surrogate models efficiently. The procedure begins with the pre-training of the surrogate model using a relatively larger amount of data that can be efficiently generated from upscaled coarse-scale models. Subsequently, the model parameters are fine-tuned with a much smaller set of high-fidelity simulation data. For the cases considered in this study, this method leads to about a 75% reduction in total computational cost, in comparison with the traditional training approach, without any sacrifice of prediction accuracy. In addition, a dedicated well-control embedding model is introduced to the traditional U-Net architecture to improve the surrogate model's prediction accuracy, which is shown to be particularly effective when dealing with large-scale reservoir models under time-varying well control parameters. Comprehensive results and analyses are presented for the prediction of well rates, pressure, and saturation states of a 3D synthetic reservoir system. Finally, the proposed procedure is applied to a field-scale production optimization problem. The trained surrogate model is shown to provide excellent generalization capabilities during the optimization process, in which the final optimized net present value is much higher than those from the training data ranges.
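The two-stage procedure can be pictured with a toy regression surrogate that is pre-trained on abundant "coarse" data and then fine-tuned at a lower learning rate on a handful of "high-fidelity" samples; the PyTorch sketch below uses a 1D toy problem and a small MLP, not the paper's U-Net with well-control embedding.

```python
# Schematic two-stage (pre-train on coarse data, fine-tune on scarce
# high-fidelity data) training. Network, data and learning rates are toys.
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

def fit(x, y, lr, epochs):
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(net(x), y)
        loss.backward()
        opt.step()
    return loss.item()

x_coarse = torch.linspace(0, 1, 200).unsqueeze(1)
y_coarse = torch.sin(4 * x_coarse) + 0.1          # biased "upscaled" response
x_fine = torch.rand(16, 1)
y_fine = torch.sin(4 * x_fine)                    # scarce high-fidelity response

print("pre-train loss :", fit(x_coarse, y_coarse, lr=1e-2, epochs=500))
print("fine-tune loss :", fit(x_fine, y_fine, lr=1e-3, epochs=200))
```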
Arctic sea ice is an important component of the global climate system and has experienced rapid changes during the past few decades, the prediction of which is a significant application for climate models. In this study, a Localized Error Subspace Transform Kalman Filter is employed in a coupled climate system model (the Flexible Global Ocean–Atmosphere–Land System Model, version f3-L (FGOALS-f3-L)) to assimilate sea-ice concentration (SIC) and sea-ice thickness (SIT) data for melting-season ice predictions. The scheme is applied through the following steps: (1) initialization for generating initial ensembles; (2) analysis for assimilating observed data; (3) adoption for dividing ice states into five thickness categories; (4) forecast for evolving the model; and (5) resampling for updating model uncertainties. Several experiments were conducted to examine the results and impacts of the scheme. Compared with the control experiment, the continuous assimilation experiments (CTNs) indicate that assimilation improves the model SICs and SITs persistently and generates realistic initial conditions. Assimilating SIC+SIT data corrects the spatially overestimated model SITs better than assimilating SIC data alone. The continuous assimilation restart experiments indicate that the initial conditions from the CTNs correct the overestimated marginal SICs and overall SITs remarkably well, as well as the cold biases in the oceanic and atmospheric models. The initial conditions with SIC+SIT assimilated show more reasonable spatial improvements. Nevertheless, the SICs in the central Arctic undergo abnormal summer reductions, probably because the overestimated SITs are reduced in the initial conditions while the strong seasonal-cycle (summer melting) biases remain unchanged. Therefore, since systematic biases are complicated in a coupled system, oceanic and atmospheric assimilations are expected to be required for FGOALS-f3-L to make better ice predictions.
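To give a feel for step (2), a generic ensemble Kalman analysis update is sketched below with NumPy for a toy 1D sea-ice concentration field; the operational scheme is a localized error subspace transform filter, which differs in detail (deterministic transform, localization) from this stochastic-EnKF illustration.

```python
# Generic stochastic EnKF analysis step on a toy 1D SIC field, illustrating
# what "analysis for assimilating observed data" does to an ensemble. This is
# not the LESTKF used in the study.
import numpy as np

rng = np.random.default_rng(6)
n_state, n_obs, n_ens = 100, 20, 30          # grid cells, observed cells, members

truth = np.clip(np.linspace(1.0, 0.2, n_state), 0, 1)     # "true" SIC field
ensemble = np.clip(truth[:, None] + 0.15 * rng.standard_normal((n_state, n_ens)), 0, 1)

obs_idx = rng.choice(n_state, n_obs, replace=False)
obs_err = 0.05
y = truth[obs_idx] + obs_err * rng.standard_normal(n_obs)  # SIC observations

# Ensemble anomalies and Kalman gain K = P H^T (H P H^T + R)^-1
X = ensemble - ensemble.mean(axis=1, keepdims=True)
HX = X[obs_idx]
P_HT = X @ HX.T / (n_ens - 1)
S = HX @ HX.T / (n_ens - 1) + obs_err**2 * np.eye(n_obs)
K = P_HT @ np.linalg.solve(S, np.eye(n_obs))

# Update every member with perturbed observations, then re-clip to [0, 1]
perturbed = y[:, None] + obs_err * rng.standard_normal((n_obs, n_ens))
analysis = np.clip(ensemble + K @ (perturbed - ensemble[obs_idx]), 0, 1)

print("prior RMSE    :", float(np.sqrt(((ensemble.mean(1) - truth) ** 2).mean())))
print("posterior RMSE:", float(np.sqrt(((analysis.mean(1) - truth) ** 2).mean())))
```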
Well logging technology has accumulated a large amount of historical data through four generations of technological development, which forms the basis of well logging big data and digital assets. However, the value of these data has not been well stored, managed, and mined. The development of cloud computing technology provides a rare opportunity for a logging big data private cloud. The traditional petrophysical evaluation and interpretation model has encountered great challenges when facing new evaluation objects, and research on a solution that integrates distributed storage, processing, and learning functions in a logging big data private cloud has not yet been carried out. This study aims to establish a distributed logging big data private cloud platform centered on a unified learning model, which achieves distributed storage and processing of logging big data and facilitates the learning of novel knowledge patterns via a unified logging learning model that integrates physical simulation and data models in a large-scale function space, thus resolving the geo-engineering evaluation problem of geothermal fields. Based on the research idea of "logging big data cloud platform - unified logging learning model - large function space - knowledge learning and discovery - application", the theoretical foundation of the unified learning model, the cloud platform architecture, data storage and learning algorithms, computing power allocation and platform monitoring, platform stability, and data security are analyzed. The designed logging big data cloud platform realizes parallel distributed storage and processing of data and learning algorithms. The feasibility of constructing a well logging big data cloud platform based on a unified learning model of physics and data is analyzed in terms of the structure, ecology, management, and security of the cloud platform. The case study shows that the logging big data cloud platform has obvious technical advantages over traditional logging evaluation methods in terms of knowledge discovery methods, data, software and results sharing, accuracy, speed, and complexity.
The spatial distribution of discontinuities and the size of rock blocks are key indicators for rock mass quality evaluation and rockfall risk assessment. Traditional manual measurement is often dangerous or infeasible on high and steep rock slopes. In contrast, unmanned aerial vehicle (UAV) photogrammetry is not limited by terrain conditions and can efficiently collect high-precision three-dimensional (3D) point clouds of rock masses through all-round, multi-angle photography for rock mass characterization. In this paper, a new method based on a 3D point cloud is proposed for discontinuity identification and refined rock block modeling. The method consists of four steps: (1) establish the point cloud spatial topology, and calculate the point cloud normal vectors and average point spacing based on several machine learning algorithms; (2) extract discontinuities using the density-based spatial clustering of applications with noise (DBSCAN) algorithm and fit the discontinuity planes by combining principal component analysis (PCA) with the natural breaks (NB) method; (3) generate an embedded discontinuity point cloud by inserting points along line segments; and (4) adopt a Poisson reconstruction method for refined rock block modeling. The proposed method was applied to an outcrop of an ultra-high, steep rock slope and compared with the results of previous studies and manual surveys. The results show that the method can eliminate the influence of discontinuity undulations on orientation measurement and describe the local concave-convex characteristics in the modeling of rock blocks. The calculation results are accurate and reliable and can meet the practical requirements of engineering.
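Step (2) of the workflow can be condensed into a short sketch: DBSCAN separates point-cloud patches, and a PCA (SVD) plane fit gives each patch's orientation. The synthetic patches, DBSCAN parameters, and dip-direction convention below are illustrative, and the natural-breaks classification and Poisson reconstruction stages are not shown.

```python
# Condensed sketch of discontinuity extraction: DBSCAN clustering of a point
# cloud followed by a PCA (SVD) plane fit per cluster. Synthetic data only.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(7)

def plane_patch(center, normal, n=300, size=2.0, noise=0.02):
    """Noisy points on a planar patch with a given (unnormalized) normal."""
    normal = np.asarray(normal, dtype=float) / np.linalg.norm(normal)
    u = np.cross(normal, [0.0, 0.0, 1.0]); u /= np.linalg.norm(u)
    v = np.cross(normal, u)
    coeffs = (rng.random((n, 2)) - 0.5) * size
    return center + coeffs @ np.vstack([u, v]) + noise * rng.standard_normal((n, 1)) * normal

cloud = np.vstack([plane_patch([0, 0, 0], [0.3, 0.1, 1.0]),
                   plane_patch([6, 0, 0], [0.8, 0.0, 0.6])])

labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(cloud)

for k in sorted(set(labels) - {-1}):
    pts = cloud[labels == k]
    # PCA plane fit: the direction of least variance is the plane normal
    _, _, vt = np.linalg.svd(pts - pts.mean(axis=0))
    nrm = vt[-1] if vt[-1, 2] >= 0 else -vt[-1]
    dip = np.degrees(np.arccos(nrm[2]))
    dip_dir = np.degrees(np.arctan2(nrm[0], nrm[1])) % 360   # one common convention
    print(f"cluster {k}: {len(pts)} pts, dip {dip:.1f} deg, dip direction {dip_dir:.1f} deg")
```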
基金financially supported by the Ministry of Science and Technology of China(Nos.2022YFF0801201,2021YFC2900300)the National Natural Science Foundation of China(Nos.41872245,U1911202)the Guangdong Basic and Applied Basic Research Foundation(No.2020A1515010666)。
文摘To comprehensively utilize the valuable geological map,exploration profile,borehole,and geochemical logging data and the knowledge on the formation of the Jinshan Ag-Au deposit for forecasting the exploration targets of concealed ore bodies,three-dimensional Mineral Prospectivity Modeling(MPM)of the deposit has been conducted using the weights-of-evidence(WofE)method.Conditional independence between evidence layers was tested,and the outline results using the prediction-volume(P-V)and Student's t-statistic methods for delineating favorable mineralization areas from continuous posterior probability map were critically compared.Four exploration targets delineated ultimately by the Student's t-statistic method for the discovery of minable ore bodies in each of the target areas were discussed in detail.The main conclusions include:(1)three-dimensional modeling of a deposit using multi-source reconnaissance data is useful for MPM in interpreting their relationships with known ore bodies;(2)WofE modeling can be used as a straightforward tool for integrating deposit model and reconnaissance data in MPM;(3)the Student's t-statistic method is more applicable in binarizing the continuous prospectivity map for exploration targeting than the PV approach;and(4)two target areas within high potential to find undiscovered ore bodies were diagnosed to guide future near-mine exploration activities of the Jinshan deposit.
基金supported by the National Natural Science Foundation of China(No.92371206)the Postgraduate Scientific Research Innovation Project of Hunan Province,China(No.CX2023063).
文摘Satellite Component Layout Optimization(SCLO) is crucial in satellite system design.This paper proposes a novel Satellite Three-Dimensional Component Assignment and Layout Optimization(3D-SCALO) problem tailored to engineering requirements, aiming to optimize satellite heat dissipation while considering constraints on static stability, 3D geometric relationships between components, and special component positions. The 3D-SCALO problem is a challenging bilevel combinatorial optimization task, involving the optimization of discrete component assignment variables in the outer layer and continuous component position variables in the inner layer,with both influencing each other. To address this issue, first, a Mixed Integer Programming(MIP) model is proposed, which reformulates the original bilevel problem into a single-level optimization problem, enabling the exploration of a more comprehensive optimization space while avoiding iterative nested optimization. Then, to model the 3D geometric relationships between components within the MIP framework, a linearized 3D Phi-function method is proposed, which handles non-overlapping and safety distance constraints between cuboid components in an explicit and effective way. Subsequently, the Finite-Rectangle Method(FRM) is proposed to manage 3D geometric constraints for complex-shaped components by approximating them with a finite set of cuboids, extending the applicability of the geometric modeling approach. Finally, the feasibility and effectiveness of the proposed MIP model are demonstrated through two numerical examples"and a real-world engineering case, which confirms its suitability for complex-shaped components and real engineering applications.
基金Project supported by the National Natural Science Foundation of China(Nos.12372071 and 12372070)the Aeronautical Science Fund of China(No.2022Z055052001)the Foundation of China Scholarship Council(No.202306830079)。
文摘Currently,there are a limited number of dynamic models available for braided composite plates with large overall motions,despite the incorporation of three-dimensional(3D)braided composites into rotating blade components.In this paper,a dynamic model of 3D 4-directional braided composite thin plates considering braiding directions is established.Based on Kirchhoff's plate assumptions,the displacement variables of the plate are expressed.By incorporating the braiding directions into the constitutive equation of the braided composites,the dynamic model of the plate considering braiding directions is obtained.The effects of the speeds,braiding directions,and braided angles on the responses of the plate with fixed-axis rotation and translational motion,respectively,are investigated.This paper presents a dynamic theory for calculating the deformation of 3D braided composite structures undergoing both translational and rotational motions.It also provides a simulation method for investigating the dynamic behavior of non-isotropic material plates in various applications.
基金supported by a grant from the Research Grant Council of Hong Kong Special Administrative Region(Project No.11207724).
文摘The development of digital twins for geotechnical structures necessitates the real-time updates of threedimensional(3D)virtual models(e.g.numerical finite element method(FEM)model)to accurately predict time-varying geotechnical responses(e.g.consolidation settlement)in a 3D spatial domain.However,traditional 3D numerical model updating approaches are computationally prohibitive and therefore difficult to update the 3D responses in real time.To address these challenges,this study proposes a novel machine learning framework called sparse dictionary learning(T-3D-SDL)for real-time updating of time-varying 3D geotechnical responses.In T-3D-SDL,a concerned dataset(e.g.time-varying 3D settlement)is approximated as a linear superposition of dictionary atoms generated from 3D random FEM analyses.Field monitoring data are then used to identify non-trivial atoms and estimate their weights within a Bayesian framework for model updating and prediction.The proposed approach enables the real-time update of temporally varying settlements with a high 3D spatial resolution and quantified uncertainty as field monitoring data evolve.The proposed approach is illustrated using an embankment construction project.The results show that the proposed approach effectively improves settlement predictions along temporal and 3D spatial dimensions,with minimal latency(e.g.within minutes),as monitoring data appear.In addition,the proposed approach requires only a reasonably small number of 3D FEM model evaluations,avoids the use of widely adopted yet often criticized surrogate models,and effectively addresses the limitations(e.g.computational inefficiency)of existing 3D model updating approaches.
基金supported by the National Natural Science Foundation of China(42250101)the Macao Foundation。
文摘Earth’s internal core and crustal magnetic fields,as measured by geomagnetic satellites like MSS-1(Macao Science Satellite-1)and Swarm,are vital for understanding core dynamics and tectonic evolution.To model these internal magnetic fields accurately,data selection based on specific criteria is often employed to minimize the influence of rapidly changing current systems in the ionosphere and magnetosphere.However,the quantitative impact of various data selection criteria on internal geomagnetic field modeling is not well understood.This study aims to address this issue and provide a reference for constructing and applying geomagnetic field models.First,we collect the latest MSS-1 and Swarm satellite magnetic data and summarize widely used data selection criteria in geomagnetic field modeling.Second,we briefly describe the method to co-estimate the core,crustal,and large-scale magnetospheric fields using satellite magnetic data.Finally,we conduct a series of field modeling experiments with different data selection criteria to quantitatively estimate their influence.Our numerical experiments confirm that without selecting data from dark regions and geomagnetically quiet times,the resulting internal field differences at the Earth’s surface can range from tens to hundreds of nanotesla(nT).Additionally,we find that the uncertainties introduced into field models by different data selection criteria are significantly larger than the measurement accuracy of modern geomagnetic satellites.These uncertainties should be considered when utilizing constructed magnetic field models for scientific research and applications.
基金sponsored by the U.S.Department of Housing and Urban Development(Grant No.NJLTS0027-22)The opinions expressed in this study are the authors alone,and do not represent the U.S.Depart-ment of HUD’s opinions.
文摘This paper addresses urban sustainability challenges amid global urbanization, emphasizing the need for innova tive approaches aligned with the Sustainable Development Goals. While traditional tools and linear models offer insights, they fall short in presenting a holistic view of complex urban challenges. System dynamics (SD) models that are often utilized to provide holistic, systematic understanding of a research subject, like the urban system, emerge as valuable tools, but data scarcity and theoretical inadequacy pose challenges. The research reviews relevant papers on recent SD model applications in urban sustainability since 2018, categorizing them based on nine key indicators. Among the reviewed papers, data limitations and model assumptions were identified as ma jor challenges in applying SD models to urban sustainability. This led to exploring the transformative potential of big data analytics, a rare approach in this field as identified by this study, to enhance SD models’ empirical foundation. Integrating big data could provide data-driven calibration, potentially improving predictive accuracy and reducing reliance on simplified assumptions. The paper concludes by advocating for new approaches that reduce assumptions and promote real-time applicable models, contributing to a comprehensive understanding of urban sustainability through the synergy of big data and SD models.
基金supported by the National Natural Science Foundation of China (No. 52275291)the Fundamental Research Funds for the Central Universitiesthe Program for Innovation Team of Shaanxi Province,China (No. 2023-CX-TD-17)
文摘Hypoxia is a typical feature of the tumor microenvironment,one of the most critical factors affecting cell behavior and tumor progression.However,the lack of tumor models able to precisely emulate natural brain tumor tissue has impeded the study of the effects of hypoxia on the progression and growth of tumor cells.This study reports a three-dimensional(3D)brain tumor model obtained by encapsulating U87MG(U87)cells in a hydrogel containing type I collagen.It also documents the effect of various oxygen concentrations(1%,7%,and 21%)in the culture environment on U87 cell morphology,proliferation,viability,cell cycle,apoptosis rate,and migration.Finally,it compares two-dimensional(2D)and 3D cultures.For comparison purposes,cells cultured in flat culture dishes were used as the control(2D model).Cells cultured in the 3D model proliferated more slowly but had a higher apoptosis rate and proportion of cells in the resting phase(G0 phase)/gap I phase(G1 phase)than those cultured in the 2D model.Besides,the two models yielded significantly different cell morphologies.Finally,hypoxia(e.g.,1%O2)affected cell morphology,slowed cell growth,reduced cell viability,and increased the apoptosis rate in the 3D model.These results indicate that the constructed 3D model is effective for investigating the effects of biological and chemical factors on cell morphology and function,and can be more representative of the tumor microenvironment than 2D culture systems.The developed 3D glioblastoma tumor model is equally applicable to other studies in pharmacology and pathology.
文摘The study aimed to develop a customized Data Governance Maturity Model (DGMM) for the Ministry of Defence (MoD) in Kenya to address data governance challenges in military settings. Current frameworks lack specific requirements for the defence industry. The model uses Key Performance Indicators (KPIs) to enhance data governance procedures. Design Science Research guided the study, using qualitative and quantitative methods to gather data from MoD personnel. Major deficiencies were found in data integration, quality control, and adherence to data security regulations. The DGMM helps the MOD improve personnel, procedures, technology, and organizational elements related to data management. The model was tested against ISO/IEC 38500 and recommended for use in other government sectors with similar data governance issues. The DGMM has the potential to enhance data management efficiency, security, and compliance in the MOD and guide further research in military data governance.
文摘DNA microarray technology is an extremely effective technique for studying gene expression patterns in cells, and the main challenge currently faced by this technology is how to analyze the large amount of gene expression data generated. To address this, this paper employs a mixed-effects model to analyze gene expression data. In terms of data selection, 1176 genes from the white mouse gene expression dataset under two experimental conditions were chosen, setting up two conditions: pneumococcal infection and no infection, and constructing a mixed-effects model. After preprocessing the gene chip information, the data were imported into the model, preliminary results were calculated, and permutation tests were performed to biologically validate the preliminary results using GSEA. The final dataset consists of 20 groups of gene expression data from pneumococcal infection, which categorizes functionally related genes based on the similarity of their expression profiles, facilitating the study of genes with unknown functions.
文摘Internal multiples are commonly present in seismic data due to variations in velocity or density of subsurface media.They can reduce the signal-to-noise ratio of seismic data and degrade the quality of the image.With the development of seismic exploration into deep and ultradeep events,especially those from complex targets in the western region of China,the internal multiple eliminations become increasingly challenging.Currently,three-dimensional(3D)seismic data are primarily used for oil and gas target recognition and drilling.Effectively eliminating internal multiples in 3D seismic data of complex structures and mitigating their adverse effects is crucial for enhancing the success rate of drilling.In this study,we propose an internal multiple prediction algorithm for 3D seismic data in complex structures using the Marchenko autofocusing theory.This method can predict the accurate internal multiples of time difference without an accurate velocity model and the implementation process mainly consists of several steps.Firstly,simulating direct waves with a 3D macroscopic velocity model.Secondly,using direct waves and 3D full seismic acquisition records to obtain the upgoing and down-going Green's functions between the virtual source point and surface.Thirdly,constructing internal multiples of the relevant layers by upgoing and downgoing Green's functions.Finally,utilizing the adaptive matching subtraction method to remove predicted internal multiples from the original data to obtain seismic records without multiples.Compared with the two-dimensional(2D)Marchenko algo-rithm,the performance of the 3D Marchenko algorithm for internal multiple prediction has been significantly enhanced,resulting in higher computational accuracy.Numerical simulation test results indicate that our proposed method can effectively eliminate internal multiples in 3D seismic data,thereby exhibiting important theoretical and industrial application value.
基金partially supported by the National Natural Science Foundation of China(62271485)the SDHS Science and Technology Project(HS2023B044)
文摘Imputation of missing data has long been an important topic and an essential application for intelligent transportation systems(ITS)in the real world.As a state-of-the-art generative model,the diffusion model has proven highly successful in image generation,speech generation,time series modelling etc.and now opens a new avenue for traffic data imputation.In this paper,we propose a conditional diffusion model,called the implicit-explicit diffusion model,for traffic data imputation.This model exploits both the implicit and explicit feature of the data simultaneously.More specifically,we design two types of feature extraction modules,one to capture the implicit dependencies hidden in the raw data at multiple time scales and the other to obtain the long-term temporal dependencies of the time series.This approach not only inherits the advantages of the diffusion model for estimating missing data,but also takes into account the multiscale correlation inherent in traffic data.To illustrate the performance of the model,extensive experiments are conducted on three real-world time series datasets using different missing rates.The experimental results demonstrate that the model improves imputation accuracy and generalization capability.
基金funded by the Joint Project of Industry-University-Research of Jiangsu Province(Grant:BY20231146).
文摘With the widespread application of Internet of Things(IoT)technology,the processing of massive realtime streaming data poses significant challenges to the computational and data-processing capabilities of systems.Although distributed streaming data processing frameworks such asApache Flink andApache Spark Streaming provide solutions,meeting stringent response time requirements while ensuring high throughput and resource utilization remains an urgent problem.To address this,the study proposes a formal modeling approach based on Performance Evaluation Process Algebra(PEPA),which abstracts the core components and interactions of cloud-based distributed streaming data processing systems.Additionally,a generic service flow generation algorithmis introduced,enabling the automatic extraction of service flows fromthe PEPAmodel and the computation of key performance metrics,including response time,throughput,and resource utilization.The novelty of this work lies in the integration of PEPA-based formal modeling with the service flow generation algorithm,bridging the gap between formal modeling and practical performance evaluation for IoT systems.Simulation experiments demonstrate that optimizing the execution efficiency of components can significantly improve system performance.For instance,increasing the task execution rate from 10 to 100 improves system performance by 9.53%,while further increasing it to 200 results in a 21.58%improvement.However,diminishing returns are observed when the execution rate reaches 500,with only a 0.42%gain.Similarly,increasing the number of TaskManagers from 10 to 20 improves response time by 18.49%,but the improvement slows to 6.06% when increasing from 20 to 50,highlighting the importance of co-optimizing component efficiency and resource management to achieve substantial performance gains.This study provides a systematic framework for analyzing and optimizing the performance of IoT systems for large-scale real-time streaming data processing.The proposed approach not only identifies performance bottlenecks but also offers insights into improving system efficiency under different configurations and workloads.
基金partially supported by the National Natural Science Foundation of China(62161016)the Key Research and Development Project of Lanzhou Jiaotong University(ZDYF2304)+1 种基金the Beijing Engineering Research Center of Highvelocity Railway Broadband Mobile Communications(BHRC-2022-1)Beijing Jiaotong University。
文摘In order to solve the problems of short network lifetime and high data transmission delay in data gathering for wireless sensor network(WSN)caused by uneven energy consumption among nodes,a hybrid energy efficient clustering routing base on firefly and pigeon-inspired algorithm(FF-PIA)is proposed to optimise the data transmission path.After having obtained the optimal number of cluster head node(CH),its result might be taken as the basis of producing the initial population of FF-PIA algorithm.The L′evy flight mechanism and adaptive inertia weighting are employed in the algorithm iteration to balance the contradiction between the global search and the local search.Moreover,a Gaussian perturbation strategy is applied to update the optimal solution,ensuring the algorithm can jump out of the local optimal solution.And,in the WSN data gathering,a onedimensional signal reconstruction algorithm model is developed by dilated convolution and residual neural networks(DCRNN).We conducted experiments on the National Oceanic and Atmospheric Administration(NOAA)dataset.It shows that the DCRNN modeldriven data reconstruction algorithm improves the reconstruction accuracy as well as the reconstruction time performance.FF-PIA and DCRNN clustering routing co-simulation reveals that the proposed algorithm can effectively improve the performance in extending the network lifetime and reducing data transmission delay.
基金The National Key Research and Development Program of China under contract Nos 2024YFF0808900,2023YFF0805300,and 2020YFA0608804the Civilian Space Programme of China under contract No.D040305.
文摘The El Niño-Southern Oscillation(ENSO)is a naturally recurring interannual climate fluctuation that affects the global climate system.The advent of deep learning-based approaches has led to transformative changes in ENSO forecasts,resulting in significant progress.Most deep learning-based ENSO prediction models which primarily rely solely on reanalysis data may lead to challenges in intensity underestimation in long-term forecasts,reducing the forecasting skills.To this end,we propose a deep residual-coupled model prediction(Res-CMP)model,which integrates historical reanalysis data and coupled model forecast data for multiyear ENSO prediction.The Res-CMP model is designed as a lightweight model that leverages only short-term reanalysis data and nudging assimilation prediction results of the Community Earth System Model(CESM)for effective prediction of the Niño 3.4 index.We also developed a transfer learning strategy for this model to overcome the limitations of inadequate forecast data.After determining the optimal configuration,which included selecting a suitable transfer learning rate during training,along with input variables and CESM forecast lengths,Res-CMP demonstrated a high correlation ability for 19-month lead time predictions(correlation coefficients exceeding 0.5).The Res-CMP model also alleviated the spring predictability barrier(SPB).When validated against actual ENSO events,Res-CMP successfully captured the temporal evolution of the Niño 3.4 index during La Niña events(1998/99 and 2020/21)and El Niño events(2009/10 and 2015/16).Our proposed model has the potential to further enhance ENSO prediction performance by using coupled models to assist deep learning methods.
Funding: supported by the Natural Science Foundation of Xinjiang Uygur Autonomous Region (No. 2022D01B187).
Abstract: Heterogeneous federated learning (HtFL) has gained significant attention due to its ability to accommodate diverse models and data from distributed combat units. Prototype-based HtFL methods have been proposed to reduce the high communication cost of transmitting model parameters; they allow only class representatives to be shared between heterogeneous clients while maintaining privacy. However, existing prototype learning approaches fail to take the data distribution of clients into consideration, which results in suboptimal global prototype learning and insufficient client-model personalization. To address these issues, we propose a fair trainable prototype federated learning (FedFTP) algorithm, which employs a fair sampling training prototype (FSTP) mechanism and a hyperbolic space constraints (HSC) mechanism to enhance the fairness and effectiveness of prototype learning on the server in heterogeneous environments. Furthermore, a local prototype stable update (LPSU) mechanism based on contrastive learning is proposed to maintain personalization while promoting global consistency. Comprehensive experimental results demonstrate that FedFTP achieves state-of-the-art performance in HtFL scenarios.
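The FSTP, HSC, and LPSU mechanisms are not specified in the abstract, so the snippet below only sketches the general prototype-based HtFL pattern it builds on: each client uploads per-class mean embeddings, and the server aggregates them, here with a simple per-client weight cap as a stand-in for "fair" treatment of clients with skewed data. The cap and all names are assumptions, not the FedFTP algorithm itself.

```python
import numpy as np

def aggregate_prototypes(client_protos, client_counts, cap_per_class=None):
    """Illustrative server-side prototype aggregation.

    client_protos: {client_id: {class_id: np.ndarray(dim)}}  per-class mean embeddings
    client_counts: {client_id: {class_id: int}}              local sample counts

    A naive count-weighted average lets data-rich clients dominate; capping
    each client's effective weight per class (cap_per_class) is one simple way
    to keep clients comparable and is an assumption, not the paper's FSTP rule.
    """
    classes = {c for protos in client_protos.values() for c in protos}
    global_protos = {}
    for c in classes:
        vecs, weights = [], []
        for cid, protos in client_protos.items():
            if c in protos:
                n = client_counts[cid][c]
                if cap_per_class is not None:
                    n = min(n, cap_per_class)     # limit any single client's influence
                vecs.append(protos[c])
                weights.append(float(n))
        global_protos[c] = np.average(np.stack(vecs), axis=0, weights=np.asarray(weights))
    return global_protos
```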
Funding: supported in part by the Education Reform Key Projects of Heilongjiang Province (Grant Nos. SJGZ20220011 and SJGZ20220012) and the Excellent Project of the Ministry of Education and China Higher Education Association on Digital Ideological and Political Education in Universities (Grant No. GXSZSZJPXM001).
Abstract: This paper proposes a quality evaluation model for software talent cultivation based on multivariate data fusion. The model constructs a comprehensive ability and quality evaluation index system for college students from the perspective of engineering education, especially software engineering. As for the evaluation method, the model relies on students' behavioral data during their school years to make the evaluation as objective as possible, effectively weakening the negative impact of subjective assumptions on the evaluation results.
Funding: funding support from the National Natural Science Foundation of China (Nos. 52204065 and ZX20230398) and a grant from the Human Resources Development Program (No. 20216110100070) of the Korea Institute of Energy Technology Evaluation and Planning (KETEP).
Abstract: In the realm of subsurface flow simulation, deep-learning-based surrogate models have emerged as a promising alternative to traditional simulation methods, especially for complex optimization problems. However, a significant challenge lies in the large number of high-fidelity training simulations needed to construct these deep-learning models, which limits their application to field-scale problems. To overcome this limitation, we introduce a training procedure that leverages transfer learning with multi-fidelity training data to construct surrogate models efficiently. The procedure begins with pre-training of the surrogate model on a relatively large amount of data that can be generated efficiently from upscaled coarse-scale models. Subsequently, the model parameters are fine-tuned with a much smaller set of high-fidelity simulation data. For the cases considered in this study, this method leads to about a 75% reduction in total computational cost compared with the traditional training approach, without any sacrifice in prediction accuracy. In addition, a dedicated well-control embedding model is introduced into the traditional U-Net architecture to improve the surrogate model's prediction accuracy, which is shown to be particularly effective for large-scale reservoir models under time-varying well-control parameters. Comprehensive results and analyses are presented for the prediction of well rates, pressure, and saturation states of a 3D synthetic reservoir system. Finally, the proposed procedure is applied to a field-scale production optimization problem. The trained surrogate model provides excellent generalization during the optimization process, in which the final optimized net present value is much higher than those in the training data range.
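The two-stage multi-fidelity procedure described above (pre-train on abundant coarse-scale data, then fine-tune on a small high-fidelity set) can be illustrated with the minimal PyTorch training loop below. The epoch counts, learning rates, and loss choice are assumptions for the sketch, not the paper's settings, and the `model` and data loaders are whatever surrogate and datasets the user supplies.

```python
import torch

def multifidelity_training(model, coarse_loader, fine_loader,
                           pre_epochs=100, ft_epochs=20,
                           pre_lr=1e-3, ft_lr=1e-4, device="cpu"):
    """Illustrative two-stage transfer-learning procedure: pre-train the
    surrogate on upscaled coarse-scale simulations, then fine-tune on a small
    set of high-fidelity runs with a lower learning rate (values assumed)."""
    loss_fn = torch.nn.MSELoss()
    model.to(device)
    stages = [(coarse_loader, pre_epochs, pre_lr),   # stage 1: coarse pre-training
              (fine_loader, ft_epochs, ft_lr)]       # stage 2: high-fidelity fine-tuning
    for loader, epochs, lr in stages:
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):
            for x, y in loader:
                x, y = x.to(device), y.to(device)
                opt.zero_grad()
                loss = loss_fn(model(x), y)
                loss.backward()
                opt.step()
    return model
```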
Funding: jointly funded by the National Natural Science Foundation of China (NSFC) [grant number 42130608] and the China Postdoctoral Science Foundation [grant number 2024M753169].
Abstract: Arctic sea ice is an important component of the global climate system and has experienced rapid changes during the past few decades; predicting these changes is a significant application for climate models. In this study, a Localized Error Subspace Transform Kalman Filter is employed in a coupled climate system model (the Flexible Global Ocean–Atmosphere–Land System Model, version f3-L (FGOALS-f3-L)) to assimilate sea-ice concentration (SIC) and sea-ice thickness (SIT) data for melting-season ice predictions. The scheme is applied through the following steps: (1) initialization, generating the initial ensembles; (2) analysis, assimilating the observed data; (3) adoption, dividing ice states into five thickness categories; (4) forecast, evolving the model; and (5) resampling, updating the model uncertainties. Several experiments were conducted to examine the results and impacts. Compared with the control experiment, the continuous assimilation experiments (CTNs) indicate that assimilation persistently improves the modeled SICs and SITs and generates realistic initial conditions. Assimilating SIC+SIT data corrects the spatially overestimated model SITs better than assimilating SIC data alone. The continuous assimilation restart experiments indicate that the initial conditions from the CTNs correct the overestimated marginal SICs and overall SITs remarkably well, as well as the cold biases in the oceanic and atmospheric models. The initial conditions with SIC+SIT assimilated show more reasonable spatial improvements. Nevertheless, the SICs in the central Arctic undergo abnormal summer reductions, probably because the overestimated SITs are reduced in the initial conditions while the strong seasonal-cycle (summer melting) biases remain unchanged. Therefore, since systematic biases are complicated in a coupled system, oceanic and atmospheric assimilations are also expected to be required for FGOALS-f3-L to make better ice predictions.
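To make the analysis step in (2) concrete, the sketch below shows a generic stochastic ensemble Kalman filter update on a toy state vector. This is only a simplified stand-in for the ensemble-based idea; the study itself uses the more involved Localized Error Subspace Transform Kalman Filter with thickness categories and localization, none of which is reproduced here.

```python
import numpy as np

def enkf_analysis(ensemble, obs, H, obs_err_std, rng=None):
    """Minimal stochastic EnKF analysis step (illustrative only).

    ensemble : (n_members, n_state) model states (e.g. gridded SIC/SIT fields)
    obs      : (n_obs,) observation vector
    H        : (n_obs, n_state) linear observation operator
    """
    rng = np.random.default_rng() if rng is None else rng
    n_members = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)            # state anomalies
    Y = X @ H.T                                     # anomalies in observation space
    R = (obs_err_std ** 2) * np.eye(len(obs))       # observation-error covariance
    Pyy = Y.T @ Y / (n_members - 1) + R
    Pxy = X.T @ Y / (n_members - 1)
    K = Pxy @ np.linalg.inv(Pyy)                    # Kalman gain
    perturbed_obs = obs + rng.normal(0.0, obs_err_std, (n_members, len(obs)))
    innovations = perturbed_obs - ensemble @ H.T
    return ensemble + innovations @ K.T             # analysis ensemble
```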
Funding: supported by Grant PLN2022-14 of the State Key Laboratory of Oil and Gas Reservoir Geology and Exploitation (Southwest Petroleum University).
Abstract: Well logging technology has accumulated a large amount of historical data through four generations of technological development, which forms the basis of well logging big data and digital assets. However, the value of these data has not been well stored, managed, or mined. The development of cloud computing technology provides a rare opportunity for a logging big-data private cloud. The traditional petrophysical evaluation and interpretation model has encountered great challenges when faced with new evaluation objects, and research on integrating the distributed storage, processing, and learning functions of logging big data into a logging big-data private cloud has not yet been carried out. This study establishes a distributed logging big-data private cloud platform centered on a unified learning model, which achieves distributed storage and processing of logging big data and facilitates the learning of novel knowledge patterns via a unified logging learning model that integrates physical simulation and data models in a large-scale function space, thereby addressing the geo-engineering evaluation problem of geothermal fields. Following the research line of "logging big-data cloud platform - unified logging learning model - large function space - knowledge learning and discovery - application", the theoretical foundation of the unified learning model, the cloud platform architecture, data storage and learning algorithms, computing power allocation and platform monitoring, platform stability, and data security are analyzed. The designed logging big-data cloud platform realizes parallel distributed storage and processing of data and learning algorithms. The feasibility of constructing a well logging big-data cloud platform based on a unified learning model of physics and data is analyzed in terms of the structure, ecology, management, and security of the cloud platform. The case study shows that the logging big-data cloud platform has obvious technical advantages over traditional logging evaluation methods in terms of knowledge discovery methods, sharing of data, software, and results, accuracy, speed, and the handling of complexity.
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 41941017 and 42177139) and the Graduate Innovation Fund of Jilin University (Grant No. 2024CX099).
Abstract: The spatial distribution of discontinuities and the size of rock blocks are key indicators for rock mass quality evaluation and rockfall risk assessment. Traditional manual measurement is often dangerous or infeasible on high and steep rock slopes. In contrast, unmanned aerial vehicle (UAV) photogrammetry is not limited by terrain conditions and can efficiently collect high-precision three-dimensional (3D) point clouds of rock masses through all-round, multi-angle photography for rock mass characterization. In this paper, a new method based on 3D point clouds is proposed for discontinuity identification and refined rock block modeling. The method comprises four steps: (1) establish the spatial topology of the point cloud and calculate the point-cloud normal vectors and average point spacing using several machine learning algorithms; (2) extract discontinuities using the density-based spatial clustering of applications with noise (DBSCAN) algorithm and fit the discontinuity planes by combining principal component analysis (PCA) with the natural breaks (NB) method; (3) generate an embedded discontinuity point cloud by inserting points along line segments; and (4) adopt Poisson reconstruction for refined rock block modeling. The proposed method was applied to an outcrop of an ultra-high and steep rock slope and compared with the results of previous studies and manual surveys. The results show that the method can eliminate the influence of discontinuity undulations on orientation measurements and capture the local concave-convex characteristics in the modeling of rock blocks. The calculation results are accurate and reliable and can meet the practical requirements of engineering.
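Step (2) of the workflow can be illustrated with a short sketch: cluster the point normals with DBSCAN to separate discontinuity sets, then fit each cluster's plane with PCA and convert the plane normal to dip/dip direction. The `eps` and `min_samples` values are assumptions, and the natural-breaks refinement mentioned in the abstract is omitted, so this is not the authors' exact procedure.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def extract_discontinuity_planes(points, normals, eps=0.05, min_samples=50):
    """Cluster points by their unit normal vectors (DBSCAN), then fit a plane
    to each cluster with PCA; the singular vector with the smallest singular
    value is taken as the best-fit plane normal. Parameters are illustrative."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(normals)
    planes = []
    for lab in set(labels) - {-1}:                    # label -1 marks noise points
        cluster = points[labels == lab]
        centroid = cluster.mean(axis=0)
        _, _, vt = np.linalg.svd(cluster - centroid, full_matrices=False)
        normal = vt[-1]
        if normal[2] < 0:                             # make the normal point upward
            normal = -normal
        dip = np.degrees(np.arccos(normal[2]))
        dip_direction = (np.degrees(np.arctan2(normal[0], normal[1])) + 360) % 360
        planes.append({"label": int(lab), "centroid": centroid, "normal": normal,
                       "dip": dip, "dip_direction": dip_direction})
    return planes
```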