Journal Articles
30,806 articles found
1. Influence of different data selection criteria on internal geomagnetic field modeling (cited by 4)
Authors: HongBo Yao, JuYuan Xu, Yi Jiang, Qing Yan, Liang Yin, PengFei Liu. Earth and Planetary Physics, 2025, Issue 3, pp. 541-549 (9 pages)
Earth's internal core and crustal magnetic fields, as measured by geomagnetic satellites like MSS-1 (Macao Science Satellite-1) and Swarm, are vital for understanding core dynamics and tectonic evolution. To model these internal magnetic fields accurately, data selection based on specific criteria is often employed to minimize the influence of rapidly changing current systems in the ionosphere and magnetosphere. However, the quantitative impact of various data selection criteria on internal geomagnetic field modeling is not well understood. This study aims to address this issue and provide a reference for constructing and applying geomagnetic field models. First, we collect the latest MSS-1 and Swarm satellite magnetic data and summarize widely used data selection criteria in geomagnetic field modeling. Second, we briefly describe the method to co-estimate the core, crustal, and large-scale magnetospheric fields using satellite magnetic data. Finally, we conduct a series of field modeling experiments with different data selection criteria to quantitatively estimate their influence. Our numerical experiments confirm that without selecting data from dark regions and geomagnetically quiet times, the resulting internal field differences at the Earth's surface can range from tens to hundreds of nanotesla (nT). Additionally, we find that the uncertainties introduced into field models by different data selection criteria are significantly larger than the measurement accuracy of modern geomagnetic satellites. These uncertainties should be considered when utilizing constructed magnetic field models for scientific research and applications.
Keywords: Macao Science Satellite-1; Swarm; geomagnetic field modeling; data selection; core field; crustal field
2. Smart cities, smart systems: A comprehensive review of system dynamics model applications in urban studies in the big data era (cited by 1)
Authors: Gift Fabolude, Charles Knoble, Anvy Vu, Danlin Yu. Geography and Sustainability, 2025, Issue 1, pp. 25-36 (12 pages)
This paper addresses urban sustainability challenges amid global urbanization, emphasizing the need for innovative approaches aligned with the Sustainable Development Goals. While traditional tools and linear models offer insights, they fall short in presenting a holistic view of complex urban challenges. System dynamics (SD) models, often utilized to provide a holistic, systematic understanding of a research subject such as the urban system, emerge as valuable tools, but data scarcity and theoretical inadequacy pose challenges. The research reviews relevant papers on recent SD model applications in urban sustainability since 2018, categorizing them based on nine key indicators. Among the reviewed papers, data limitations and model assumptions were identified as major challenges in applying SD models to urban sustainability. This led to exploring the transformative potential of big data analytics, a rare approach in this field as identified by this study, to enhance SD models' empirical foundation. Integrating big data could provide data-driven calibration, potentially improving predictive accuracy and reducing reliance on simplified assumptions. The paper concludes by advocating for new approaches that reduce assumptions and promote real-time applicable models, contributing to a comprehensive understanding of urban sustainability through the synergy of big data and SD models.
Keywords: Urban sustainability; Smart cities; System dynamics models; Big data analytics; Urban system complexity; Data-driven urbanism
3. Designing a Comprehensive Data Governance Maturity Model for Kenya Ministry of Defence
Authors: Gilly Gitahi Gathogo, Simon Maina Karume, Josphat Karani. Journal of Information Security, 2025, Issue 1, pp. 44-69 (26 pages)
The study aimed to develop a customized Data Governance Maturity Model (DGMM) for the Ministry of Defence (MoD) in Kenya to address data governance challenges in military settings. Current frameworks lack specific requirements for the defence industry. The model uses Key Performance Indicators (KPIs) to enhance data governance procedures. Design Science Research guided the study, using qualitative and quantitative methods to gather data from MoD personnel. Major deficiencies were found in data integration, quality control, and adherence to data security regulations. The DGMM helps the MoD improve personnel, procedures, technology, and organizational elements related to data management. The model was tested against ISO/IEC 38500 and recommended for use in other government sectors with similar data governance issues. The DGMM has the potential to enhance data management efficiency, security, and compliance in the MoD and to guide further research in military data governance.
Keywords: Data governance; Maturity model; Maturity index; Kenya Ministry of Defence; Key Performance Indicators; Data security regulations
4. Gene Expression Data Analysis Based on Mixed Effects Model
Author: Yuanbo Dai. Journal of Computer and Communications, 2025, Issue 2, pp. 223-235 (13 pages)
DNA microarray technology is an extremely effective technique for studying gene expression patterns in cells, and the main challenge currently faced by this technology is how to analyze the large amount of gene expression data generated. To address this, this paper employs a mixed-effects model to analyze gene expression data. For data selection, 1176 genes from a mouse gene expression dataset under two experimental conditions were chosen, setting up two conditions, pneumococcal infection and no infection, and a mixed-effects model was constructed. After preprocessing the gene chip information, the data were imported into the model, preliminary results were calculated, and permutation tests were performed, with GSEA used to biologically validate the preliminary results. The final dataset consists of 20 groups of gene expression data from pneumococcal infection, which categorizes functionally related genes based on the similarity of their expression profiles, facilitating the study of genes with unknown functions.
Keywords: Mixed effects model; Gene expression data analysis; Gene analysis; Gene chip
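The permutation-testing step mentioned in this abstract can be illustrated with a small, stdlib-only sketch. The expression values and group labels below are invented for illustration and are not from the paper's dataset:

```python
import random

def perm_test(group_a, group_b, n_perm=10000, seed=0):
    """Two-sample permutation test on the absolute difference of means."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random relabeling of the pooled samples
        diff = abs(sum(pooled[:n_a]) / n_a
                   - sum(pooled[n_a:]) / (len(pooled) - n_a))
        if diff >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one smoothing avoids p = 0

# Hypothetical expression values for one gene under the two conditions
infected = [5.1, 4.8, 5.6, 5.9, 5.3]
control = [3.2, 3.9, 3.5, 3.1, 3.8]
p = perm_test(infected, control)
```

With clearly separated groups like these, very few relabelings reach the observed difference, so the p-value comes out small.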
5. A Diffusion Model for Traffic Data Imputation
Authors: Bo Lu, Qinghai Miao, Yahui Liu, Tariku Sinshaw Tamir, Hongxia Zhao, Xiqiao Zhang, Yisheng Lv, Fei-Yue Wang. IEEE/CAA Journal of Automatica Sinica, 2025, Issue 3, pp. 606-617 (12 pages)
Imputation of missing data has long been an important topic and an essential application for intelligent transportation systems (ITS) in the real world. As a state-of-the-art generative model, the diffusion model has proven highly successful in image generation, speech generation, time series modelling, etc., and now opens a new avenue for traffic data imputation. In this paper, we propose a conditional diffusion model, called the implicit-explicit diffusion model, for traffic data imputation. This model exploits both the implicit and explicit features of the data simultaneously. More specifically, we design two types of feature extraction modules, one to capture the implicit dependencies hidden in the raw data at multiple time scales and the other to obtain the long-term temporal dependencies of the time series. This approach not only inherits the advantages of the diffusion model for estimating missing data, but also takes into account the multiscale correlation inherent in traffic data. To illustrate the performance of the model, extensive experiments are conducted on three real-world time series datasets using different missing rates. The experimental results demonstrate that the model improves imputation accuracy and generalization capability.
Keywords: Data imputation; Diffusion model; Implicit feature; Time series; Traffic data
6. Modeling and Performance Evaluation of Streaming Data Processing System in IoT Architecture
Authors: Feng Zhu, Kailin Wu, Jie Ding. Computers, Materials & Continua, 2025, Issue 5, pp. 2573-2598 (26 pages)
With the widespread application of Internet of Things (IoT) technology, the processing of massive real-time streaming data poses significant challenges to the computational and data-processing capabilities of systems. Although distributed streaming data processing frameworks such as Apache Flink and Apache Spark Streaming provide solutions, meeting stringent response time requirements while ensuring high throughput and resource utilization remains an urgent problem. To address this, the study proposes a formal modeling approach based on Performance Evaluation Process Algebra (PEPA), which abstracts the core components and interactions of cloud-based distributed streaming data processing systems. Additionally, a generic service flow generation algorithm is introduced, enabling the automatic extraction of service flows from the PEPA model and the computation of key performance metrics, including response time, throughput, and resource utilization. The novelty of this work lies in the integration of PEPA-based formal modeling with the service flow generation algorithm, bridging the gap between formal modeling and practical performance evaluation for IoT systems. Simulation experiments demonstrate that optimizing the execution efficiency of components can significantly improve system performance. For instance, increasing the task execution rate from 10 to 100 improves system performance by 9.53%, while further increasing it to 200 results in a 21.58% improvement. However, diminishing returns are observed when the execution rate reaches 500, with only a 0.42% gain. Similarly, increasing the number of TaskManagers from 10 to 20 improves response time by 18.49%, but the improvement slows to 6.06% when increasing from 20 to 50, highlighting the importance of co-optimizing component efficiency and resource management to achieve substantial performance gains. This study provides a systematic framework for analyzing and optimizing the performance of IoT systems for large-scale real-time streaming data processing. The proposed approach not only identifies performance bottlenecks but also offers insights into improving system efficiency under different configurations and workloads.
Keywords: System modeling; Performance evaluation; Streaming data processing; IoT system; PEPA
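The diminishing-returns pattern reported above (large gains when raising the execution rate from 10 to 100, negligible gains by 500) is characteristic of queueing systems. The toy M/M/1 formula below is not the paper's PEPA model, and the arrival rate is a made-up value; it only reproduces the qualitative behaviour:

```python
def mm1_response_time(arrival_rate, service_rate):
    """Mean response time of an M/M/1 queue; requires service_rate > arrival_rate."""
    if service_rate <= arrival_rate:
        raise ValueError("queue is unstable")
    return 1.0 / (service_rate - arrival_rate)

lam = 9.0  # hypothetical task arrival rate
times = {mu: mm1_response_time(lam, mu) for mu in (10, 100, 200, 500)}
# Response time drops sharply from mu=10 to mu=100, then flattens out.
```

Once the service rate far exceeds the arrival rate, further speedups barely change response time, which mirrors the reported 0.42% gain at rate 500.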
7. Data Gathering Based on Hybrid Energy Efficient Clustering Algorithm and DCRNN Model in Wireless Sensor Network
Authors: Li Cuiran, Liu Shuqi, Xie Jianli, Liu Li. China Communications, 2025, Issue 3, pp. 115-131 (17 pages)
In order to solve the problems of short network lifetime and high data transmission delay in data gathering for wireless sensor networks (WSN) caused by uneven energy consumption among nodes, a hybrid energy-efficient clustering routing scheme based on the firefly and pigeon-inspired algorithm (FF-PIA) is proposed to optimise the data transmission path. After the optimal number of cluster head nodes (CH) is obtained, the result is taken as the basis for producing the initial population of the FF-PIA algorithm. The Lévy flight mechanism and adaptive inertia weighting are employed in the algorithm iteration to balance the contradiction between global search and local search. Moreover, a Gaussian perturbation strategy is applied to update the optimal solution, ensuring the algorithm can jump out of local optima. In addition, for WSN data gathering, a one-dimensional signal reconstruction algorithm model is developed using dilated convolution and residual neural networks (DCRNN). We conducted experiments on the National Oceanic and Atmospheric Administration (NOAA) dataset. They show that the DCRNN model-driven data reconstruction algorithm improves reconstruction accuracy as well as reconstruction time performance. FF-PIA and DCRNN clustering routing co-simulation reveals that the proposed algorithm can effectively extend the network lifetime and reduce data transmission delay.
Keywords: Clustering; Data gathering; DCRNN model; Network lifetime; Wireless sensor network
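A common way to implement the Lévy flight and adaptive inertia weighting mentioned above is Mantegna's algorithm combined with a linearly decreasing weight. The abstract does not give the exact formulation, so the sketch below is one standard choice, with illustrative default parameters:

```python
import math
import random

def levy_step(beta=1.5, rng=random):
    """One Lévy-flight step via Mantegna's algorithm (a common choice;
    not necessarily the paper's exact formulation)."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.gauss(0, sigma)
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)  # heavy-tailed step length

def adaptive_inertia(t, t_max, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight: favours global search early
    in the iteration and local search late."""
    return w_max - (w_max - w_min) * t / t_max

step = levy_step(rng=random.Random(42))
```

Occasional very long Lévy steps help the population escape local optima, while the shrinking inertia weight gradually shifts effort toward local refinement.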
8. A deep residual intelligent model for ENSO prediction by incorporating coupled model forecast data
Authors: Chunyang Song, Xuefeng Zhang, Xingrong Chen, Hua Jiang, Liang Zhang, Yongyong Huang. Acta Oceanologica Sinica, 2025, Issue 8, pp. 133-142 (10 pages)
The El Niño-Southern Oscillation (ENSO) is a naturally recurring interannual climate fluctuation that affects the global climate system. The advent of deep learning-based approaches has led to transformative changes in ENSO forecasts, resulting in significant progress. However, most deep learning-based ENSO prediction models rely solely on reanalysis data, which may lead to intensity underestimation in long-term forecasts and reduce forecasting skill. To this end, we propose a deep residual-coupled model prediction (Res-CMP) model, which integrates historical reanalysis data and coupled model forecast data for multiyear ENSO prediction. The Res-CMP model is designed as a lightweight model that leverages only short-term reanalysis data and nudging assimilation prediction results of the Community Earth System Model (CESM) for effective prediction of the Niño 3.4 index. We also developed a transfer learning strategy for this model to overcome the limitations of inadequate forecast data. After determining the optimal configuration, which included selecting a suitable transfer learning rate during training, along with input variables and CESM forecast lengths, Res-CMP demonstrated a high correlation ability for 19-month lead time predictions (correlation coefficients exceeding 0.5). The Res-CMP model also alleviated the spring predictability barrier (SPB). When validated against actual ENSO events, Res-CMP successfully captured the temporal evolution of the Niño 3.4 index during La Niña events (1998/99 and 2020/21) and El Niño events (2009/10 and 2015/16). Our proposed model has the potential to further enhance ENSO prediction performance by using coupled models to assist deep learning methods.
Keywords: ENSO prediction; Deep learning; Dynamical coupled model; Data incorporation
9. Controlling update distance and enhancing fair trainable prototypes in federated learning under data and model heterogeneity
Authors: Kangning Yin, Zhen Ding, Xinhui Ji, Zhiguo Wang. Defence Technology, 2025, Issue 5, pp. 15-31 (17 pages)
Heterogeneous federated learning (HtFL) has gained significant attention due to its ability to accommodate diverse models and data from distributed combat units. Prototype-based HtFL methods were proposed to reduce the high communication cost of transmitting model parameters. These methods allow the sharing of only class representatives between heterogeneous clients while maintaining privacy. However, existing prototype learning approaches fail to take the data distribution of clients into consideration, which results in suboptimal global prototype learning and insufficient client model personalization capabilities. To address these issues, we propose a fair trainable prototype federated learning (FedFTP) algorithm, which employs a fair sampling training prototype (FSTP) mechanism and a hyperbolic space constraints (HSC) mechanism to enhance the fairness and effectiveness of prototype learning on the server in heterogeneous environments. Furthermore, a local prototype stable update (LPSU) mechanism, based on contrastive learning, is proposed as a means of maintaining personalization while promoting global consistency. Comprehensive experimental results demonstrate that FedFTP achieves state-of-the-art performance in HtFL scenarios.
Keywords: Heterogeneous federated learning; Model heterogeneity; Data heterogeneity; Contrastive learning
10. Research on the Evaluation Model of Software Talent Cultivation Based on Multivariant Data Fusion
Authors: Yin Chen, Haoxuan Tang, Lei Zhang, Tonghua Su, Zhongjie Wang, Ruihan Hu, Shanli Xie. 计算机教育 (Computer Education), 2025, Issue 3, pp. 130-137 (8 pages)
This paper proposes a multivariate data fusion based quality evaluation model for software talent cultivation. The model constructs a comprehensive ability and quality evaluation index system for college students from an engineering education perspective, especially that of software engineering. As for the evaluation method, relying on students' behavioral data during their school years, we aim to construct the evaluation model as objectively as possible, effectively weakening the negative impact of subjective personal assumptions on the evaluation results.
Keywords: Quality evaluation model; Software talent cultivation; Behavioral data
11. Efficient deep-learning-based surrogate model for reservoir production optimization using transfer learning and multi-fidelity data
Authors: Jia-Wei Cui, Wen-Yue Sun, Hoonyoung Jeong, Jun-Rong Liu, Wen-Xin Zhou. Petroleum Science, 2025, Issue 4, pp. 1736-1756 (21 pages)
In the realm of subsurface flow simulations, deep-learning-based surrogate models have emerged as a promising alternative to traditional simulation methods, especially in addressing complex optimization problems. However, a significant challenge lies in the necessity of numerous high-fidelity training simulations to construct these deep-learning models, which limits their application to field-scale problems. To overcome this limitation, we introduce a training procedure that leverages transfer learning with multi-fidelity training data to construct surrogate models efficiently. The procedure begins with the pre-training of the surrogate model using a relatively larger amount of data that can be efficiently generated from upscaled coarse-scale models. Subsequently, the model parameters are fine-tuned with a much smaller set of high-fidelity simulation data. For the cases considered in this study, this method leads to about a 75% reduction in total computational cost, in comparison with the traditional training approach, without any sacrifice of prediction accuracy. In addition, a dedicated well-control embedding model is introduced to the traditional U-Net architecture to improve the surrogate model's prediction accuracy, which is shown to be particularly effective when dealing with large-scale reservoir models under time-varying well control parameters. Comprehensive results and analyses are presented for the prediction of well rates, pressure, and saturation states of a 3D synthetic reservoir system. Finally, the proposed procedure is applied to a field-scale production optimization problem. The trained surrogate model is shown to provide excellent generalization capabilities during the optimization process, in which the final optimized net present value is much higher than those from the training data ranges.
Keywords: Subsurface flow simulation; Surrogate model; Transfer learning; Multi-fidelity training data; Production optimization
12. Employment of an Arctic sea-ice data assimilation scheme in the coupled climate system model FGOALS-f3-L and its preliminary results
Authors: Yuyang Guo, Yongqiang Yu, Jiping Liu. Atmospheric and Oceanic Science Letters, 2025, Issue 4, pp. 27-34 (8 pages)
Arctic sea ice is an important component of the global climate system and has experienced rapid changes during the past few decades; its prediction is a significant application for climate models. In this study, a Localized Error Subspace Transform Kalman Filter is employed in a coupled climate system model (the Flexible Global Ocean-Atmosphere-Land System Model, version f3-L (FGOALS-f3-L)) to assimilate sea-ice concentration (SIC) and sea-ice thickness (SIT) data for melting-season ice predictions. The scheme is applied through the following steps: (1) initialization for generating initial ensembles; (2) analysis for assimilating observed data; (3) adoption for dividing ice states into five thickness categories; (4) forecast for evolving the model; (5) resampling for updating model uncertainties. Several experiments were conducted to examine its results and impacts. Compared with the control experiment, the continuous assimilation experiments (CTNs) indicate that assimilation improves model SICs and SITs persistently and generates realistic initial conditions. Assimilating SIC+SIT data corrects overestimated model SITs spatially better than assimilating SIC data alone. The continuous assimilation restart experiments indicate that the initial conditions from the CTNs correct the overestimated marginal SICs and overall SITs remarkably well, as well as the cold biases in the oceanic and atmospheric models. The initial conditions with SIC+SIT assimilated show more reasonable spatial improvements. Nevertheless, the SICs in the central Arctic undergo abnormal summer reductions, probably because overestimated SITs are reduced in the initial conditions while the strong seasonal-cycle (summer melting) biases remain unchanged. Therefore, since systematic biases are complicated in a coupled system, oceanic and atmospheric assimilations are expected to be required for FGOALS-f3-L to make better ice predictions.
Keywords: Arctic sea ice; Data assimilation; Coupled climate system model; FGOALS-f3-L
13. Design of a Private Cloud Platform for Distributed Logging Big Data Based on a Unified Learning Model of Physics and Data
Authors: Cheng Xi, Fu Haicheng, Tursyngazy Mahabbat. Applied Geophysics, 2025, Issue 2, pp. 499-510, 560 (13 pages)
Well logging technology has accumulated a large amount of historical data through four generations of technological development, which forms the basis of well logging big data and digital assets. However, the value of these data has not been well stored, managed, and mined. The development of cloud computing technology provides a rare opportunity for a logging big data private cloud. The traditional petrophysical evaluation and interpretation model has encountered great challenges when facing new evaluation objects, and research on solutions that integrate distributed storage, processing, and learning functions for logging big data in a private cloud has not yet been carried out. This work establishes a distributed logging big data private cloud platform centered on a unified learning model, which achieves distributed storage and processing of logging big data and facilitates the learning of novel knowledge patterns via a unified logging learning model integrating physical simulation and data models in a large-scale function space, thus addressing the geo-engineering evaluation problem of geothermal fields. Following the research idea of "logging big data cloud platform - unified logging learning model - large function space - knowledge learning and discovery - application", the theoretical foundation of the unified learning model, the cloud platform architecture, data storage and learning algorithms, computing power allocation and platform monitoring, platform stability, and data security are analyzed. The designed logging big data cloud platform realizes parallel distributed storage and processing of data and learning algorithms. The feasibility of constructing a well logging big data cloud platform based on a unified learning model of physics and data is analyzed in terms of the structure, ecology, management, and security of the cloud platform. The case study shows that the logging big data cloud platform has obvious technical advantages over traditional logging evaluation methods in terms of knowledge discovery methods, data, software, and results sharing, accuracy, speed, and complexity.
Keywords: Unified logging learning model; Logging big data; Private cloud; Machine learning
14. A systematic data-driven modelling framework for nonlinear distillation processes incorporating data intervals clustering and new integrated learning algorithm
Authors: Zhe Wang, Renchu He, Jian Long. Chinese Journal of Chemical Engineering, 2025, Issue 5, pp. 182-199 (18 pages)
The distillation process is an important chemical process, and the application of a data-driven modelling approach has the potential to reduce model complexity compared to mechanistic modelling, thus improving the efficiency of process optimization or monitoring studies. However, the distillation process is highly nonlinear and has multiple uncertainty perturbation intervals, which brings challenges to accurate data-driven modelling of distillation processes. This paper proposes a systematic data-driven modelling framework to solve these problems. First, data segment variance is introduced into the K-means algorithm to form K-means data interval (KMDI) clustering, which clusters the data into perturbed and steady-state intervals for steady-state data extraction. Second, the maximal information coefficient (MIC) is employed to calculate the nonlinear correlation between variables in order to remove redundant features. Finally, extreme gradient boosting (XGBoost) is integrated as the base learner into adaptive boosting (AdaBoost), with an error threshold (ET) set to improve the weight update strategy, constructing a new integrated learning algorithm, XGBoost-AdaBoost-ET. The superiority of the proposed framework is verified by applying this data-driven modelling framework to a real industrial propylene distillation process.
Keywords: Integrated learning algorithm; Data intervals clustering; Feature selection; Application of artificial intelligence in the distillation industry; Data-driven modelling
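The KMDI idea of clustering data-segment variances to separate perturbed from steady-state intervals can be sketched in a few lines. The signal, the segment length, and the two-cluster simplification below are illustrative assumptions, not the paper's implementation:

```python
def segment_variances(series, seg_len):
    """Split a series into fixed-length segments and return each segment's variance."""
    out = []
    for i in range(0, len(series) - seg_len + 1, seg_len):
        seg = series[i:i + seg_len]
        mean = sum(seg) / seg_len
        out.append(sum((x - mean) ** 2 for x in seg) / seg_len)
    return out

def two_means_1d(values, iters=50):
    """Tiny 1-D 2-means: returns (low_centroid, high_centroid, labels)."""
    lo, hi = min(values), max(values)
    for _ in range(iters):
        labels = [0 if abs(v - lo) <= abs(v - hi) else 1 for v in values]
        lo_vals = [v for v, lbl in zip(values, labels) if lbl == 0]
        hi_vals = [v for v, lbl in zip(values, labels) if lbl == 1]
        if lo_vals:
            lo = sum(lo_vals) / len(lo_vals)
        if hi_vals:
            hi = sum(hi_vals) / len(hi_vals)
    return lo, hi, labels

# Hypothetical process signal: steady, then perturbed, then steady again
series = [1.0, 1.1, 0.9, 1.0] + [1.0, 5.0, -3.0, 4.0] + [1.0, 0.9, 1.1, 1.0]
variances = segment_variances(series, 4)
lo, hi, labels = two_means_1d(variances)
# labels -> [0, 1, 0]: the middle segment is flagged as a perturbation interval
```

Segments labelled with the low-variance centroid would then be kept as steady-state data for model training.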
15. KSKV: Key-Strategy for Key-Value Data Collection with Local Differential Privacy
Authors: Dan Zhao, Yang You, Chuanwen Luo, Ting Chen, Yang Liu. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 6, pp. 3063-3083 (21 pages)
In recent years, the research field of data collection under local differential privacy (LDP) has expanded its focus from elementary data types to include more complex structural data, such as set-value and graph data. However, our comprehensive review of the existing literature reveals a lack of studies that engage with key-value data collection, which simultaneously collects the frequencies of keys and the mean of the values associated with each key. Additionally, existing allocations of the privacy budget between the frequencies of keys and the means of values for each key do not yield an optimal utility tradeoff. Recognizing the importance of obtaining accurate key frequencies and mean estimations for key-value data collection, this paper presents a novel framework: the Key-Strategy Framework for Key-Value Data Collection under LDP. Initially, the Key-Strategy Unary Encoding (KS-UE) strategy is proposed within non-interactive frameworks for the purpose of privacy budget allocation to achieve precise key frequencies; subsequently, the Key-Strategy Generalized Randomized Response (KS-GRR) strategy is introduced for interactive frameworks to enhance the efficiency of collecting frequent keys through group-and-iteration methods. Both strategies are adapted for scenarios in which users possess either a single key-value pair or multiple key-value pairs. Theoretically, we demonstrate that the variance of KS-UE is lower than that of existing methods. These claims are substantiated through extensive experimental evaluation on real-world datasets, confirming the effectiveness and efficiency of the KS-UE and KS-GRR strategies.
Keywords: Key-value; Local differential privacy; Frequency estimation; Mean estimation; Data perturbation
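Plain generalized randomized response (GRR), the primitive on which KS-GRR builds, is well defined and easy to sketch. The key domain, privacy budget, and counts below are made up for illustration; the paper's key-strategy refinements are not shown:

```python
import math
import random

def grr_perturb(value, domain, epsilon, rng=random):
    """Generalized randomized response: report the true value with
    probability p = e^eps / (e^eps + k - 1), else a uniform other value."""
    k = len(domain)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if rng.random() < p:
        return value
    return rng.choice([v for v in domain if v != value])

def grr_estimate(reports, domain, epsilon):
    """Unbiased key-frequency estimates from the perturbed reports."""
    k, n = len(domain), len(reports)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    q = (1 - p) / (k - 1)
    counts = {v: 0 for v in domain}
    for r in reports:
        counts[r] += 1
    return {v: (counts[v] / n - q) / (p - q) for v in domain}

# Hypothetical true key distribution: a 60%, b 30%, c 10%
rng = random.Random(0)
data = ["a"] * 600 + ["b"] * 300 + ["c"] * 100
reports = [grr_perturb(v, ["a", "b", "c"], 2.0, rng) for v in data]
est = grr_estimate(reports, ["a", "b", "c"], 2.0)
```

The debiasing step divides out the perturbation, so the estimates recover the true ordering of key frequencies and sum to one.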
16. Data-Enhanced Low-Cycle Fatigue Life Prediction Model Based on Nickel-Based Superalloys
Authors: Luopeng Xu, Lei Xiong, Rulun Zhang, Jiajun Zheng, Huawei Zou, Zhixin Li, Xiaopeng Wang, Qingyuan Wang. Acta Mechanica Solida Sinica, 2025, Issue 4, pp. 612-623 (12 pages)
To overcome the challenges of limited experimental data and improve the accuracy of empirical formulas, we propose a low-cycle fatigue (LCF) life prediction model for nickel-based superalloys using a data augmentation method. This method utilizes a variational autoencoder (VAE) to generate low-cycle fatigue data and form an augmented dataset. The Pearson correlation coefficient (PCC) is employed to verify the similarity of feature distributions between the original and augmented datasets. Six machine learning models, namely random forest (RF), artificial neural network (ANN), support vector machine (SVM), gradient-boosted decision tree (GBDT), eXtreme Gradient Boosting (XGBoost), and Categorical Boosting (CatBoost), are utilized to predict the LCF life of nickel-based superalloys. Results indicate that the proposed VAE-based data augmentation method can effectively expand the dataset, and the mean absolute error (MAE), root mean square error (RMSE), and R-squared (R²) values achieved by the CatBoost model, 0.0242, 0.0391, and 0.9538 respectively, are superior to those of the other models. The proposed method reduces the cost and time associated with LCF experiments and accurately establishes the relationship between fatigue characteristics and the LCF life of nickel-based superalloys.
Keywords: nickel-based superalloy; low-cycle fatigue (LCF); fatigue life prediction; data augmentation; machine learning; variational autoencoder (VAE)
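The PCC similarity check between original and augmented feature distributions can be sketched as below. Correlating empirical quantiles (a Q-Q-style comparison) is one plausible reading of such a check, not necessarily the authors' exact procedure; both function names are illustrative.

```python
import numpy as np

def pearson_cc(x, y):
    """Pearson correlation coefficient of two 1-D samples."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float(xm @ ym / np.sqrt((xm @ xm) * (ym @ ym)))

def distribution_similarity(original, augmented, n_quantiles=50):
    """Q-Q style similarity: correlate empirical quantiles of two samples.
    Values near 1 suggest the augmented feature follows the original
    distribution; the samples need not be the same size."""
    qs = np.linspace(0.0, 1.0, n_quantiles)
    return pearson_cc(np.quantile(original, qs), np.quantile(augmented, qs))
```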
A Surfing Concurrence Transaction Model for Key-Value NoSQL Databases
17
Authors: Changqing Li, Jianhua Gu. Journal of Software Engineering and Applications, 2018, No. 10, pp. 467-485 (19 pages)
As more and more application systems related to big data are developed, NoSQL (Not Only SQL) database systems are becoming increasingly popular. Many scholars have tried different techniques to add transaction features to NoSQL database systems; unfortunately, research on Redis transactions is scarce in the existing literature. This paper proposes a transaction model for key-value NoSQL databases, including Redis, that allows users to access data with ACID (Atomicity, Consistency, Isolation, and Durability) guarantees; the model is vividly called the surfing concurrence transaction model. Its architecture, important features, and implementation principle are described in detail, the key algorithms are given as pseudocode, and the performance is evaluated. With the proposed model, transactions on key-value NoSQL databases can be performed in a lock-free and MVCC-free (Multi-Version Concurrency Control) manner. This result of further research on the topic fills a gap overlooked by scholars in this field and makes a contribution to the further development of NoSQL technology.
Keywords: NoSQL; big data; surfing concurrence transaction model; key-value NoSQL databases; Redis
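The paper's surfing concurrence algorithm is not reproduced here. As a generic illustration of optimistic, MVCC-free transaction validation on a key-value store, the sketch below commits a write set only if every version observed at read time is unchanged; it keeps one short internal critical section for the atomic validate-and-apply step, whereas the paper's model dispenses with locks entirely. Class and method names are illustrative.

```python
import threading

class VersionedKV:
    """Toy key-value store with optimistic commits: no per-key locks are
    held while a transaction runs; conflicts are detected at commit time
    by comparing per-key versions."""

    def __init__(self):
        self._data = {}                # key -> (value, version)
        self._gate = threading.Lock()  # brief validate-and-apply section

    def read(self, key):
        """Return (value, version); absent keys read as (None, 0)."""
        return self._data.get(key, (None, 0))

    def commit(self, read_set, write_set):
        """read_set: {key: version seen at read time}; write_set: {key: value}.
        Commits atomically iff no read key changed since it was read."""
        with self._gate:
            for key, seen in read_set.items():
                if self._data.get(key, (None, 0))[1] != seen:
                    return False       # conflicting update -> abort and retry
            for key, value in write_set.items():
                _, version = self._data.get(key, (None, 0))
                self._data[key] = (value, version + 1)
            return True
```

An aborted transaction simply re-reads and retries, which is the usual pattern for optimistic concurrency control.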
Bayesian model averaging (BMA) for nuclear data evaluation (cited by 2)
18
Authors: E. Alhassan, D. Rochman, G. Schnabel, A. J. Koning. Nuclear Science and Techniques (SCIE, EI, CAS, CSCD), 2024, No. 11, pp. 193-218 (26 pages)
To ensure agreement between theoretical calculations and experimental data, the parameters of selected nuclear physics models are perturbed and fine-tuned in nuclear data evaluations. This approach assumes that the chosen set of models accurately represents the 'true' distribution of the considered observables. Furthermore, the models are chosen globally, i.e., they are assumed applicable across the entire energy range of interest. However, this approach overlooks the uncertainties inherent in the models themselves. In this work, we propose that instead of globally selecting a winning model set and proceeding as if it were the 'true' one, we take a weighted average over multiple models within a Bayesian model averaging (BMA) framework, each weighted by its posterior probability. The method executes a set of TALYS calculations, randomly varying multiple nuclear physics models and their parameters to yield a vector of calculated observables. The likelihood function values computed at each incident energy point are then combined with the prior distributions to obtain updated posterior distributions for selected cross sections and elastic angular distributions. Because the cross sections and elastic angular distributions are updated locally on a per-energy-point basis, the approach typically produces discontinuities or 'kinks' in the cross-section curves, which are addressed using spline interpolation. The proposed BMA method was applied to the evaluation of proton-induced reactions on ^(58)Ni between 1 and 100 MeV. The results compared favorably with experimental data as well as with the TENDL-2023 evaluation.
Keywords: Bayesian model averaging (BMA); nuclear data; nuclear reaction models; model parameters; TALYS code system; covariances
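The per-energy-point posterior weighting can be sketched numerically. This toy version assumes the log-likelihood of each sampled model has already been computed against the experimental data at one energy point; it is not tied to TALYS, and the function name is illustrative.

```python
import numpy as np

def bma_average(predictions, log_likelihoods, log_priors=None):
    """Bayesian model averaging at a single energy point.

    predictions: (n_models, n_obs) array of observables computed by each
    sampled model. Weights are proportional to prior * likelihood and are
    normalized in log space for numerical stability."""
    logw = np.asarray(log_likelihoods, dtype=float)
    if log_priors is not None:
        logw = logw + np.asarray(log_priors, dtype=float)
    logw -= logw.max()                  # guard against underflow
    weights = np.exp(logw)
    weights /= weights.sum()
    averaged = weights @ np.asarray(predictions, dtype=float)
    return averaged, weights
```

Applying this independently at each energy point is what produces the 'kinks' the abstract mentions, which the authors then smooth with spline interpolation.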
Heterogeneous data-driven aerodynamic modeling based on physical feature embedding (cited by 2)
19
Authors: Weiwei ZHANG, Xuhao PENG, Jiaqing KOU, Xu WANG. Chinese Journal of Aeronautics (SCIE, EI, CAS, CSCD), 2024, No. 3, pp. 1-6
Aerodynamic surrogate modeling mostly relies on the integrated loads data obtained from simulation or experiment, while neglecting and wasting the valuable distributed physical information on the surface. To make full use of both integrated and distributed loads, a modeling paradigm, called heterogeneous data-driven aerodynamic modeling, is presented. The essential concept is to incorporate the physical information of the distributed loads as additional constraints within end-to-end aerodynamic modeling. For such heterogeneous data, a novel and easily applicable physical-feature-embedding modeling framework is designed. The framework extracts low-dimensional physical features from the pressure distribution and then effectively enhances the modeling of the integrated loads via feature embedding. The framework can be coupled with multiple feature extraction methods, and its strong generalization capability across different airfoils is verified through a transonic case. Compared with traditional direct modeling, the proposed framework reduces testing errors by almost 50%; at the same prediction accuracy, it saves more than half of the training samples. Furthermore, visualization analysis reveals a significant correlation between the discovered low-dimensional physical features and the heterogeneous aerodynamic loads, which shows the interpretability and credibility of the superior performance offered by the proposed deep learning framework.
Keywords: transonic flow; data-driven modeling; feature embedding; heterogeneous data; feature visualization
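One way to realize the feature-embedding idea is to compress the distributed pressure with PCA and regress the integrated load on the resulting low-dimensional features. PCA and a linear least-squares fit here stand in for the paper's feature extractors and end-to-end network; the synthetic data and all names are illustrative assumptions.

```python
import numpy as np

def pca_features(pressure, k):
    """Project surface-pressure distributions (n_samples, n_points)
    onto their top-k principal components."""
    mean = pressure.mean(axis=0)
    centered = pressure - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k]                                # (k, n_points)
    return centered @ basis.T, basis, mean

# Toy data: the integrated load (lift) is linear in two latent pressure modes.
rng = np.random.default_rng(1)
modes = rng.normal(size=(200, 2))                 # latent mode coefficients
pressure = modes @ rng.normal(size=(2, 50))       # distributed surface loads
lift = 0.8 * modes[:, 0] - 0.3 * modes[:, 1]      # integrated load

# Embed the extracted features into a simple regression of the integrated load.
features, _, _ = pca_features(pressure, k=2)
design = np.column_stack([features, np.ones(len(features))])
coef, *_ = np.linalg.lstsq(design, lift, rcond=None)
pred = design @ coef
```

Because the toy lift is exactly linear in the pressure modes, the two PCA features recover it to machine precision; on real data the features would feed a nonlinear surrogate instead.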
A missing data processing method for dam deformation monitoring data using spatiotemporal clustering and support vector machine model (cited by 1)
20
Authors: Yan-tao Zhu, Chong-shi Gu, Mihai A. Diaconeasa. Water Science and Engineering (CSCD), 2024, No. 4, pp. 417-424 (8 pages)
Deformation monitoring is a critical measure for intuitively reflecting the operational behavior of a dam. However, deformation monitoring data are often incomplete due to environmental changes, monitoring instrument faults, and human operational errors, which hinders accurate assessment of the actual deformation patterns. This study proposes a method for quantifying the deformation similarity between measurement points by recognizing the spatiotemporal characteristics of concrete dam deformation monitoring data. It introduces a spatiotemporal clustering analysis of concrete dam deformation behavior and employs a support vector machine model to fill in the missing data in concrete dam deformation monitoring. The proposed method was validated on a concrete dam project, with the model error remaining within 5%, demonstrating its effectiveness in processing missing deformation data. The approach enhances the capability of early-warning systems and contributes to improved dam safety management.
Keywords: missing data recovery; concrete dam; deformation monitoring; spatiotemporal clustering; support vector machine
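A minimal sketch of similarity-based gap filling: pick the measurement point most correlated with the gappy series and regress on it. The paper uses spatiotemporal clustering plus an SVM regressor; a least-squares line is substituted here to keep the example dependency-free, and the function name is illustrative.

```python
import numpy as np

def impute_from_similar(series, neighbors):
    """Fill NaNs in `series` using the neighbor series most correlated
    with it on the jointly observed samples (simple linear fit; the
    paper trains an SVM regressor instead)."""
    series = np.asarray(series, dtype=float)
    missing = np.isnan(series)
    best, best_r = None, -np.inf
    for nb in neighbors:
        nb = np.asarray(nb, dtype=float)
        ok = ~missing & ~np.isnan(nb)
        r = abs(np.corrcoef(series[ok], nb[ok])[0, 1])
        if r > best_r:                     # keep the most similar point
            best, best_r = nb, r
    ok = ~missing
    a, b = np.polyfit(best[ok], series[ok], 1)   # series ~= a*neighbor + b
    filled = series.copy()
    filled[missing] = a * best[missing] + b
    return filled
```

In a real dam-monitoring setting the candidate neighbors would be the other points in the same spatiotemporal cluster, so the regression only draws on points that deform similarly.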