Journal Articles
31,187 articles found
Influence of different data selection criteria on internal geomagnetic field modeling (Cited: 4)
1
Authors: HongBo Yao, JuYuan Xu, Yi Jiang, Qing Yan, Liang Yin, PengFei Liu. Earth and Planetary Physics, 2025, No. 3, pp. 541-549 (9 pages)
Earth's internal core and crustal magnetic fields, as measured by geomagnetic satellites like MSS-1 (Macao Science Satellite-1) and Swarm, are vital for understanding core dynamics and tectonic evolution. To model these internal magnetic fields accurately, data selection based on specific criteria is often employed to minimize the influence of rapidly changing current systems in the ionosphere and magnetosphere. However, the quantitative impact of various data selection criteria on internal geomagnetic field modeling is not well understood. This study aims to address this issue and provide a reference for constructing and applying geomagnetic field models. First, we collect the latest MSS-1 and Swarm satellite magnetic data and summarize widely used data selection criteria in geomagnetic field modeling. Second, we briefly describe the method to co-estimate the core, crustal, and large-scale magnetospheric fields using satellite magnetic data. Finally, we conduct a series of field modeling experiments with different data selection criteria to quantitatively estimate their influence. Our numerical experiments confirm that without selecting data from dark regions and geomagnetically quiet times, the resulting internal field differences at the Earth's surface can range from tens to hundreds of nanotesla (nT). Additionally, we find that the uncertainties introduced into field models by different data selection criteria are significantly larger than the measurement accuracy of modern geomagnetic satellites. These uncertainties should be considered when utilizing constructed magnetic field models for scientific research and applications.
Keywords: Macao Science Satellite-1; Swarm; geomagnetic field modeling; data selection; core field; crustal field
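The quiet-time, dark-side selection the abstract describes can be illustrated with a simple record filter. The thresholds below (Kp index, |dDst/dt|, and solar zenith angle) are generic values of the kind commonly used in field modeling, not the paper's exact criteria:

```python
# Sketch of quiet-time data selection for internal field modeling.
# Thresholds (Kp <= 2, |dDst/dt| <= 2 nT/h, solar zenith angle >= 100 deg)
# are illustrative assumptions, not the paper's exact criteria.

def select_quiet_dark(records, kp_max=2.0, dst_rate_max=2.0, sza_min=100.0):
    """Keep records taken at geomagnetically quiet times on the night side.

    Each record is a dict with 'kp', 'dst_rate' (nT/h), and 'sza'
    (solar zenith angle in degrees; above ~100 deg the satellite is in darkness).
    """
    return [r for r in records
            if r["kp"] <= kp_max
            and abs(r["dst_rate"]) <= dst_rate_max
            and r["sza"] >= sza_min]

records = [
    {"kp": 1.0, "dst_rate": 0.5, "sza": 120.0},  # quiet and dark -> kept
    {"kp": 4.0, "dst_rate": 0.5, "sza": 120.0},  # geomagnetically disturbed -> dropped
    {"kp": 1.0, "dst_rate": 5.0, "sza": 120.0},  # fast ring-current change -> dropped
    {"kp": 1.0, "dst_rate": 0.5, "sza": 60.0},   # sunlit -> dropped
]
selected = select_quiet_dark(records)
```

Tightening or loosening these thresholds trades data volume against contamination by external fields, which is exactly the trade-off the paper quantifies.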
A Diffusion Model for Traffic Data Imputation (Cited: 1)
2
Authors: Bo Lu, Qinghai Miao, Yahui Liu, Tariku Sinshaw Tamir, Hongxia Zhao, Xiqiao Zhang, Yisheng Lv, Fei-Yue Wang. IEEE/CAA Journal of Automatica Sinica, 2025, No. 3, pp. 606-617 (12 pages)
Imputation of missing data has long been an important topic and an essential application for intelligent transportation systems (ITS) in the real world. As a state-of-the-art generative model, the diffusion model has proven highly successful in image generation, speech generation, time series modelling, and related tasks, and now opens a new avenue for traffic data imputation. In this paper, we propose a conditional diffusion model, called the implicit-explicit diffusion model, for traffic data imputation. This model exploits both the implicit and explicit features of the data simultaneously. More specifically, we design two types of feature extraction modules, one to capture the implicit dependencies hidden in the raw data at multiple time scales and the other to obtain the long-term temporal dependencies of the time series. This approach not only inherits the advantages of the diffusion model for estimating missing data, but also takes into account the multiscale correlation inherent in traffic data. To illustrate the performance of the model, extensive experiments are conducted on three real-world time series datasets using different missing rates. The experimental results demonstrate that the model improves imputation accuracy and generalization capability.
Keywords: data imputation; diffusion model; implicit feature; time series; traffic data
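The forward (noising) half of any diffusion model is model-independent and easy to sketch: with a variance schedule, a clean value x0 is corrupted in closed form as x_t = sqrt(alpha_bar_t) x0 + sqrt(1 - alpha_bar_t) eps. The linear beta schedule below uses common textbook defaults, not the paper's settings:

```python
import math
import random

# Toy forward (noising) process of a diffusion model with a linear beta
# schedule. Schedule constants are common defaults, not the paper's values.

T = 100
betas = [1e-4 + (0.02 - 1e-4) * t / (T - 1) for t in range(T)]
alpha_bars = []
prod = 1.0
for b in betas:
    prod *= 1.0 - b          # alpha_bar_t = prod of (1 - beta_s), s <= t
    alpha_bars.append(prod)

def q_sample(x0, t, rng):
    """Draw x_t ~ q(x_t | x_0) for a scalar data point."""
    ab = alpha_bars[t]
    return math.sqrt(ab) * x0 + math.sqrt(1.0 - ab) * rng.gauss(0.0, 1.0)

rng = random.Random(0)
x_noisy = q_sample(1.0, T - 1, rng)   # heavily noised sample at the last step
```

Imputation models such as the one in the paper learn the reverse of this process, conditioned on the observed (non-missing) entries; only the forward corruption is sketched here.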
Smart cities, smart systems: A comprehensive review of system dynamics model applications in urban studies in the big data era (Cited: 2)
3
Authors: Gift Fabolude, Charles Knoble, Anvy Vu, Danlin Yu. Geography and Sustainability, 2025, No. 1, pp. 25-36 (12 pages)
This paper addresses urban sustainability challenges amid global urbanization, emphasizing the need for innovative approaches aligned with the Sustainable Development Goals. While traditional tools and linear models offer insights, they fall short in presenting a holistic view of complex urban challenges. System dynamics (SD) models, often used to provide a holistic, systematic understanding of a research subject such as the urban system, emerge as valuable tools, but data scarcity and theoretical inadequacy pose challenges. The research reviews papers on recent SD model applications in urban sustainability since 2018, categorizing them based on nine key indicators. Among the reviewed papers, data limitations and model assumptions were identified as major challenges in applying SD models to urban sustainability. This led to exploring the transformative potential of big data analytics, a rare approach in this field as identified by this study, to enhance SD models' empirical foundation. Integrating big data could provide data-driven calibration, potentially improving predictive accuracy and reducing reliance on simplified assumptions. The paper concludes by advocating for new approaches that reduce assumptions and promote real-time applicable models, contributing to a comprehensive understanding of urban sustainability through the synergy of big data and SD models.
Keywords: urban sustainability; smart cities; system dynamics models; big data analytics; urban system complexity; data-driven urbanism
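At its core, a system dynamics model is a set of stocks integrated over flows. A minimal, hypothetical urban stock-and-flow sketch, with one population stock and invented rates, shows the Euler-integration loop such models run on:

```python
# Minimal system-dynamics sketch: one urban-population stock with inflows
# (births + migration) and an outflow (deaths), integrated by Euler steps.
# All rates are invented for illustration, not calibrated values.

def simulate_population(p0, birth_rate, death_rate, migration, years, dt=1.0):
    p = p0
    trajectory = [p]
    for _ in range(int(years / dt)):
        inflow = birth_rate * p + migration   # flows depend on the stock
        outflow = death_rate * p
        p += dt * (inflow - outflow)          # the stock integrates net flow
        trajectory.append(p)
    return trajectory

traj = simulate_population(p0=1_000_000, birth_rate=0.012,
                           death_rate=0.008, migration=5_000, years=10)
```

The data-driven calibration the paper advocates amounts to fitting parameters like `birth_rate` and `migration` from observed series instead of assuming them.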
Design of a Private Cloud Platform for Distributed Logging Big Data Based on a Unified Learning Model of Physics and Data (Cited: 1)
4
Authors: Cheng Xi, Fu Haicheng, Tursyngazy Mahabbat. Applied Geophysics, 2025, No. 2, pp. 499-510, 560 (13 pages)
Well logging technology has accumulated a large amount of historical data through four generations of technological development, which forms the basis of well logging big data and digital assets. However, the value of these data has not been well stored, managed, and mined. The development of cloud computing technology provides a rare opportunity for a logging big data private cloud. The traditional petrophysical evaluation and interpretation model has encountered great challenges when faced with new evaluation objects, and research on integrating distributed storage, processing, and learning functions into a logging big data private cloud has not yet been carried out. This study establishes a distributed logging big data private cloud platform centered on a unified learning model, which achieves distributed storage and processing of logging big data and facilitates the learning of novel knowledge patterns via a unified logging learning model integrating physical simulation and data models in a large-scale function space, thus resolving the geo-engineering evaluation problem of geothermal fields. Following the research idea of "logging big data cloud platform - unified logging learning model - large function space - knowledge learning and discovery - application", the theoretical foundation of the unified learning model, the cloud platform architecture, data storage and learning algorithms, computing power allocation and platform monitoring, platform stability, and data security are analyzed. The designed platform realizes parallel distributed storage and processing of data and learning algorithms. The feasibility of constructing a well logging big data cloud platform based on a unified learning model of physics and data is analyzed in terms of the structure, ecology, management, and security of the cloud platform. A case study shows that the platform has obvious technical advantages over traditional logging evaluation methods in terms of knowledge discovery methods, sharing of data, software, and results, accuracy, speed, and complexity.
Keywords: unified logging learning model; logging big data; private cloud; machine learning
Designing a Comprehensive Data Governance Maturity Model for Kenya Ministry of Defence
5
Authors: Gilly Gitahi Gathogo, Simon Maina Karume, Josphat Karani. Journal of Information Security, 2025, No. 1, pp. 44-69 (26 pages)
The study aimed to develop a customized Data Governance Maturity Model (DGMM) for the Ministry of Defence (MoD) in Kenya to address data governance challenges in military settings, since current frameworks lack the specific requirements of the defence sector. The model uses Key Performance Indicators (KPIs) to enhance data governance procedures. Design Science Research guided the study, using qualitative and quantitative methods to gather data from MoD personnel. Major deficiencies were found in data integration, quality control, and adherence to data security regulations. The DGMM helps the MoD improve personnel, procedures, technology, and organizational elements related to data management. The model was tested against ISO/IEC 38500 and recommended for use in other government sectors with similar data governance issues. The DGMM has the potential to enhance data management efficiency, security, and compliance in the MoD and to guide further research in military data governance.
Keywords: data governance; maturity model; maturity index; Kenya Ministry of Defence; key performance indicators; data security regulations
Gene Expression Data Analysis Based on Mixed Effects Model
6
Authors: Yuanbo Dai. Journal of Computer and Communications, 2025, No. 2, pp. 223-235 (13 pages)
DNA microarray technology is an extremely effective technique for studying gene expression patterns in cells, and the main challenge it currently faces is how to analyze the large amount of gene expression data generated. To address this, this paper employs a mixed-effects model to analyze gene expression data. For data selection, 1,176 genes from a mouse gene expression dataset under two experimental conditions (pneumococcal infection and no infection) were chosen, and a mixed-effects model was constructed. After preprocessing the gene chip information, the data were imported into the model, preliminary results were calculated, and permutation tests were performed, with GSEA used to biologically validate the preliminary results. The final dataset consists of 20 groups of gene expression data from pneumococcal infection and categorizes functionally related genes based on the similarity of their expression profiles, facilitating the study of genes with unknown functions.
Keywords: mixed effects model; gene expression data analysis; gene analysis; gene chip
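The permutation testing mentioned in the abstract can be carried out exactly when groups are small: enumerate every relabeling, and the p-value is the fraction of relabelings whose test statistic is at least as extreme as the observed one. The expression values below are invented toy data:

```python
from itertools import combinations

# Exact two-sided permutation test for a difference in group means, the
# kind of test used to screen differentially expressed genes.
# The expression values are invented for illustration.

def permutation_test(group_a, group_b):
    """Exact permutation p-value for |mean(A) - mean(B)|."""
    pooled = group_a + group_b
    n_a = len(group_a)
    observed = abs(sum(group_a) / n_a - sum(group_b) / len(group_b))
    count = total = 0
    for idx in combinations(range(len(pooled)), n_a):
        chosen = set(idx)
        a = [pooled[i] for i in chosen]
        b = [pooled[i] for i in range(len(pooled)) if i not in chosen]
        diff = abs(sum(a) / len(a) - sum(b) / len(b))
        if diff >= observed - 1e-12:   # as extreme as observed
            count += 1
        total += 1
    return count / total

infected = [5.1, 4.8, 5.5, 5.0]   # log-expression under infection
control = [3.2, 3.0, 3.6, 3.4]    # log-expression without infection
p_value = permutation_test(infected, control)
```

Because the two toy groups are fully separated, only the observed split and its mirror reach the observed difference, giving p = 2/70.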
Development and validation of a stroke risk prediction model using regional healthcare big data and machine learning
7
Authors: Yunxia Duan, Rui Wang, Yumei Sun, Wendi Zhu, Yi Li, Na Yu, Yu Zhu, Peng Shen, Hongyu Sun. International Journal of Nursing Sciences, 2025, No. 6, pp. 558-565, I0002 (9 pages)
Objectives: This study aimed to develop and validate a stroke risk prediction model based on machine learning (ML) and regional healthcare big data, and to determine whether it improves prediction performance compared with a conventional logistic regression (LR) model. Methods: This retrospective cohort study analyzed data from the CHinese Electronic health Records Research in Yinzhou (CHERRY) study (2015-2021). We included adults aged 18-75 from the platform who had established records before 2015. Individuals with pre-existing stroke, missing key data, or excessive missingness (>30%) were excluded. Data on demographics, clinical measures, lifestyle factors, comorbidities, and family history of stroke were collected. Variable selection was performed in two stages: an initial screening via univariate analysis, followed by prioritization of variables based on clinical relevance and actionability, with a focus on those that are modifiable. Stroke prediction models were developed using LR and four ML algorithms: Decision Tree (DT), Random Forest (RF), eXtreme Gradient Boosting (XGBoost), and Back Propagation Neural Network (BPNN). The dataset was split 7:3 into training and validation sets. Performance was assessed using receiver operating characteristic (ROC) curves, calibration, and confusion matrices, and the cutoff value for classifying risk groups was determined by Youden's index. Results: The study cohort comprised 92,172 participants with 436 incident stroke cases (incidence rate: 474/100,000 person-years). Ultimately, 13 predictor variables were included. RF achieved the highest accuracy (0.935), precision (0.923), sensitivity (recall: 0.947), and F1 score (0.935). Model evaluation demonstrated superior predictive performance of the ML algorithms over conventional LR, with training/validation area under the curve (AUC) values of 0.777/0.779 (LR), 0.921/0.918 (BPNN), 0.988/0.980 (RF), 0.980/0.955 (DT), and 0.962/0.958 (XGBoost). Calibration analysis revealed a better fit for DT, LR, and BPNN than for the RF and XGBoost models. Based on the optimal performance of the RF model, the factors ranked in descending order of importance were: hypertension, age, diabetes, systolic blood pressure, waist circumference, high-density lipoprotein cholesterol, fasting blood glucose, physical activity, BMI, low-density lipoprotein cholesterol, total cholesterol, dietary habits, and family history of stroke. Using Youden's index as the optimal cutoff, the RF model stratified individuals into high-risk (>0.789) and low-risk (≤0.789) groups with robust discrimination. Conclusions: The ML-based prediction models demonstrated superior performance compared to conventional LR, with RF being the optimal prediction model, providing an effective tool for risk stratification in primary stroke prevention in community settings.
Keywords: big data; machine learning; nursing; prediction model; stroke
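The Youden's-index cutoff used to split predicted risks into high and low groups maximizes J = sensitivity + specificity - 1 over candidate thresholds. A sketch with toy scores and labels (not study data):

```python
# Choosing a risk cutoff by Youden's index J = sensitivity + specificity - 1,
# the criterion the study uses to form its risk groups.
# Scores and labels below are toy values, not study data.

def youden_cutoff(scores, labels):
    """Return (best_threshold, best_J) scanning the observed score values."""
    best_t, best_j = None, -1.0
    pos = sum(labels)
    neg = len(labels) - pos
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        j = tp / pos + tn / neg - 1.0   # sensitivity + specificity - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

scores = [0.1, 0.2, 0.35, 0.4, 0.6, 0.7, 0.8, 0.9]
labels = [0,   0,   0,    1,   0,   1,   1,   1]
threshold, j_stat = youden_cutoff(scores, labels)
```

For these toy values the scan selects 0.4, where three of four negatives fall below the cutoff and all positives sit at or above it.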
Modeling and Performance Evaluation of Streaming Data Processing System in IoT Architecture
8
Authors: Feng Zhu, Kailin Wu, Jie Ding. Computers, Materials & Continua, 2025, No. 5, pp. 2573-2598 (26 pages)
With the widespread application of Internet of Things (IoT) technology, the processing of massive real-time streaming data poses significant challenges to the computational and data-processing capabilities of systems. Although distributed streaming data processing frameworks such as Apache Flink and Apache Spark Streaming provide solutions, meeting stringent response time requirements while ensuring high throughput and resource utilization remains an urgent problem. To address this, the study proposes a formal modeling approach based on Performance Evaluation Process Algebra (PEPA), which abstracts the core components and interactions of cloud-based distributed streaming data processing systems. Additionally, a generic service flow generation algorithm is introduced, enabling the automatic extraction of service flows from the PEPA model and the computation of key performance metrics, including response time, throughput, and resource utilization. The novelty of this work lies in the integration of PEPA-based formal modeling with the service flow generation algorithm, bridging the gap between formal modeling and practical performance evaluation for IoT systems. Simulation experiments demonstrate that optimizing the execution efficiency of components can significantly improve system performance. For instance, increasing the task execution rate from 10 to 100 improves system performance by 9.53%, while further increasing it to 200 results in a 21.58% improvement. However, diminishing returns are observed when the execution rate reaches 500, with only a 0.42% gain. Similarly, increasing the number of TaskManagers from 10 to 20 improves response time by 18.49%, but the improvement slows to 6.06% when increasing from 20 to 50, highlighting the importance of co-optimizing component efficiency and resource management to achieve substantial performance gains. This study provides a systematic framework for analyzing and optimizing the performance of IoT systems for large-scale real-time streaming data processing. The proposed approach not only identifies performance bottlenecks but also offers insights into improving system efficiency under different configurations and workloads.
Keywords: system modeling; performance evaluation; streaming data processing; IoT system; PEPA
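PEPA derives its metrics from an underlying continuous-time Markov chain. As a far simpler stand-in (explicitly not PEPA), an M/M/1 queue yields the same three metrics the paper computes and reproduces the diminishing returns of raising a component's execution rate; the arrival rate of 9 jobs per unit time is an invented workload:

```python
# Not PEPA itself: steady-state M/M/1 queue formulas expose the same three
# metrics (utilization, throughput, response time) and show why raising a
# service rate has diminishing returns. The workload is invented.

def mm1_metrics(arrival_rate, service_rate):
    """Steady-state metrics of an M/M/1 queue (stable only if lambda < mu)."""
    assert arrival_rate < service_rate, "queue must be stable"
    utilization = arrival_rate / service_rate
    throughput = arrival_rate              # every job eventually completes
    response_time = 1.0 / (service_rate - arrival_rate)
    return utilization, throughput, response_time

lam = 9.0
t_by_rate = {mu: mm1_metrics(lam, mu)[2] for mu in (10.0, 100.0, 200.0)}
```

Going from rate 10 to 100 collapses the response time, while going from 100 to 200 buys almost nothing, mirroring the saturation effect the simulation experiments report.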
Data Gathering Based on Hybrid Energy Efficient Clustering Algorithm and DCRNN Model in Wireless Sensor Network
9
Authors: Li Cuiran, Liu Shuqi, Xie Jianli, Liu Li. China Communications, 2025, No. 3, pp. 115-131 (17 pages)
To solve the problems of short network lifetime and high data transmission delay in data gathering for wireless sensor networks (WSNs), caused by uneven energy consumption among nodes, a hybrid energy-efficient clustering routing scheme based on the firefly and pigeon-inspired algorithm (FF-PIA) is proposed to optimise the data transmission path. After the optimal number of cluster head (CH) nodes is obtained, the result is taken as the basis for producing the initial population of the FF-PIA algorithm. The Lévy flight mechanism and adaptive inertia weighting are employed in the algorithm iteration to balance the contradiction between global search and local search. Moreover, a Gaussian perturbation strategy is applied to update the optimal solution, ensuring the algorithm can jump out of local optima. In addition, for WSN data gathering, a one-dimensional signal reconstruction algorithm model is developed using dilated convolution and residual neural networks (DCRNN). We conducted experiments on a National Oceanic and Atmospheric Administration (NOAA) dataset, which show that the DCRNN model-driven data reconstruction algorithm improves both reconstruction accuracy and reconstruction time performance. Co-simulation of FF-PIA and DCRNN clustering routing reveals that the proposed algorithm can effectively extend the network lifetime and reduce data transmission delay.
Keywords: clustering; data gathering; DCRNN model; network lifetime; wireless sensor network
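The Lévy flight mechanism can be sketched with Mantegna's algorithm, a standard way to draw heavy-tailed step lengths for metaheuristics. The stability exponent beta = 1.5 is a common default, not necessarily the paper's value:

```python
import math
import random

# Levy flight step generator (Mantegna's algorithm): occasional very long
# jumps aid global search while most steps stay small for local search.
# beta = 1.5 is a common default, not necessarily the paper's setting.

def mantegna_sigma(beta):
    """Scale of the numerator Gaussian in Mantegna's algorithm."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    return (num / den) ** (1 / beta)

def levy_step(rng, beta=1.5):
    u = rng.gauss(0.0, mantegna_sigma(beta))
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)   # heavy-tailed step length

rng = random.Random(42)
steps = [levy_step(rng) for _ in range(1000)]
```

In FF-PIA-style iterations a candidate position would be perturbed by `levy_step` scaled by the distance to the current best, trading off exploration against exploitation.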
A deep residual intelligent model for ENSO prediction by incorporating coupled model forecast data
10
Authors: Chunyang Song, Xuefeng Zhang, Xingrong Chen, Hua Jiang, Liang Zhang, Yongyong Huang. Acta Oceanologica Sinica, 2025, No. 8, pp. 133-142 (10 pages)
The El Niño-Southern Oscillation (ENSO) is a naturally recurring interannual climate fluctuation that affects the global climate system. The advent of deep learning-based approaches has led to transformative changes in ENSO forecasting, resulting in significant progress. However, most deep learning-based ENSO prediction models rely solely on reanalysis data, which can lead to intensity underestimation in long-term forecasts and reduced forecasting skill. To this end, we propose a deep residual-coupled model prediction (Res-CMP) model, which integrates historical reanalysis data and coupled model forecast data for multiyear ENSO prediction. Res-CMP is designed as a lightweight model that leverages only short-term reanalysis data and nudging-assimilation prediction results from the Community Earth System Model (CESM) for effective prediction of the Niño 3.4 index. We also developed a transfer learning strategy for this model to overcome the limitation of inadequate forecast data. After determining the optimal configuration, which included selecting a suitable transfer learning rate during training along with the input variables and CESM forecast lengths, Res-CMP demonstrated high correlation skill for 19-month lead-time predictions (correlation coefficients exceeding 0.5). The Res-CMP model also alleviated the spring predictability barrier (SPB). When validated against actual ENSO events, Res-CMP successfully captured the temporal evolution of the Niño 3.4 index during La Niña events (1998/99 and 2020/21) and El Niño events (2009/10 and 2015/16). Our proposed model has the potential to further enhance ENSO prediction performance by using coupled models to assist deep learning methods.
Keywords: ENSO prediction; deep learning; dynamical coupled model; data incorporation
Controlling update distance and enhancing fair trainable prototypes in federated learning under data and model heterogeneity
11
Authors: Kangning Yin, Zhen Ding, Xinhui Ji, Zhiguo Wang. Defence Technology, 2025, No. 5, pp. 15-31 (17 pages)
Heterogeneous federated learning (HtFL) has gained significant attention due to its ability to accommodate diverse models and data from distributed combat units. Prototype-based HtFL methods were proposed to reduce the high communication cost of transmitting model parameters: they allow only class representatives to be shared between heterogeneous clients while maintaining privacy. However, existing prototype learning approaches fail to take the data distribution of clients into consideration, which results in suboptimal global prototype learning and insufficient client model personalization. To address these issues, we propose a fair trainable prototype federated learning (FedFTP) algorithm, which employs a fair sampling training prototype (FSTP) mechanism and a hyperbolic space constraints (HSC) mechanism to enhance the fairness and effectiveness of prototype learning on the server in heterogeneous environments. Furthermore, a local prototype stable update (LPSU) mechanism, based on contrastive learning, is proposed as a means of maintaining personalization while promoting global consistency. Comprehensive experimental results demonstrate that FedFTP achieves state-of-the-art performance in HtFL scenarios.
Keywords: heterogeneous federated learning; model heterogeneity; data heterogeneity; contrastive learning
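Prototype sharing in HtFL comes down to exchanging per-class mean feature vectors instead of model weights. A minimal sketch of the client and server sides, with invented toy vectors (the paper's fairness and hyperbolic mechanisms are not reproduced here):

```python
# Prototype-based HtFL in miniature: each client shares only per-class
# feature means (prototypes); the server aggregates them weighted by
# client sample counts. Feature vectors below are invented toy data.

def local_prototypes(features_by_class):
    """Per-class mean feature vector computed on one client."""
    protos = {}
    for cls, vecs in features_by_class.items():
        dim = len(vecs[0])
        protos[cls] = [sum(v[d] for v in vecs) / len(vecs) for d in range(dim)]
    return protos

def aggregate(client_protos, client_counts):
    """Server-side sample-count-weighted average of client prototypes."""
    global_protos = {}
    classes = {c for protos in client_protos for c in protos}
    for cls in classes:
        num, total = None, 0
        for protos, counts in zip(client_protos, client_counts):
            if cls not in protos:
                continue
            w = counts[cls]
            total += w
            vec = [w * x for x in protos[cls]]
            num = vec if num is None else [a + b for a, b in zip(num, vec)]
        global_protos[cls] = [x / total for x in num]
    return global_protos

c1 = local_prototypes({"cat": [[1.0, 0.0], [3.0, 0.0]]})   # client mean (2, 0)
c2 = local_prototypes({"cat": [[6.0, 2.0]]})               # client mean (6, 2)
g = aggregate([c1, c2], [{"cat": 2}, {"cat": 1}])
```

Sample-count weighting is what naive averaging gets wrong under skewed client distributions; FSTP's fair sampling addresses the same imbalance at training time.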
Research on the Evaluation Model of Software Talent Cultivation Based on Multivariate Data Fusion
12
Authors: Yin Chen, Haoxuan Tang, Lei Zhang, Tonghua Su, Zhongjie Wang, Ruihan Hu, Shanli Xie. 计算机教育 (Computer Education), 2025, No. 3, pp. 130-137 (8 pages)
This paper proposes a multivariate-data-fusion-based quality evaluation model for software talent cultivation. The model constructs a comprehensive ability and quality evaluation index system for college students from an engineering education perspective, with emphasis on software engineering. As for the evaluation method, relying on the behavioral data of students during their school years, we aim to construct the evaluation model as objectively as possible, effectively weakening the negative impact of personal subjective assumptions on the evaluation results.
Keywords: quality evaluation model; software talent cultivation; behavioral data
Generating high-resolution climate data in the Andes using artificial intelligence:A lightweight alternative to the WRF model
13
Authors: Christian Carhuancho, Edwin Villanueva, Christian Yarleque, Romel Erick Principe, Marcia Castromonte. Artificial Intelligence in Geosciences, 2025, No. 2, pp. 86-100 (15 pages)
In weather forecasting, generating atmospheric variables for regions with complex topography, such as the Andes with peaks reaching 6,500 m above sea level, poses significant challenges. Traditional regional climate models often struggle to accurately represent atmospheric behavior in such areas. Furthermore, the capability to produce data at high spatio-temporal resolution (finer than 27 km, hourly) is limited to a few institutions globally due to the substantial computational resources required. This study presents atmospheric data generated using a new type of artificial intelligence (AI) model, aimed at reducing the computational cost of producing downscaled climate data compared with regional climate models such as the Weather Research and Forecasting (WRF) model over the Andes. The WRF model was selected for this comparison due to its frequent use in simulating atmospheric variables in the Andes. Our results demonstrate strong downscaling performance for the four target weather variables studied (temperature, relative humidity, and zonal and meridional wind) over coastal, mountain, and jungle regions. Moreover, the AI model offers several advantages, including lower computational cost than dynamical models like WRF and continuous improvement potential with additional training data.
Keywords: Andean regions; atmospheric variables; regional climate models; Weather Research and Forecasting (WRF); artificial intelligence (AI); computational cost; deep learning models; RNN models; climate data generation
Efficient deep-learning-based surrogate model for reservoir production optimization using transfer learning and multi-fidelity data
14
Authors: Jia-Wei Cui, Wen-Yue Sun, Hoonyoung Jeong, Jun-Rong Liu, Wen-Xin Zhou. Petroleum Science, 2025, No. 4, pp. 1736-1756 (21 pages)
In the realm of subsurface flow simulation, deep-learning-based surrogate models have emerged as a promising alternative to traditional simulation methods, especially for complex optimization problems. However, a significant challenge lies in the large number of high-fidelity training simulations needed to construct these deep-learning models, which limits their application to field-scale problems. To overcome this limitation, we introduce a training procedure that leverages transfer learning with multi-fidelity training data to construct surrogate models efficiently. The procedure begins with pre-training the surrogate model on a relatively large amount of data that can be generated efficiently from upscaled coarse-scale models. The model parameters are then fine-tuned with a much smaller set of high-fidelity simulation data. For the cases considered in this study, this method leads to about a 75% reduction in total computational cost compared with the traditional training approach, without any sacrifice of prediction accuracy. In addition, a dedicated well-control embedding model is added to the traditional U-Net architecture to improve the surrogate model's prediction accuracy, which proves particularly effective when dealing with large-scale reservoir models under time-varying well-control parameters. Comprehensive results and analyses are presented for the prediction of well rates and of the pressure and saturation states of a 3D synthetic reservoir system. Finally, the proposed procedure is applied to a field-scale production optimization problem, where the trained surrogate model provides excellent generalization during optimization: the final optimized net present value is much higher than those in the training data ranges.
Keywords: subsurface flow simulation; surrogate model; transfer learning; multi-fidelity training data; production optimization
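The pre-train-then-fine-tune recipe can be miniaturized to a 1-D linear "surrogate": pre-train on plentiful but biased low-fidelity data, then fine-tune on a handful of high-fidelity points. All numbers below are illustrative, and the neural surrogate is replaced by plain gradient descent on a line fit:

```python
# Transfer learning with multi-fidelity data, shrunk to a toy problem:
# pre-train y ~ w*x + b on cheap biased "coarse-model" data, then
# fine-tune on three accurate points. All data are invented.

def gd_fit(points, w=0.0, b=0.0, lr=0.05, steps=200):
    """Plain gradient descent on mean squared error for y ~ w*x + b."""
    n = len(points)
    for _ in range(steps):
        gw = sum(2 * (w * x + b - y) * x for x, y in points) / n
        gb = sum(2 * (w * x + b - y) for x, y in points) / n
        w, b = w - lr * gw, b - lr * gb
    return w, b

def mse(points, w, b):
    return sum((w * x + b - y) ** 2 for x, y in points) / len(points)

low_fi = [(x / 10, 2.0 * (x / 10) + 0.5) for x in range(-10, 11)]  # cheap, biased
high_fi = [(-0.5, -0.8), (0.0, 0.3), (0.5, 1.4)]                   # few, accurate

w0, b0 = gd_fit(low_fi)                              # pre-training
w_ft, b_ft = gd_fit(high_fi, w=w0, b=b0, steps=20)   # fine-tune from pre-trained
w_sc, b_sc = gd_fit(high_fi, steps=20)               # from scratch, same budget
```

With the same small fine-tuning budget, starting from the pre-trained parameters lands far closer to the high-fidelity truth than training from scratch, which is the cost saving the paper exploits.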
Employment of an Arctic sea-ice data assimilation scheme in the coupled climate system model FGOALS-f3-L and its preliminary results
15
Authors: Yuyang Guo, Yongqiang Yu, Jiping Liu. Atmospheric and Oceanic Science Letters, 2025, No. 4, pp. 27-34 (8 pages)
Arctic sea ice is an important component of the global climate system and has experienced rapid changes in the past few decades; its prediction is a significant application of climate models. In this study, a Localized Error Subspace Transform Kalman Filter is employed in a coupled climate system model (the Flexible Global Ocean-Atmosphere-Land System Model, version f3-L (FGOALS-f3-L)) to assimilate sea-ice concentration (SIC) and sea-ice thickness (SIT) data for melting-season ice predictions. The scheme is applied through the following steps: (1) initialization, generating the initial ensembles; (2) analysis, assimilating the observed data; (3) adoption, dividing ice states into five thickness categories; (4) forecast, evolving the model; and (5) resampling, updating model uncertainties. Several experiments were conducted to examine the results and impacts. Compared with the control experiment, the continuous assimilation experiments (CTNs) indicate that assimilation improves modeled SICs and SITs persistently and generates realistic initial conditions. Assimilating SIC+SIT data corrects the spatially overestimated model SITs better than assimilating SIC data alone. The continuous assimilation restart experiments indicate that the initial conditions from the CTNs correct the overestimated marginal SICs and overall SITs remarkably well, as well as the cold biases in the oceanic and atmospheric models. The initial conditions with SIC+SIT assimilated show more reasonable spatial improvements. Nevertheless, the SICs in the central Arctic undergo abnormal summer reductions, probably because the overestimated SITs are reduced in the initial conditions while the strong seasonal-cycle (summer melting) biases are unchanged. Therefore, since systematic biases are complicated in a coupled system, oceanic and atmospheric assimilation is expected to be required for FGOALS-f3-L to make better ice predictions.
Keywords: Arctic sea ice; data assimilation; coupled climate system model; FGOALS-f3-L
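At the heart of ensemble Kalman-type filters such as the LESTKF is a variance-weighted analysis update that blends a model forecast with an observation. A scalar sketch with invented numbers (the real filter works on a localized ensemble of full model states):

```python
# One scalar Kalman analysis step, the core update inside ensemble filters:
# blend a model forecast with an observation according to their error
# variances. All numbers below are invented for illustration.

def kalman_update(x_f, var_f, y_obs, var_o):
    """Return analysis mean and variance for a scalar state."""
    k = var_f / (var_f + var_o)    # Kalman gain: trust the less uncertain source
    x_a = x_f + k * (y_obs - x_f)  # mean is pulled toward the observation
    var_a = (1.0 - k) * var_f      # analysis is more certain than the forecast
    return x_a, var_a

# Overestimated forecast SIC of 0.9 (variance 0.04) vs. observed 0.6 (variance 0.01)
x_a, var_a = kalman_update(0.9, 0.04, 0.6, 0.01)
```

Because the observation error here is smaller than the forecast error, the analysis lands much nearer the observation, which is how assimilation pulls down the model's overestimated ice states.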
A systematic data-driven modelling framework for nonlinear distillation processes incorporating data intervals clustering and new integrated learning algorithm
16
Authors: Zhe Wang, Renchu He, Jian Long. Chinese Journal of Chemical Engineering, 2025, No. 5, pp. 182-199 (18 pages)
The distillation process is an important chemical process,and the application of data-driven modelling approach has the potential to reduce model complexity compared to mechanistic modelling,thus improving the efficie... The distillation process is an important chemical process,and the application of data-driven modelling approach has the potential to reduce model complexity compared to mechanistic modelling,thus improving the efficiency of process optimization or monitoring studies.However,the distillation process is highly nonlinear and has multiple uncertainty perturbation intervals,which brings challenges to accurate data-driven modelling of distillation processes.This paper proposes a systematic data-driven modelling framework to solve these problems.Firstly,data segment variance was introduced into the K-means algorithm to form K-means data interval(KMDI)clustering in order to cluster the data into perturbed and steady state intervals for steady-state data extraction.Secondly,maximal information coefficient(MIC)was employed to calculate the nonlinear correlation between variables for removing redundant features.Finally,extreme gradient boosting(XGBoost)was integrated as the basic learner into adaptive boosting(AdaBoost)with the error threshold(ET)set to improve weights update strategy to construct the new integrated learning algorithm,XGBoost-AdaBoost-ET.The superiority of the proposed framework is verified by applying this data-driven modelling framework to a real industrial process of propylene distillation. 展开更多
Keywords: Integrated learning algorithm; Data interval clustering; Feature selection; Application of artificial intelligence in the distillation industry; Data-driven modelling
Data-Enhanced Low-Cycle Fatigue Life Prediction Model Based on Nickel-Based Superalloys
17
Authors: Luopeng Xu, Lei Xiong, Rulun Zhang, Jiajun Zheng, Huawei Zou, Zhixin Li, Xiaopeng Wang, Qingyuan Wang. Acta Mechanica Solida Sinica, 2025, No. 4, pp. 612–623.
To overcome the challenge of limited experimental data and improve the accuracy of empirical formulas, we propose a low-cycle fatigue (LCF) life prediction model for nickel-based superalloys using a data augmentation method. The method uses a variational autoencoder (VAE) to generate low-cycle fatigue data and form an augmented dataset. The Pearson correlation coefficient (PCC) is employed to verify the similarity of feature distributions between the original and augmented datasets. Six machine learning models, namely random forest (RF), artificial neural network (ANN), support vector machine (SVM), gradient-boosted decision tree (GBDT), eXtreme Gradient Boosting (XGBoost), and Categorical Boosting (CatBoost), are used to predict the LCF life of nickel-based superalloys. Results indicate that the proposed VAE-based data augmentation method can effectively expand the dataset, and that the mean absolute error (MAE), root mean square error (RMSE), and R-squared (R²) values achieved by the CatBoost model, 0.0242, 0.0391, and 0.9538, respectively, are superior to those of the other models. The proposed method reduces the cost and time associated with LCF experiments and accurately establishes the relationship between fatigue characteristics and the LCF life of nickel-based superalloys.
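The three reported scores (MAE, RMSE, R²) follow their standard definitions and can be computed directly; note that fatigue life is often modelled on a log scale, which would be an additional assumption not stated in the abstract:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Return (MAE, RMSE, R^2) for a set of predictions."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    mae = np.abs(err).mean()                       # mean absolute error
    rmse = np.sqrt((err ** 2).mean())              # root mean square error
    ss_res = (err ** 2).sum()                      # residual sum of squares
    ss_tot = ((y_true - y_true.mean()) ** 2).sum() # total sum of squares
    r2 = 1.0 - ss_res / ss_tot                     # coefficient of determination
    return mae, rmse, r2
```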
Keywords: Nickel-based superalloy; Low-cycle fatigue (LCF); Fatigue life prediction; Data augmentation method; Machine learning model; Variational autoencoder (VAE)
Construction of the sea surface wind field of Typhoon Chaba based on wind field model and CMEMS data
18
Authors: Zijing OU, Tianyu ZHANG, Danchen YAN, Yulin WANG, Junping ZHANG, Hao NING, Cheng CHI, Lengjian CHEN. Journal of Oceanology and Limnology, 2025, No. 6, pp. 1754–1768.
Typhoon Chaba was the most intense typhoon to strike western Guangdong since Typhoon Mujigae in 2015. According to the National Disaster Reduction Center of China, as of the morning of July 7, 2022, over 1.5 million people in Guangdong, Guangxi, and Hainan had been affected by Typhoon Chaba. The typhoon also left the "Fukui 001" ship in distress in the waters near Yangjiang, Guangdong, on July 2, resulting in heavy casualties. Studies have indicated that the wind field forecast for Typhoon Chaba was not accurate. To better simulate typhoon events and assess their impacts, we propose using a model wind field (Fujita–Takahashi) integrated with Copernicus Marine and Environmental Monitoring Service (CMEMS) data to effectively reconstruct the overall wind field of Typhoon Chaba. The simulation results align well with observations, particularly at the Dashu Island station, showing consistent trends in wind speed changes. However, certain limitations were noted: the modelled wind speed attenuates more slowly than observed as the typhoon nears land, indicating that the model simulates the ocean wind field with high accuracy but may deviate near coastal areas. The results are accurate over the open sea but deviate near land because of the land friction effect. Therefore, we recommend adjusting the model to improve its accuracy near coasts.
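The Fujita model referred to here is commonly written as the radial pressure profile P(r) = P∞ − ΔP/√(1 + 2(r/r₀)²), from which a gradient wind can be derived. A hedged sketch under that form (the air density, profile coefficients, and pure gradient-wind balance are illustrative assumptions; the paper's actual scheme also blends the Takahashi profile and CMEMS data):

```python
import numpy as np

RHO_AIR = 1.15  # near-surface air density, kg/m^3 (assumed)

def fujita_pressure(r, p_inf, dp, r0):
    """Fujita-type profile: P(r) = p_inf - dp / sqrt(1 + 2 (r/r0)^2), SI units."""
    return p_inf - dp / np.sqrt(1.0 + 2.0 * (r / r0) ** 2)

def gradient_wind(r, dp, r0, lat_deg):
    """Gradient wind speed (m/s) implied by the pressure profile at radius r."""
    f = 2.0 * 7.292e-5 * np.sin(np.radians(lat_deg))   # Coriolis parameter
    # Analytic radial pressure gradient of the profile above
    dpdr = 2.0 * dp * r / r0**2 * (1.0 + 2.0 * (r / r0) ** 2) ** -1.5
    return np.sqrt(r * dpdr / RHO_AIR + (f * r / 2.0) ** 2) - f * r / 2.0
```

With a 40 hPa central pressure deficit and r0 = 50 km at 21°N, the implied wind weakens with distance outside the core, which is the qualitative behaviour a parametric typhoon wind field needs.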
Keywords: Typhoon; Sea surface wind field; Typhoon Chaba; Fusion wind field model; Copernicus Marine and Environmental Monitoring Service (CMEMS) wind field data
Do Higher Horizontal Resolution Models Perform Better?
19
Author: Shoji KUSUNOKI. Advances in Atmospheric Sciences, 2026, No. 1, pp. 259–262.
Climate model prediction has been improved by enhancing model resolution as well as by implementing sophisticated physical parameterizations and refining data assimilation systems [section 6.1 in Wang et al. (2025)]. For seasonal forecasting and climate projection in the East Asian summer monsoon season, proper simulation of the seasonal migration of rain bands by models remains a challenging and limiting factor [section 7.1 in Wang et al. (2025)].
Keywords: Model resolution; Data assimilation systems; Climate model; Climate projection; Higher horizontal resolution; Seasonal forecasting; Seasonal migration of rain bands
AI-driven integration of multi-omics and multimodal data for precision medicine
20
Author: Heng-Rui Liu. Medical Data Mining, 2026, No. 1, pp. 1–2.
High-throughput transcriptomics has evolved from bulk RNA-seq to single-cell and spatial profiling, yet its clinical translation still depends on effective integration across diverse omics and data modalities. Emerging foundation models and multimodal learning frameworks are enabling scalable and transferable representations of cellular states, while advances in interpretability and real-world data integration are bridging the gap between discovery and clinical application. This paper outlines a concise roadmap for AI-driven, transcriptome-centered multi-omics integration in precision medicine (Figure 1).
Keywords: High-throughput transcriptomics; Multi-omics; Single cell; Multimodal learning frameworks; Foundation models; Omics data modalities; AI-driven precision medicine