Oxide dispersion strengthened (ODS) alloys are extensively used owing to the high thermostability and creep strength contributed by uniformly dispersed fine oxide particles. However, these strengthening particles also deteriorate processability, so it is of great importance to establish accurate processing maps that guide thermomechanical processing and enhance formability. In this study, we developed a particle swarm optimization-based back-propagation artificial neural network model to predict the high-temperature flow behavior of 0.25 wt% Al2O3 particle-reinforced Cu alloys, and compared its accuracy with that of an Arrhenius-type constitutive model and a plain back-propagation artificial neural network model. To train these models, we obtained the raw data by fabricating ODS Cu alloys using the internal oxidation and reduction method and conducting systematic hot compression tests between 400 and 800 °C at strain rates of 10^(-2)-10 s^(-1). Finally, processing maps for the ODS Cu alloys were proposed by combining processing parameters, mechanical behavior, microstructure characterization, and the modeling results, which achieved a coefficient of determination higher than 99%.
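The PSO-BP idea in this abstract can be sketched compactly: a particle swarm searches the weight space of a small back-propagation-style network instead of (or before) gradient training. The network size, swarm hyperparameters, and synthetic target below are illustrative assumptions, not the authors' actual setup or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for (temperature, strain rate, strain) -> flow stress;
# the paper's real training data comes from hot compression tests.
X = rng.uniform(-1, 1, size=(80, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2]

H = 6                                    # hidden neurons (illustrative)
n_w = 3 * H + H + H + 1                  # W1, b1, W2, b2 flattened

def forward(w, X):
    """One-hidden-layer network with tanh activation."""
    W1 = w[:3 * H].reshape(3, H)
    b1 = w[3 * H:4 * H]
    W2 = w[4 * H:5 * H]
    b2 = w[5 * H]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def mse(w):
    return np.mean((forward(w, X) - y) ** 2)

# Particle swarm optimization over the flattened network weights.
n_particles = 30
pos = rng.normal(0, 0.5, (n_particles, n_w))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([mse(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(200):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([mse(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print(f"final MSE: {mse(gbest):.4f}")
```

In a full PSO-BP hybrid, the swarm's best weights would seed a subsequent gradient-descent fine-tuning pass.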
This systematic review comprehensively examines and compares deep learning methods for brain tumor segmentation and classification using MRI and other imaging modalities, focusing on trends from 2022 to 2025. The primary objective is to evaluate methodological advancements, model performance, dataset usage, and remaining challenges in developing clinically robust AI systems. We included peer-reviewed journal articles and high-impact conference papers published between 2022 and 2025, written in English, that proposed or evaluated deep learning methods for brain tumor segmentation and/or classification. Non-open-access publications, books, and non-English articles were excluded. A structured search was conducted across Scopus, Google Scholar, Wiley, and Taylor & Francis, with the last search performed in August 2025. Risk of bias was not formally quantified but was considered during full-text screening based on dataset diversity, validation methods, and availability of performance metrics. We used narrative synthesis and tabular benchmarking to compare performance metrics (e.g., accuracy, Dice score) across model types (CNN, Transformer, hybrid), imaging modalities, and datasets. A total of 49 studies were included (43 journal articles and 6 conference papers). These studies spanned more than 9 public datasets (e.g., BraTS, Figshare, REMBRANDT, MOLAB) and utilized a range of imaging modalities, predominantly MRI. Hybrid models, especially ResViT and UNetFormer, consistently achieved high performance, with classification accuracy exceeding 98% and segmentation Dice scores above 0.90 across multiple studies. Transformers and hybrid architectures showed increasing adoption after 2023. Many studies lacked external validation and were evaluated only on a few benchmark datasets, raising concerns about generalizability and dataset bias. Few studies addressed clinical interpretability or uncertainty quantification. Despite promising results, particularly for hybrid deep learning models, widespread clinical adoption remains limited by the lack of validation, interpretability concerns, and real-world deployment barriers.
Deep learning-based methods have become alternatives to traditional numerical weather prediction systems, offering faster computation and the ability to exploit large historical datasets. However, applying deep learning to medium-range regional weather forecasting with limited data remains a significant challenge. In this work, three key solutions are proposed: (1) to improve model performance in data-scarce regional forecasting scenarios, the authors innovatively apply semantic segmentation models to better capture spatiotemporal features and improve prediction accuracy; (2) recognizing the challenge of overfitting and the inability of traditional noise-based data augmentation to effectively enhance robustness, a novel learnable Gaussian noise mechanism is introduced that allows the model to adaptively optimize perturbations for different locations, ensuring more effective learning; and (3) to address error accumulation in autoregressive prediction, as well as the learning difficulty and lack of intermediate data utilization in one-shot prediction, the authors propose a cascade prediction approach that resolves these problems while significantly improving forecasting performance. The method achieved a competitive result in the East China Regional AI Medium-Range Weather Forecasting Competition. Ablation experiments further validate the effectiveness of each component, highlighting their contributions to prediction performance.
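The location-adaptive noise mechanism (solution 2) can be sketched as a forward pass: each grid cell gets its own learnable noise scale. The grid shape, parameterization via softplus, and initial value below are assumptions for illustration; in the paper these scales would be trained jointly with the forecast model, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical regional field: (channels, lat, lon), e.g. temperature/wind grids.
x = rng.standard_normal((4, 32, 32))

# Learnable log-scale per grid location (initialised small); in training these
# parameters would be updated by backprop together with the forecast model.
rho = np.full((1, 32, 32), -3.0)

def augment(x, rho, rng):
    """Add location-adaptive Gaussian noise with sigma = softplus(rho)."""
    sigma = np.log1p(np.exp(rho))          # softplus keeps sigma positive
    return x + sigma * rng.standard_normal(x.shape)

x_aug = augment(x, rho, rng)
print("max perturbation:", np.abs(x_aug - x).max())
```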
BACKGROUND Colorectal polyps are precancerous lesions of colorectal cancer. Early detection and resection of colorectal polyps can effectively reduce colorectal cancer mortality. Endoscopic mucosal resection (EMR) is a common polypectomy procedure in clinical practice, but it has a high postoperative recurrence rate, and no predictive model currently exists for polyp recurrence after EMR. AIM To construct and validate a machine learning (ML) model for predicting the risk of colorectal polyp recurrence one year after EMR. METHODS This study retrospectively collected data from 1694 patients at three medical centers in Xuzhou; an additional 166 patients were enrolled to form a prospective validation set. Feature variables were screened using univariate and multivariate logistic regression analyses, and five ML algorithms were used to construct predictive models. The optimal model was evaluated using multiple performance metrics. Decision curve analysis (DCA) and SHapley Additive exPlanations (SHAP) analysis were performed to assess clinical applicability and predictor importance. RESULTS Multivariate logistic regression analysis identified 8 independent risk factors for colorectal polyp recurrence one year after EMR (P < 0.05). Among the models, eXtreme Gradient Boosting (XGBoost) demonstrated the highest area under the curve (AUC) in the training set, internal validation set, and prospective validation set, with AUCs of 0.909 (95%CI: 0.89-0.92), 0.921 (95%CI: 0.90-0.94), and 0.963 (95%CI: 0.94-0.99), respectively. DCA indicated favorable clinical utility for the XGBoost model. SHAP analysis identified smoking history, family history, and age as the three most important predictors. CONCLUSION The XGBoost model has the best predictive performance and can assist clinicians in providing individualized colonoscopy follow-up recommendations.
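The AUC values this abstract reports compare models by ranking quality. As a reminder of what that metric measures, here is a minimal rank-based AUC (the Mann-Whitney formulation) on synthetic recurrence scores; the data below are hypothetical, not from the study.

```python
import numpy as np

def auc(y_true, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a random positive outranks a random negative."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum()   # pairwise comparisons
    ties = (pos[:, None] == neg[None, :]).sum()  # ties count one half
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Synthetic recurrence scores: positives shifted upward (hypothetical data).
rng = np.random.default_rng(1)
y = np.r_[np.ones(100), np.zeros(300)]
s = np.r_[rng.normal(1.3, 1, 100), rng.normal(0, 1, 300)]
print(f"AUC = {auc(y, s):.3f}")
```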
Post-kidney transplant rejection is a critical factor influencing transplant success rates and the survival of transplanted organs. With the rapid advancement of artificial intelligence, machine learning (ML) has emerged as a powerful data analysis tool, widely applied to the prediction, diagnosis, and mechanistic study of kidney transplant rejection. This mini-review systematically summarizes recent applications of ML techniques to post-kidney transplant rejection, covering the construction of predictive models, identification of biomarkers, analysis of pathological images, assessment of immune cell infiltration, and formulation of personalized treatment strategies. By integrating multi-omics data and clinical information, ML has significantly enhanced the accuracy of early rejection diagnosis and the capability for prognostic evaluation, driving the development of precision medicine in kidney transplantation. Furthermore, this article discusses the challenges faced by existing research and potential future directions, providing a theoretical basis and technical reference for related studies.
As batteries become increasingly essential to energy storage technologies, battery prognosis and diagnosis remain central to ensuring reliable operation and effective management, as well as to aiding in-depth investigation of degradation mechanisms. However, dynamic operating conditions, cell-to-cell inconsistencies, and the limited availability of labeled data pose significant challenges to accurate and robust prognosis and diagnosis. Herein, we introduce a time-series-decomposition-based ensembled lightweight learning model (TELL-Me), which employs a synergistic dual-module framework to deliver accurate and reliable forecasting. The feature module formulates features with physical implications and sheds light on battery aging mechanisms, while the gradient module monitors capacity degradation rates and captures aging trends. TELL-Me achieves high accuracy in end-of-life prediction using minimal historical data from a single battery, without requiring an offline training dataset, and demonstrates impressive generality and robustness across various operating conditions and battery types. Additionally, by correlating feature contributions with degradation mechanisms across different datasets, TELL-Me gains a diagnostic ability that not only enhances prediction reliability but also provides critical insights for the design and optimization of next-generation batteries.
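The first step of a time-series-decomposition approach like the one named here is splitting a capacity series into a smooth trend and a residual, from which a degradation rate can be read off. The sketch below uses a simple moving average on a synthetic fade curve; TELL-Me's actual decomposition and feature/gradient modules are not described at this level of detail, so everything here is an illustrative assumption.

```python
import numpy as np

# Synthetic capacity fade: slow trend plus cycling noise (hypothetical cell).
rng = np.random.default_rng(7)
cycles = np.arange(500)
capacity = 1.0 - 2e-4 * cycles - 5e-8 * cycles**2 \
           + 0.002 * rng.standard_normal(500)

def decompose(series, window=25):
    """Split a capacity series into a smooth trend (moving average)
    and a residual, a first step of trend-based EOL forecasting."""
    kernel = np.ones(window) / window
    pad = window // 2
    padded = np.pad(series, pad, mode="edge")
    trend = np.convolve(padded, kernel, mode="valid")[:len(series)]
    return trend, series - trend

trend, resid = decompose(capacity)
rate = np.gradient(trend)   # per-cycle capacity loss from the trend
print(f"mean fade per cycle: {rate.mean():.2e}")
```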
Background Cotton is one of the most important commercial crops after food crops, especially in countries like India, where it is grown extensively under rainfed conditions. Because of its use in multiple industries, such as the textile, medicine, and automobile industries, it has great commercial importance. The crop's performance is strongly influenced by prevailing weather dynamics, and as the climate changes, assessing how weather variations affect crop performance is essential. Among the available techniques, crop models are the most effective and widely used tools for predicting yields. Results This study compares statistical and machine learning models in their ability to predict cotton yield across the major producing districts of Karnataka, India, using a long-term dataset spanning 1990 to 2023 that includes yield and weather factors. Artificial neural networks (ANNs) performed best, with acceptable yield deviations within ±10% during both the vegetative stage (F1) and mid stage (F2). Model evaluation metrics such as root mean square error (RMSE), normalized root mean square error (nRMSE), and modelling efficiency (EF) were also within acceptable limits in most districts. Furthermore, the tested ANN model was used to assess the importance of the dominant weather factors influencing crop yield in each district. Specifically, morning relative humidity as an individual parameter, and its interaction with maximum and minimum temperature, had a major influence on cotton yield in most of the predicted districts. These differences highlight the distinct interactions of weather factors in each district during cotton yield formation and the individual response of each weather factor under the different soils and management conditions of the major cotton-growing districts of Karnataka. Conclusions Compared with statistical models, machine learning models such as ANNs showed higher efficiency in forecasting cotton yield owing to their ability to capture the interactive effects of weather factors on yield formation at different growth stages. This underscores the suitability of ANNs for yield forecasting under rainfed conditions and for studying the relative impacts of weather factors on yield. The study thus provides valuable insights to support stakeholders in planning effective crop management strategies and formulating relevant policies.
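The three evaluation metrics named in this abstract (RMSE, nRMSE, and EF, the Nash-Sutcliffe modelling efficiency) have standard definitions and are easy to compute directly; the yield numbers below are hypothetical stand-ins for observed vs. ANN-predicted district yields.

```python
import numpy as np

def rmse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.sqrt(np.mean((obs - sim) ** 2))

def nrmse(obs, sim):
    """RMSE normalised by the observed mean, in percent."""
    return 100.0 * rmse(obs, sim) / np.mean(obs)

def ef(obs, sim):
    """Modelling efficiency (Nash-Sutcliffe): 1 is perfect,
    0 means no better than predicting the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Hypothetical district yields (kg/ha): observed vs ANN-predicted.
obs = np.array([520, 610, 480, 700, 650], float)
sim = np.array([540, 590, 500, 680, 660], float)
print(f"RMSE={rmse(obs, sim):.1f}, nRMSE={nrmse(obs, sim):.1f}%, "
      f"EF={ef(obs, sim):.3f}")
```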
Well logging technology has accumulated a large amount of historical data through four generations of technological development, which forms the basis of well logging big data and digital assets. However, the value of these data has not been well stored, managed, or mined. The development of cloud computing provides a rare opportunity to build a logging big data private cloud. The traditional petrophysical evaluation and interpretation model has encountered great challenges when facing new evaluation objects, and research on integrating distributed storage, processing, and learning functions into a logging big data private cloud has not yet been carried out. This work establishes a distributed logging big data private cloud platform centered on a unified learning model, which achieves distributed storage and processing of logging big data and facilitates the learning of novel knowledge patterns via a unified logging learning model that integrates physical simulation and data models in a large-scale function space, thereby addressing the geo-engineering evaluation problem of geothermal fields. Following the research scheme of "logging big data cloud platform - unified logging learning model - large function space - knowledge learning & discovery - application", the theoretical foundation of the unified learning model, the cloud platform architecture, data storage and learning algorithms, computing power allocation and platform monitoring, platform stability, and data security are analyzed. The designed platform realizes parallel distributed storage and processing of data and learning algorithms. The feasibility of constructing a well logging big data cloud platform based on a unified learning model of physics and data is analyzed in terms of the structure, ecology, management, and security of the cloud platform. A case study shows that the logging big data cloud platform has clear technical advantages over traditional logging evaluation methods in terms of knowledge discovery, data, software, and results sharing, accuracy, speed, and the complexity it can handle.
This work proposes an iterative learning model predictive control (ILMPC) approach based on an adaptive fault observer (FOBILMPC) for fault-tolerant control and trajectory tracking in air-breathing hypersonic vehicles. To increase the control authority, this online control law combines model predictive control (MPC) with the concept of iterative learning control (ILC). By using offline data to reduce the errors of the linearized model, the strategy can effectively increase the robustness of the control system and guarantee that disturbances are suppressed. An adaptive fault observer is designed on top of the proposed ILMPC approach to enhance overall fault tolerance by estimating and compensating for actuator disturbances and fault severity. During the derivation, a linearized model of the longitudinal dynamics is established. Numerical simulations demonstrate that the proposed ILMPC approach reduces tracking error and speeds up convergence compared with an offline controller, making it a promising candidate for hypersonic vehicle control system design.
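The ILC ingredient of this scheme is easy to demonstrate in isolation: a feedforward input is refined trial by trial from the stored tracking error. The sketch below applies a P-type ILC update to a toy first-order plant (hypothetical coefficients, not the paper's longitudinal dynamics); the MPC layer and fault observer are omitted.

```python
import numpy as np

# Toy first-order plant standing in for the linearised dynamics:
# x[t+1] = a*x[t] + b*u[t], y = x. (Hypothetical coefficients.)
a, b, T = 0.3, 0.5, 50
ref = np.sin(np.linspace(0, np.pi, T + 1))      # trajectory to track

def run_trial(u):
    x = np.zeros(T + 1)
    for t in range(T):
        x[t + 1] = a * x[t] + b * u[t]
    return x

# P-type iterative learning update: u_{k+1}(t) = u_k(t) + L * e_k(t+1).
# With |1 - L*b| < 1 (here 0.5) the trial-to-trial error contracts.
L, u = 1.0, np.zeros(T)
errors = []
for k in range(30):
    y = run_trial(u)
    e = ref - y
    errors.append(np.abs(e[1:]).max())
    u = u + L * e[1:]

print(f"tracking error: trial 1 = {errors[0]:.3f}, "
      f"trial 30 = {errors[-1]:.2e}")
```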
Sporadic E (Es) layers in the ionosphere are characterized by intense plasma irregularities in the E region at altitudes of 90-130 km. Because they can significantly influence radio communications and navigation systems, accurate forecasting of Es layers is crucial for ensuring the precision and dependability of navigation satellite systems. In this study, we present Es predictions made by an empirical model and by a deep learning model, and analyze their differences comprehensively by comparing the model predictions with satellite radio occultation (RO) measurements and ground-based ionosonde observations. The deep learning model exhibited significantly better performance, as indicated by a high coefficient of correlation (r = 0.87) between its predictions and the RO observations, than the empirical model (r = 0.53). This study highlights the importance of integrating artificial intelligence into ionosphere modelling in general, and into predicting Es layer occurrence and characteristics in particular.
BACKGROUND Ischemic heart disease (IHD) impairs quality of life and has the highest mortality rate among cardiovascular diseases globally. AIM To compare variations in single-lead electrocardiogram (ECG) parameters at rest and during physical exertion in individuals with and without IHD, using vasodilator-induced stress computed tomography (CT) myocardial perfusion imaging as the diagnostic reference standard. METHODS This single-center observational study included 80 participants aged ≥ 40 years who gave informed written consent. Both groups, G1 (n = 31) with and G2 (n = 49) without a post-stress-induced myocardial perfusion defect, underwent cardiologist consultation, anthropometric measurements, blood pressure and pulse rate measurement, echocardiography, cardio-ankle vascular index, bicycle ergometry, and recording of a 3-min single-lead ECG (Cardio-Qvark) before and immediately after bicycle ergometry, followed by CT myocardial perfusion imaging. LASSO regression with nested cross-validation was used to find the association between Cardio-Qvark parameters and the presence of a perfusion defect. Statistical processing was performed with the R programming language v4.2, Python v3.10, and Statistica 12. RESULTS Bicycle ergometry yielded an area under the receiver operating characteristic curve of 50.7% [95% confidence interval (CI): 0.388-0.625], specificity of 53.1% (95%CI: 0.392-0.673), and sensitivity of 48.4% (95%CI: 0.306-0.657). In contrast, the Cardio-Qvark test performed notably better, with an area under the receiver operating characteristic curve of 67% (95%CI: 0.530-0.801), specificity of 75.5% (95%CI: 0.628-0.88), and sensitivity of 51.6% (95%CI: 0.333-0.695). CONCLUSION The single-lead ECG achieved relatively higher diagnostic accuracy than bicycle ergometry when analyzed with machine learning models, although the difference was not statistically significant. Further investigations are required to uncover the hidden capabilities of single-lead ECG in IHD diagnosis.
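The LASSO screen used here selects informative parameters by driving uninformative coefficients exactly to zero. A minimal coordinate-descent LASSO (soft-thresholding) on synthetic stand-ins for ECG-derived parameters is sketched below; the feature count, penalty, and data are illustrative assumptions, and the nested cross-validation loop for choosing the penalty is omitted.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """L1-penalised least squares via coordinate descent
    (soft-thresholding), the core of a LASSO feature screen."""
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ w + X[:, j] * w[j]       # partial residual
            rho = X[:, j] @ r_j
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return w

# Synthetic stand-in for ECG-derived parameters: 2 of 10 are informative.
rng = np.random.default_rng(3)
X = rng.standard_normal((120, 10))
y = 2.0 * X[:, 0] - 1.5 * X[:, 4] + 0.1 * rng.standard_normal(120)
w = lasso_cd(X, y, lam=20.0)
print("selected features:", np.flatnonzero(np.abs(w) > 1e-6))
```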
The high porosity and tunable chemical functionality of metal-organic frameworks (MOFs) make them a promising catalyst design platform, and the availability of large MOF structure databases makes high-throughput screening of catalytic performance feasible. In this study, we report a machine learning model for high-throughput screening of MOF catalysts for the CO_(2) cycloaddition reaction. The descriptors for model training were judiciously chosen according to the reaction mechanism, which leads to an accuracy of up to 97% with the 75% quantile of the training set as the classification criterion. Feature contributions were further evaluated with SHAP and PDP analyses to provide physical insight. Using the model, 12,415 hypothetical MOF structures and 100 reported MOFs were evaluated at 100 °C and 1 bar within one day, and 239 potentially efficient catalysts were discovered. Among them, MOF-76(Y) achieved the top experimental performance among the reported MOFs, in good agreement with the prediction.
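The "75% quantile as classification criterion" step simply converts a continuous catalytic-performance target into binary labels: the top quartile of the training set is "efficient". A sketch with hypothetical yield values:

```python
import numpy as np

# Hypothetical computed performance values for candidate MOFs (arbitrary units).
rng = np.random.default_rng(5)
yields = rng.gamma(shape=2.0, scale=10.0, size=1000)

# The 75% quantile of the training targets becomes the classification
# criterion: the top quartile is labelled "efficient" (1), the rest 0.
threshold = np.quantile(yields, 0.75)
labels = (yields >= threshold).astype(int)

print(f"threshold = {threshold:.1f}, positives = {labels.sum()} / {labels.size}")
```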
Edge Machine Learning (EdgeML) and Tiny Machine Learning (TinyML) are fast-growing fields that bring machine learning to resource-constrained devices, allowing real-time data processing and decision-making at the network's edge. However, the complexity of model conversion techniques, diverse inference mechanisms, and varied learning strategies make designing and deploying these models challenging. Additionally, deploying TinyML models on resource-constrained hardware with specific software frameworks has broadened EdgeML's applications across various sectors. These factors underscore the need for a comprehensive literature review, as current reviews do not systematically cover the most recent findings on these topics. This article therefore provides a comprehensive overview of state-of-the-art techniques in model conversion, inference mechanisms, and learning strategies within EdgeML, and in deploying these models on resource-constrained edge devices using TinyML. It identifies 90 research articles published between 2018 and 2025, categorizing them into two main areas: (1) model conversion, inference, and learning strategies in EdgeML and (2) deployment of TinyML models on resource-constrained hardware using specific software frameworks. In the first category, the synthesis compares and critically reviews various model conversion techniques, inference mechanisms, and learning strategies. In the second, it identifies and elaborates on the major development boards, software frameworks, sensors, and algorithms used in applications across six major sectors. As a result, this article offers valuable insights for researchers, practitioners, and developers, assisting them in choosing suitable model conversion techniques, inference mechanisms, learning strategies, hardware development boards, software frameworks, sensors, and algorithms tailored to their specific needs and applications.
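A core model-conversion step surveyed in such reviews is post-training quantization: mapping float32 weights to 8-bit integers before deployment. The sketch below implements a generic affine (asymmetric) uint8 scheme in NumPy; it illustrates the arithmetic only and is not tied to any particular TinyML toolchain.

```python
import numpy as np

def quantize_uint8(w):
    """Affine (asymmetric) 8-bit quantisation of a float tensor:
    real ≈ scale * (q - zero_point), q in [0, 255]."""
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    zero_point = int(np.round(-lo / scale))
    q = np.clip(np.round(w / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return scale * (q.astype(np.float32) - zero_point)

rng = np.random.default_rng(0)
w = rng.normal(0, 0.2, size=(64, 64)).astype(np.float32)
q, s, z = quantize_uint8(w)
err = np.abs(dequantize(q, s, z) - w).max()
print(f"4x smaller storage, max reconstruction error = {err:.4f}")
```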
Heterogeneous federated learning (HtFL) has gained significant attention for its ability to accommodate diverse models and data from distributed combat units. Prototype-based HtFL methods were proposed to reduce the high communication cost of transmitting model parameters: they share only class representatives between heterogeneous clients while preserving privacy. However, existing prototype learning approaches fail to take the data distribution of clients into consideration, which results in suboptimal global prototype learning and insufficient client model personalization. To address these issues, we propose a fair trainable prototype federated learning (FedFTP) algorithm, which employs a fair sampling training prototype (FSTP) mechanism and a hyperbolic space constraints (HSC) mechanism to enhance the fairness and effectiveness of prototype learning on the server in heterogeneous environments. Furthermore, a local prototype stable update (LPSU) mechanism based on contrastive learning is proposed to maintain personalization while promoting global consistency. Comprehensive experiments demonstrate that FedFTP achieves state-of-the-art performance in HtFL scenarios.
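The prototype-sharing idea underlying such methods can be sketched in a few lines: each client sends only per-class mean embeddings, and the server aggregates them. The sketch below uses count-weighted averaging on synthetic client embeddings (all dimensions and distributions are illustrative); FedFTP's fair sampling, hyperbolic constraints, and trainable prototypes go beyond this baseline.

```python
import numpy as np

rng = np.random.default_rng(2)

def local_prototypes(features, labels, n_classes):
    """Mean embedding per class: the only payload a client shares."""
    return np.stack([features[labels == c].mean(axis=0)
                     for c in range(n_classes)])

# Three hypothetical clients with heterogeneous embeddings (dim 8, 2 classes).
client_protos, counts = [], []
for _ in range(3):
    n = int(rng.integers(20, 40))
    labels = rng.integers(0, 2, n)
    feats = rng.standard_normal((n, 8)) + labels[:, None] * 3.0
    client_protos.append(local_prototypes(feats, labels, 2))
    counts.append([(labels == c).sum() for c in range(2)])

# Server-side aggregation weighted by per-class sample counts.
counts = np.array(counts, dtype=float)            # (clients, classes)
protos = np.array(client_protos)                  # (clients, classes, dim)
weights = counts / counts.sum(axis=0)             # normalise over clients
global_protos = (weights[:, :, None] * protos).sum(axis=0)
print("global prototype shape:", global_protos.shape)
```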
Geared-rotor systems are critical components in mechanical applications, and their performance can be severely affected by faults such as profile errors, wear, pitting, spalling, flaking, and cracks. Profile errors in gear teeth are inevitable in manufacturing and subsequently accumulate during operation. This work predicts the status of gear profile deviations from the gear dynamic response using a digital model of an experimental rig. The digital model comprises detailed CAD models and has been validated against the expected physical behavior using commercial finite element analysis software. Different profile deviations are then modeled using gear charts, and the dynamic response is captured through simulations. Features are extracted by signal processing, and various ML models are evaluated to predict the fault/no-fault condition of the gear. The best performance is achieved by an artificial neural network with a prediction accuracy of 97.5%, confirming the strong influence of profile deviations on the dynamics of the geared-rotor system.
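The signal-processing step in such pipelines typically reduces each simulated response to a handful of statistics. The sketch below computes three common time-domain features on a crude synthetic meshing signal; the specific features and fault signature are illustrative assumptions, not the paper's feature set.

```python
import numpy as np

def vibration_features(x):
    """Time-domain statistics commonly fed to gear-fault classifiers."""
    x = np.asarray(x, dtype=float)
    rms = np.sqrt(np.mean(x ** 2))
    kurtosis = np.mean((x - x.mean()) ** 4) / np.var(x) ** 2
    crest = np.abs(x).max() / rms
    return {"rms": rms, "kurtosis": kurtosis, "crest_factor": crest}

# Synthetic meshing signal: a clean tone vs. one with impact-like bursts
# (a crude stand-in for a profile-deviation signature).
t = np.linspace(0, 1, 4096, endpoint=False)
healthy = np.sin(2 * np.pi * 200 * t)
faulty = healthy.copy()
faulty[::512] += 5.0                       # periodic impacts

print("healthy:", vibration_features(healthy))
print("faulty :", vibration_features(faulty))
```

Impulsive faults show up as elevated kurtosis and crest factor, which is why these features discriminate well before any classifier is involved.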
Fundamental physics often confronts complex symbolic problems with few guiding exemplars or established principles. While artificial intelligence (AI) offers promise, its typical need for vast training datasets hinders its use in these information-scarce frontiers. We introduce learning at criticality (LaC), a reinforcement learning scheme that tunes large language models (LLMs) to a sharp learning transition, addressing this information scarcity. At this transition, LLMs achieve peak generalization from minimal data, exemplified by 7-digit base-7 addition, a test of nontrivial arithmetic reasoning. To elucidate this peak, we analyze a minimal concept-network model designed to capture the essence of how LLMs might link tokens. Trained on a single exemplar, this model also undergoes a sharp learning transition that exhibits hallmarks of a second-order phase transition, notably power-law distributed solution path lengths. At this critical point, the system maximizes a "critical thinking pattern" crucial for generalization, enabled by the underlying scale-free exploration. This suggests that LLMs reach peak performance by operating at criticality, where such explorative dynamics enable the extraction of underlying operational rules. We demonstrate LaC in quantum field theory: an 8B-parameter LLM, tuned to its critical point by LaC using a few exemplars of symbolic Matsubara sums, solves unseen, higher-order problems, significantly outperforming far larger models. LaC thus leverages critical phenomena, a physical principle, to empower AI for complex, data-sparse challenges in fundamental physics.
Driven by rapid technological advancement and economic growth, mineral extraction and metal refining have increased dramatically, generating huge volumes of tailings and mine waste (TMWs). Investigating the morphological fractions of heavy metals and metalloids (HMMs) in TMWs is key to evaluating their leaching potential into the environment; however, traditional experiments are time-consuming and labor-intensive. In this study, 10 machine learning (ML) algorithms were compared for rapidly predicting the morphological fractions of HMMs in TMWs. A dataset comprising 2376 data points was used, with mineral composition, elemental properties, and total concentration as inputs and the concentration of each morphological fraction as output. After grid search optimization, the extra trees model performed best, achieving coefficients of determination (R2) of 0.946 and 0.942 on the validation and test sets, respectively. Electronegativity was found to have the greatest impact on the morphological fraction. Performance was further enhanced by applying an ensemble method to the top three ML models: gradient boosting decision tree, extra trees, and categorical boosting. Overall, the proposed framework can accurately predict the concentrations of the different morphological fractions of HMMs in TMWs, minimizing detection time and aiding the safe management and recovery of TMWs.
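The final ensembling step, averaging the top three models, and the R2 metric it is judged by can be sketched with synthetic predictions. The targets and per-model errors below are hypothetical stand-ins for the GBDT, extra trees, and CatBoost outputs; with roughly independent errors, simple averaging raises R2.

```python
import numpy as np

def r2(y, pred):
    """Coefficient of determination."""
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical test-set targets and predictions from three base models.
rng = np.random.default_rng(11)
y = rng.uniform(0, 100, 300)
preds = [y + rng.normal(0, 8, 300) for _ in range(3)]   # independent errors

ensemble = np.mean(preds, axis=0)                       # simple averaging
for i, p in enumerate(preds, 1):
    print(f"model {i}: R2 = {r2(y, p):.3f}")
print(f"ensemble: R2 = {r2(y, ensemble):.3f}")
```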
Objectives: This study aimed to develop and validate a stroke risk prediction model based on machine learning (ML) and regional healthcare big data, and to determine whether it improves prediction performance compared with the conventional logistic regression (LR) model. Methods: This retrospective cohort study analyzed data from the CHinese Electronic health Records Research in Yinzhou (CHERRY) platform (2015–2021). We included adults aged 18–75 who had established records before 2015. Individuals with pre-existing stroke, missing key data, or excessive missingness (>30%) were excluded. Data on demographics, clinical measures, lifestyle factors, comorbidities, and family history of stroke were collected. Variable selection was performed in two stages: an initial screening via univariate analysis, followed by prioritization of variables based on clinical relevance and actionability, with a focus on those that are modifiable. Stroke prediction models were developed using LR and four ML algorithms: Decision Tree (DT), Random Forest (RF), eXtreme Gradient Boosting (XGBoost), and Back Propagation Neural Network (BPNN). The dataset was split 7:3 into training and validation sets. Performance was assessed using receiver operating characteristic (ROC) curves, calibration, and confusion matrices, and the cutoff value for classifying risk groups was determined by Youden's index. Results: The study cohort comprised 92,172 participants with 436 incident stroke cases (incidence rate: 474/100,000 person-years). Ultimately, 13 predictor variables were included. RF achieved the highest accuracy (0.935), precision (0.923), sensitivity (recall: 0.947), and F1 score (0.935). Model evaluation demonstrated superior predictive performance of the ML algorithms over conventional LR, with training/validation areas under the curve (AUCs) of 0.777/0.779 (LR), 0.921/0.918 (BPNN), 0.988/0.980 (RF), 0.980/0.955 (DT), and 0.962/0.958 (XGBoost). Calibration analysis revealed a better fit for DT, LR, and BPNN than for the RF and XGBoost models. Based on the optimal performance of the RF model, the factors ranked in descending order of importance were: hypertension, age, diabetes, systolic blood pressure, waist circumference, high-density lipoprotein cholesterol, fasting blood glucose, physical activity, BMI, low-density lipoprotein cholesterol, total cholesterol, dietary habits, and family history of stroke. Using Youden's index as the optimal cutoff, the RF model stratified individuals into high-risk (>0.789) and low-risk (≤0.789) groups with robust discrimination. Conclusions: The ML-based prediction models demonstrated superior performance compared with conventional LR, and RF was the optimal prediction model, providing an effective tool for risk stratification in primary stroke prevention in community settings.
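A risk cutoff chosen by Youden's index, as used for the high-/low-risk stratification described above, maximizes J = sensitivity + specificity − 1 over candidate thresholds. A minimal sketch with synthetic scores (not the study's data):

```python
import numpy as np

def youden_cutoff(y_true, y_score):
    """Return the threshold maximizing Youden's J = sensitivity + specificity - 1."""
    best_t, best_j = None, -1.0
    for t in np.unique(y_score):
        pred = y_score >= t
        tp = np.sum(pred & (y_true == 1))
        fn = np.sum(~pred & (y_true == 1))
        tn = np.sum(~pred & (y_true == 0))
        fp = np.sum(pred & (y_true == 0))
        j = tp / (tp + fn) + tn / (tn + fp) - 1.0
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j

# Toy labels and predicted risks, purely illustrative.
y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y_score = np.array([0.1, 0.2, 0.3, 0.6, 0.4, 0.7, 0.8, 0.9])
cutoff, j = youden_cutoff(y_true, y_score)
```

Scores at or above the returned cutoff would be labeled high-risk, as in the abstract's >0.789 rule.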
Machine learning (ML) techniques have emerged as powerful tools for improving the predictive capabilities of Reynolds-averaged Navier-Stokes (RANS) turbulence models in separated flows. This improvement is achieved by leveraging complex ML models, such as those developed using field inversion and machine learning (FIML), to dynamically adjust the constants within the baseline RANS model. However, the ML models often overlook the fundamental calibrations of the RANS turbulence model. Consequently, the basic calibration of the baseline RANS model is disrupted, degrading accuracy, particularly in basic wall-attached flows outside of the training set. To address this issue, a modified version of the Spalart-Allmaras (SA) turbulence model, known as the Rubber-band SA (RBSA) model, has recently been proposed. This modification identifies constraints related to basic wall-attached flows and embeds them directly into the model. It has been shown that no matter how the RBSA model's parameters are adjusted, as long as they remain constants throughout the flow field, its accuracy in wall-attached flows is unaffected. In this paper, we propose a new constraint for the RBSA model that better safeguards the law of the wall in extreme conditions where the model parameter is adjusted dramatically. The resulting model is called the RBSA-poly model. We then show that, when combined with FIML augmentation, the RBSA-poly model effectively preserves the accuracy of simple wall-attached flows even when the adjusted parameters become functions of local flow variables rather than constants. A comparative analysis with the FIML-augmented original SA model reveals that the augmented RBSA-poly model reduces error in basic wall-attached flows by 50% while maintaining comparable accuracy in trained separated flows. These findings confirm the effectiveness of utilizing FIML in conjunction with the RBSA model, offering superior accuracy retention in canonical flows.
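The law of the wall that the RBSA constraint safeguards is, in its logarithmic region, u⁺ = (1/κ) ln y⁺ + B. A minimal sketch with commonly assumed constants κ ≈ 0.41 and B ≈ 5.0 (the paper's specific constraint formulation is not shown here):

```python
import math

kappa, B = 0.41, 5.0  # typical log-law constants (assumed, not from the paper)

def u_plus_log_law(y_plus):
    """Mean streamwise velocity in wall units from the logarithmic law of the wall."""
    return math.log(y_plus) / kappa + B

u100 = u_plus_log_law(100.0)  # velocity at y+ = 100 in the log layer
```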
Funding: This work received financial support from the National Natural Science Foundation of China (No. 52371103); the Fundamental Research Funds for the Central Universities, China (No. 2242023K40028); the Open Research Fund of Jiangsu Key Laboratory for Advanced Metallic Materials, China (No. AMM2023B01); the Research Fund of Shihezi Key Laboratory of Aluminum-Based Advanced Materials, China (No. 2023PT02); and the Guangdong Province Science and Technology Major Project, China (No. 2021B0301030005).
Abstract: Oxide dispersion strengthened (ODS) alloys are extensively used owing to the high thermostability and creep strength contributed by uniformly dispersed fine oxide particles. However, these strengthening particles also deteriorate processability, so it is of great importance to establish accurate processing maps to guide thermomechanical processing and enhance formability. In this study, we developed a particle swarm optimization-based back propagation artificial neural network model to predict the high-temperature flow behavior of 0.25 wt% Al2O3 particle-reinforced Cu alloys, and compared its accuracy with that of an Arrhenius-type constitutive model and a back propagation artificial neural network model. To train these models, we obtained the raw data by fabricating ODS Cu alloys using the internal oxidation and reduction method and conducting systematic hot compression tests between 400 and 800 °C at strain rates of 10^(-2)-10 s^(-1). Finally, processing maps for the ODS Cu alloys were proposed by combining processing parameters, mechanical behavior, and microstructure characterization, and the modeling results achieved a coefficient of determination higher than 99%.
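The Arrhenius-type constitutive model mentioned above is conventionally written via the Zener-Hollomon parameter Z = ε̇·exp(Q/RT) and the hyperbolic-sine law σ = (1/α)·arcsinh[(Z/A)^(1/n)]. A minimal sketch with purely illustrative constants (not the paper's fitted values):

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def flow_stress(strain_rate, T_kelvin, Q, A, alpha, n):
    """Arrhenius-type (hyperbolic-sine) flow stress via the Zener-Hollomon parameter."""
    Z = strain_rate * math.exp(Q / (R * T_kelvin))  # temperature-compensated strain rate
    return (1.0 / alpha) * math.asinh((Z / A) ** (1.0 / n))

# Hypothetical constants for illustration only; real values come from regression
# of the hot-compression data described in the abstract.
sigma = flow_stress(strain_rate=1.0, T_kelvin=973.15, Q=300e3, A=1e15, alpha=0.012, n=6.0)
```

Flow stress predicted this way rises with strain rate and falls with temperature, which is what the processing maps encode.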
Abstract: This systematic review comprehensively examines and compares deep learning methods for brain tumor segmentation and classification using MRI and other imaging modalities, focusing on recent trends from 2022 to 2025. The primary objective is to evaluate methodological advancements, model performance, dataset usage, and existing challenges in developing clinically robust AI systems. We included peer-reviewed journal articles and high-impact conference papers published between 2022 and 2025, written in English, that proposed or evaluated deep learning methods for brain tumor segmentation and/or classification. Excluded were non-open-access publications, books, and non-English articles. A structured search was conducted across Scopus, Google Scholar, Wiley, and Taylor & Francis, with the last search performed in August 2025. Risk of bias was not formally quantified but was considered during full-text screening based on dataset diversity, validation methods, and availability of performance metrics. We used narrative synthesis and tabular benchmarking to compare performance metrics (e.g., accuracy, Dice score) across model types (CNN, Transformer, Hybrid), imaging modalities, and datasets. A total of 49 studies were included (43 journal articles and 6 conference papers). These studies spanned over 9 public datasets (e.g., BraTS, Figshare, REMBRANDT, MOLAB) and utilized a range of imaging modalities, predominantly MRI. Hybrid models, especially ResViT and UNetFormer, consistently achieved high performance, with classification accuracy exceeding 98% and segmentation Dice scores above 0.90 across multiple studies. Transformers and hybrid architectures showed increasing adoption post-2023. Many studies lacked external validation and were evaluated only on a few benchmark datasets, raising concerns about generalizability and dataset bias. Few studies addressed clinical interpretability or uncertainty quantification. Despite promising results, particularly for hybrid deep learning models, widespread clinical adoption remains limited due to lack of validation, interpretability concerns, and real-world deployment barriers.
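The segmentation Dice scores benchmarked above measure overlap between predicted and reference masks, Dice = 2|P∩T| / (|P|+|T|). A minimal sketch on toy binary masks:

```python
import numpy as np

def dice_score(pred, target, eps=1e-8):
    """Dice coefficient between two binary masks: 2*|P∩T| / (|P|+|T|)."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Toy 2x3 "tumor" masks; real inputs would be 2D/3D MRI segmentations.
pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
d = dice_score(pred, target)  # 2*2 / (3+3) = 0.666...
```

A Dice above 0.90, as reported for the top models, means predicted and reference masks overlap almost completely.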
Funding: This work was supported by the National Natural Science Foundation of China [grant number 62376217]; the Young Elite Scientists Sponsorship Program by CAST [grant number 2023QNRC001]; and the Joint Research Project for Meteorological Capacity Improvement [grant number 24NLTSZ003].
Abstract: Deep learning-based methods have become alternatives to traditional numerical weather prediction systems, offering faster computation and the ability to utilize large historical datasets. However, applying deep learning to medium-range regional weather forecasting with limited data remains a significant challenge. In this work, three key solutions are proposed: (1) motivated by the need to improve model performance in data-scarce regional forecasting scenarios, the authors innovatively apply semantic segmentation models to better capture spatiotemporal features and improve prediction accuracy; (2) recognizing the challenge of overfitting and the inability of traditional noise-based data augmentation to effectively enhance model robustness, a novel learnable Gaussian noise mechanism is introduced that allows the model to adaptively optimize perturbations for different locations, ensuring more effective learning; and (3) to address error accumulation in autoregressive prediction, as well as the learning difficulty and lack of intermediate data utilization in one-shot prediction, the authors propose a cascade prediction approach that effectively resolves these problems while significantly improving forecasting performance. The method achieved a competitive result in the East China Regional AI Medium-Range Weather Forecasting Competition. Ablation experiments further validate the effectiveness of each component, highlighting their contributions to enhancing prediction performance.
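The learnable Gaussian noise idea above can be pictured as a per-location noise scale applied through reparameterization, so the scale itself could receive gradients during training. A forward-pass-only NumPy sketch on a toy grid (the actual architecture and training loop are not described in the abstract and are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, C = 8, 8, 4                        # toy regional grid with C weather variables
x = rng.normal(size=(H, W, C))

# Per-location scale parameterized in log space so sigma stays positive; in training
# this tensor would be learnable, here it is simply initialized to sigma = 0.1.
log_sigma = np.full((H, W, 1), np.log(0.1))
eps = rng.standard_normal((H, W, C))
x_aug = x + np.exp(log_sigma) * eps      # reparameterized, location-wise perturbation
```

Because the perturbation is x + σ(h, w)·ε, each grid cell can learn how much augmentation noise it tolerates.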
Abstract: BACKGROUND Colorectal polyps are precancerous lesions of colorectal cancer. Early detection and resection of colorectal polyps can effectively reduce colorectal cancer mortality. Endoscopic mucosal resection (EMR) is a common polypectomy procedure in clinical practice, but it has a high postoperative recurrence rate. Currently, there is no predictive model for the recurrence of colorectal polyps after EMR. AIM To construct and validate a machine learning (ML) model for predicting the risk of colorectal polyp recurrence one year after EMR. METHODS This study retrospectively collected data from 1694 patients at three medical centers in Xuzhou. Additionally, 166 patients were enrolled to form a prospective validation set. Feature variable screening was conducted using univariate and multivariate logistic regression analyses, and five ML algorithms were used to construct the predictive models. The optimal models were evaluated based on different performance metrics. Decision curve analysis (DCA) and SHapley Additive exPlanation (SHAP) analysis were performed to assess clinical applicability and predictor importance. RESULTS Multivariate logistic regression analysis identified 8 independent risk factors for colorectal polyp recurrence one year after EMR (P<0.05). Among the models, eXtreme Gradient Boosting (XGBoost) demonstrated the highest area under the curve (AUC) in the training set, internal validation set, and prospective validation set, with AUCs of 0.909 (95%CI: 0.89-0.92), 0.921 (95%CI: 0.90-0.94), and 0.963 (95%CI: 0.94-0.99), respectively. DCA indicated favorable clinical utility for the XGBoost model. SHAP analysis identified smoking history, family history, and age as the top three most important predictors in the model. CONCLUSION The XGBoost model has the best predictive performance and can assist clinicians in providing individualized colonoscopy follow-up recommendations.
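The decision curve analysis (DCA) used above evaluates a model's net benefit at each threshold probability pt, conventionally NB = TP/n − (FP/n)·pt/(1−pt). A minimal sketch on synthetic predictions (not the study's data):

```python
import numpy as np

def net_benefit(y_true, y_prob, pt):
    """Net benefit of treating patients with predicted risk >= pt."""
    y_true = np.asarray(y_true)
    y_prob = np.asarray(y_prob)
    n = len(y_true)
    pred = y_prob >= pt
    tp = np.sum(pred & (y_true == 1))
    fp = np.sum(pred & (y_true == 0))
    return tp / n - fp / n * pt / (1.0 - pt)

# Toy cohort: 2 recurrences among 10 patients, illustrative risk scores.
y_true = np.array([1, 1, 0, 0, 0, 0, 0, 0, 0, 0])
y_prob = np.array([0.9, 0.6, 0.7, 0.2, 0.1, 0.3, 0.1, 0.2, 0.1, 0.1])
nb = net_benefit(y_true, y_prob, pt=0.5)
```

Plotting net benefit across a range of pt, against the "treat all" and "treat none" strategies, yields the decision curve.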
Abstract: Post-kidney transplant rejection is a critical factor influencing transplant success rates and the survival of transplanted organs. With the rapid advancement of artificial intelligence technologies, machine learning (ML) has emerged as a powerful data analysis tool, widely applied in the prediction, diagnosis, and mechanistic study of kidney transplant rejection. This mini-review systematically summarizes recent applications of ML techniques in post-kidney transplant rejection, covering areas such as the construction of predictive models, identification of biomarkers, analysis of pathological images, assessment of immune cell infiltration, and formulation of personalized treatment strategies. By integrating multi-omics data and clinical information, ML has significantly enhanced the accuracy of early rejection diagnosis and the capability for prognostic evaluation, driving the development of precision medicine in the field of kidney transplantation. Furthermore, this article discusses the challenges faced in existing research and potential future directions, providing a theoretical basis and technical references for related studies.
Funding: This work was supported by the National Natural Science Foundation of China (22379021 and 22479021).
Abstract: As batteries become increasingly essential for energy storage technologies, battery prognosis and diagnosis remain central to ensuring reliable operation and effective management, as well as to aiding in-depth investigation of degradation mechanisms. However, dynamic operating conditions, cell-to-cell inconsistencies, and the limited availability of labeled data pose significant challenges to accurate and robust prognosis and diagnosis. Herein, we introduce a time-series-decomposition-based ensembled lightweight learning model (TELL-Me), which employs a synergistic dual-module framework to facilitate accurate and reliable forecasting. The feature module formulates features with physical implications and sheds light on battery aging mechanisms, while the gradient module monitors capacity degradation rates and captures the aging trend. TELL-Me achieves high accuracy in end-of-life prediction using minimal historical data from a single battery without requiring an offline training dataset, and demonstrates impressive generality and robustness across various operating conditions and battery types. Additionally, by correlating feature contributions with degradation mechanisms across different datasets, TELL-Me is endowed with a diagnostic ability that not only enhances prediction reliability but also provides critical insights into the design and optimization of next-generation batteries.
Funding: This work was funded through the India Meteorological Department, New Delhi, India, under the Forecasting Agricultural output using Space, Agrometeorology and Land based observations (FASAL) project (fund number: No. ASC/FASAL/KT-11/01/HQ-2010).
Abstract: Background Cotton is one of the most important commercial crops after food crops, especially in countries like India, where it is grown extensively under rainfed conditions. Because of its use in multiple industries, such as the textile, medicine, and automobile industries, it has great commercial importance. The crop's performance is strongly influenced by prevailing weather dynamics. As the climate changes, assessing how weather changes affect crop performance is essential. Among the various available techniques, crop models are the most effective and widely used tools for predicting yields. Results This study compares statistical and machine learning models to assess their ability to predict cotton yield across major producing districts of Karnataka, India, utilizing a long-term dataset spanning 1990 to 2023 that includes yield and weather factors. The artificial neural networks (ANNs) performed best, with acceptable yield deviations within ±10% during both the vegetative stage (F1) and mid stage (F2) for cotton. Model evaluation metrics such as root mean square error (RMSE), normalized root mean square error (nRMSE), and modelling efficiency (EF) were also within the acceptance limits in most districts. Furthermore, the tested ANN model was used to assess the importance of the dominant weather factors influencing crop yield in each district. Specifically, morning relative humidity as an individual parameter, and its interaction with maximum and minimum temperature, had a major influence on cotton yield in most of the districts where yield was predicted. These differences highlighted the differential interactions of weather factors in each district for cotton yield formation, reflecting the individual response of each weather factor under different soil and management conditions across the major cotton-growing districts of Karnataka. Conclusions Compared with statistical models, machine learning models such as ANNs proved more efficient in forecasting cotton yield owing to their ability to consider the interactive effects of weather factors on yield formation at different growth stages. This highlights the suitability of ANNs for yield forecasting in rainfed conditions and for studying the relative impacts of weather factors on yield. Thus, this study provides valuable insights to support stakeholders in planning effective crop management strategies and formulating relevant policies.
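The three evaluation metrics named above have standard definitions: RMSE, nRMSE as RMSE expressed as a percentage of the observed mean, and modelling efficiency EF in the Nash-Sutcliffe sense. A minimal sketch on toy yield values:

```python
import numpy as np

def yield_metrics(obs, pred):
    """RMSE, nRMSE (% of observed mean), and modelling efficiency EF (Nash-Sutcliffe)."""
    obs = np.asarray(obs, dtype=float)
    pred = np.asarray(pred, dtype=float)
    rmse = np.sqrt(np.mean((obs - pred) ** 2))
    nrmse = 100.0 * rmse / obs.mean()
    ef = 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)
    return rmse, nrmse, ef

# Toy observed vs predicted yields (arbitrary units), illustrative only.
obs = [10.0, 12.0, 14.0, 16.0]
pred = [11.0, 12.0, 13.0, 17.0]
rmse, nrmse, ef = yield_metrics(obs, pred)
```

EF = 1 indicates a perfect fit, EF ≤ 0 means the model is no better than predicting the observed mean.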
Funding: This work was supported by Grant PLN2022-14 of the State Key Laboratory of Oil and Gas Reservoir Geology and Exploitation (Southwest Petroleum University).
Abstract: Well logging technology has accumulated a large amount of historical data through four generations of technological development, which forms the basis of well logging big data and digital assets. However, the value of these data has not been well stored, managed, and mined. The development of cloud computing technology provides a rare opportunity for a logging big data private cloud. The traditional petrophysical evaluation and interpretation model has encountered great challenges when facing new evaluation objects, and research on integrating distributed storage, processing, and learning functions into a logging big data private cloud has not yet been carried out. This work establishes a distributed logging big data private cloud platform centered on a unified learning model, which achieves distributed storage and processing of logging big data and facilitates the learning of novel knowledge patterns via a unified logging learning model integrating physical simulation and data models in a large-scale function space, thus resolving the geo-engineering evaluation problems of geothermal fields. Following the research idea of "logging big data cloud platform - unified logging learning model - large function space - knowledge learning & discovery - application", the theoretical foundation of the unified learning model, the cloud platform architecture, data storage and learning algorithms, computing power allocation and platform monitoring, platform stability, and data security are analyzed. The designed logging big data cloud platform realizes parallel distributed storage and processing of data and learning algorithms. The feasibility of constructing a well logging big data cloud platform based on a unified learning model of physics and data is analyzed in terms of the structure, ecology, management, and security of the cloud platform. The case study shows that the logging big data cloud platform has obvious technical advantages over traditional logging evaluation methods in terms of knowledge discovery methods, data, software, and results sharing, accuracy, speed, and complexity.
Funding: This work was supported by the National Natural Science Foundation of China (12072090).
Abstract: This work proposes an iterative learning model predictive control (ILMPC) approach based on an adaptive fault observer (FOBILMPC) for fault-tolerant control and trajectory tracking in air-breathing hypersonic vehicles. This online control law combines model predictive control (MPC) with the concept of iterative learning control (ILC) to enhance control authority. By using offline data to reduce the linearized model's errors, the strategy can effectively increase the robustness of the control system and guarantee that disturbances are suppressed. Based on the proposed ILMPC approach, an adaptive fault observer is designed to enhance overall fault tolerance by estimating and compensating for actuator disturbances and the degree of fault. During the derivation, a linearized model of the longitudinal dynamics is established. Numerical simulations demonstrate that, compared with an offline controller, the proposed ILMPC approach can reduce tracking error and speed up convergence, making it a promising candidate for the design of hypersonic vehicle control systems.
Funding: This work was supported by the Project of Stable Support for Youth Team in Basic Research Field, CAS (Grant No. YSBR-018); the National Natural Science Foundation of China (Grant Nos. 42188101 and 42130204); the B-type Strategic Priority Program of CAS (Grant No. XDB41000000); the National Natural Science Foundation of China (NSFC) Distinguished Overseas Young Talents Program; the Innovation Program for Quantum Science and Technology (2021ZD0300301); the Open Research Project of Large Research Infrastructures of CAS, "Study on the interaction between low/mid-latitude atmosphere and ionosphere based on the Chinese Meridian Project"; the National Key Laboratory of Deep Space Exploration (Grant No. NKLDSE2023A002); the Open Fund of Anhui Provincial Key Laboratory of Intelligent Underground Detection (Grant No. APKLIUD23KF01); and the China National Space Administration (CNSA) pre-research Project on Civil Aerospace Technologies (Nos. D010305 and D010301).
Abstract: Sporadic E (Es) layers in the ionosphere are characterized by intense plasma irregularities in the E region at altitudes of 90-130 km. Because they can significantly influence radio communications and navigation systems, accurate forecasting of Es layers is crucial for ensuring the precision and dependability of navigation satellite systems. In this study, we present Es predictions made by an empirical model and by a deep learning model, and analyze their differences comprehensively by comparing the model predictions to satellite radio occultation (RO) measurements and ground-based ionosonde observations. The deep learning model exhibited significantly better performance, as indicated by the high coefficient of correlation (r=0.87) between its predictions and the RO observations, than the empirical model (r=0.53). This study highlights the importance of integrating artificial intelligence technology into ionosphere modelling in general, and into predicting Es layer occurrences and characteristics in particular.
Funding: Supported by Government Assignment, No. 1023022600020-6; RSF Grant, No. 24-15-00549; and the Ministry of Science and Higher Education of the Russian Federation within the framework of state support for the creation and development of world-class research centers, No. 075-15-2022-304.
Abstract: BACKGROUND Ischemic heart disease (IHD) impacts quality of life and has the highest mortality rate of cardiovascular diseases globally. AIM To compare variations in the parameters of the single-lead electrocardiogram (ECG) at rest and during physical exertion in individuals diagnosed with IHD and those without the condition, using vasodilator-induced stress computed tomography (CT) myocardial perfusion imaging as the diagnostic reference standard. METHODS This single-center observational study included 80 participants. The participants were aged ≥40 years and gave informed written consent to participate in the study. Both groups, G1 (n=31) with and G2 (n=49) without a post-stress-induced myocardial perfusion defect, underwent cardiologist consultation, anthropometric measurements, blood pressure and pulse rate measurement, echocardiography, cardio-ankle vascular index assessment, bicycle ergometry, and 3-min single-lead ECG recording (Cardio-Qvark) before and immediately after bicycle ergometry, followed by CT myocardial perfusion imaging. LASSO regression with nested cross-validation was used to find the association between Cardio-Qvark parameters and the existence of the perfusion defect. Statistical processing was performed with the R programming language v4.2, Python v3.10, and the Statistica 12 program. RESULTS Bicycle ergometry yielded an area under the receiver operating characteristic curve of 50.7% [95% confidence interval (CI): 0.388-0.625], specificity of 53.1% (95%CI: 0.392-0.673), and sensitivity of 48.4% (95%CI: 0.306-0.657). In contrast, the Cardio-Qvark test performed notably better, with an area under the receiver operating characteristic curve of 67% (95%CI: 0.530-0.801), specificity of 75.5% (95%CI: 0.628-0.88), and sensitivity of 51.6% (95%CI: 0.333-0.695). CONCLUSION The single-lead ECG has relatively higher diagnostic accuracy than bicycle ergometry when machine learning models are used, but the difference was not statistically significant. However, further investigations are required to uncover the hidden capabilities of single-lead ECG in IHD diagnosis.
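The AUC values compared above can be computed directly from scores and labels via the Mann-Whitney U statistic: AUC is the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (ties counting one half). A minimal sketch on toy data:

```python
import numpy as np

def roc_auc(y_true, y_score):
    """AUC as P(score_pos > score_neg), with ties counted as 1/2."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score, dtype=float)
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    wins = sum(np.sum(p > neg) + 0.5 * np.sum(p == neg) for p in pos)
    return wins / (len(pos) * len(neg))

# Toy labels (1 = perfusion defect) and model scores, illustrative only.
auc = roc_auc([1, 1, 0, 0], [0.8, 0.6, 0.4, 0.7])
```

An AUC near 0.5, like the 50.7% reported for bicycle ergometry, means the test barely outperforms chance.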
Funding: This work received financial support from the National Key Research and Development Program of China (2021YFB3501501); the National Natural Science Foundation of China (Nos. 22225803, 22038001, 22108007, and 22278011); the Beijing Natural Science Foundation (No. Z230023); and the Beijing Science and Technology Commission (No. Z211100004321001).
Abstract: The high porosity and tunable chemical functionality of metal-organic frameworks (MOFs) make them a promising catalyst design platform. High-throughput screening of catalytic performance is feasible because a large MOF structure database is available. In this study, we report a machine learning model for high-throughput screening of MOF catalysts for the CO_(2) cycloaddition reaction. The descriptors for model training were judiciously chosen according to the reaction mechanism, which leads to an accuracy of up to 97% with the 75% quantile of the training set as the classification criterion. The feature contributions were further evaluated with SHAP and PDP analyses to provide a degree of physical understanding. Using the model, 12,415 hypothetical MOF structures and 100 reported MOFs were evaluated at 100 °C and 1 bar within one day, and 239 potentially efficient catalysts were discovered. Among the reported MOFs, MOF-76(Y) achieved the top experimental performance, in good agreement with the prediction.
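Using "the 75% quantile of the training set as the classification criterion", as above, amounts to binarizing a continuous performance value at its 75th percentile before training a classifier. A minimal sketch with synthetic performance values (the paper's descriptors and model are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)
yields = rng.uniform(0.0, 100.0, size=200)  # hypothetical catalytic performance values

# Classification criterion: top quartile of the training distribution = "promising".
threshold = np.quantile(yields, 0.75)
labels = (yields > threshold).astype(int)
frac_pos = labels.mean()  # by construction, about a quarter of samples are positive
```

A classifier trained on these labels then screens new candidates by predicting whether they fall in the top quartile.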
Abstract: Edge Machine Learning (EdgeML) and Tiny Machine Learning (TinyML) are fast-growing fields that bring machine learning to resource-constrained devices, allowing real-time data processing and decision-making at the network's edge. However, the complexity of model conversion techniques, diverse inference mechanisms, and varied learning strategies make designing and deploying these models challenging. Additionally, deploying TinyML models on resource-constrained hardware with specific software frameworks has broadened EdgeML's applications across various sectors. These factors underscore the necessity of a comprehensive literature review, as current reviews do not systematically cover the most recent findings on these topics. Consequently, this article provides a comprehensive overview of state-of-the-art techniques in model conversion, inference mechanisms, and learning strategies within EdgeML, and in deploying these models on resource-constrained edge devices using TinyML. It identifies 90 research articles published between 2018 and 2025, categorizing them into two main areas: (1) model conversion, inference, and learning strategies in EdgeML and (2) deploying TinyML models on resource-constrained hardware using specific software frameworks. In the first category, the synthesis of the selected research articles compares and critically reviews various model conversion techniques, inference mechanisms, and learning strategies. In the second category, the synthesis identifies and elaborates on the major development boards, software frameworks, sensors, and algorithms used in various applications across six major sectors. As a result, this article provides valuable insights for researchers, practitioners, and developers, assisting them in choosing suitable model conversion techniques, inference mechanisms, learning strategies, hardware development boards, software frameworks, sensors, and algorithms tailored to their specific needs and applications.
Funding: This work was supported by the Natural Science Foundation of Xinjiang Uygur Autonomous Region (No. 2022D01B187).
Abstract: Heterogeneous federated learning (HtFL) has gained significant attention due to its ability to accommodate diverse models and data from distributed combat units. Prototype-based HtFL methods were proposed to reduce the high communication cost of transmitting model parameters; they share only class representatives between heterogeneous clients while maintaining privacy. However, existing prototype learning approaches fail to take the data distribution of clients into consideration, which results in suboptimal global prototype learning and insufficient client model personalization. To address these issues, we propose a fair trainable prototype federated learning (FedFTP) algorithm, which employs a fair sampling training prototype (FSTP) mechanism and a hyperbolic space constraints (HSC) mechanism to enhance the fairness and effectiveness of prototype learning on the server in heterogeneous environments. Furthermore, a local prototype stable update (LPSU) mechanism based on contrastive learning is proposed to maintain personalization while promoting global consistency. Comprehensive experimental results demonstrate that FedFTP achieves state-of-the-art performance in HtFL scenarios.
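The "class representatives" shared in prototype-based HtFL are conventionally the per-class means of local feature embeddings, which is what each client would transmit instead of model weights. A minimal sketch of that basic step (FedFTP's fair sampling and hyperbolic constraints are beyond this sketch):

```python
import numpy as np

def class_prototypes(embeddings, labels):
    """Prototype per class = mean of that class's embedding vectors."""
    protos = {}
    for c in np.unique(labels):
        protos[int(c)] = embeddings[labels == c].mean(axis=0)
    return protos

# Toy 2-D embeddings for two classes on one client, illustrative only.
emb = np.array([[1.0, 0.0], [3.0, 0.0], [0.0, 2.0], [0.0, 4.0]])
lab = np.array([0, 0, 1, 1])
protos = class_prototypes(emb, lab)
```

Clients send only these small per-class vectors to the server, which is why communication cost drops relative to exchanging full model parameters.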
Abstract: Geared-rotor systems are critical components in mechanical applications, and their performance can be severely affected by faults such as profile errors, wear, pitting, spalling, flaking, and cracks. Profile errors in gear teeth are inevitable in manufacturing and subsequently accumulate during operation. This work aims to predict the status of gear profile deviations from the gear dynamic response using a digital model of an experimental rig setup. The digital model comprises detailed CAD models and has been validated against the expected physical behavior using commercial finite element analysis software. The different profile deviations are then modeled using gear charts, and the dynamic response is captured through simulations. Features are then obtained by signal processing, and various ML models are evaluated to predict the fault/no-fault condition of the gear. The best performance is achieved by an artificial neural network, with a prediction accuracy of 97.5%, confirming the strong influence of profile deviations on the dynamics of the geared-rotor system.
Funding: This work was supported by the National Key Research and Development Program of China (Grant No. 2024YFA1408604 for K.C. and X.C.); the National Natural Science Foundation of China (Grant Nos. 12047503 and 12447103 for K.C. and X.C., 12325501 for P.Z., and 12275263 for Y.D. and S.H.); the Innovation Program for Quantum Science and Technology (Grant No. 2021ZD0301900 for Y.D. and S.H.); and the Natural Science Foundation of Fujian Province of China (Grant No. 2023J02032 for Y.D. and S.H.).
Abstract: Fundamental physics often confronts complex symbolic problems with few guiding exemplars or established principles. While artificial intelligence (AI) offers promise, its typical need for vast training datasets hinders its use in these information-scarce frontiers. We introduce learning at criticality (LaC), a reinforcement learning scheme that tunes large language models (LLMs) to a sharp learning transition, addressing this information scarcity. At this transition, LLMs achieve peak generalization from minimal data, exemplified by 7-digit base-7 addition, a test of nontrivial arithmetic reasoning. To elucidate this peak, we analyze a minimal concept-network model designed to capture the essence of how LLMs might link tokens. Trained on a single exemplar, this model also undergoes a sharp learning transition, which exhibits hallmarks of a second-order phase transition, notably power-law distributed solution path lengths. At this critical point, the system maximizes a "critical thinking pattern" crucial for generalization, enabled by the underlying scale-free exploration. This suggests LLMs reach peak performance by operating at criticality, where such explorative dynamics enable the extraction of underlying operational rules. We demonstrate LaC in quantum field theory: an 8B-parameter LLM, tuned to its critical point by LaC using a few exemplars of symbolic Matsubara sums, solves unseen, higher-order problems, significantly outperforming far larger models. LaC thus leverages critical phenomena, a physical principle, to empower AI for complex, data-sparse challenges in fundamental physics.
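The 7-digit base-7 addition benchmark mentioned above has an exact ground truth that is easy to generate, which is what makes it a clean probe of arithmetic generalization. A minimal sketch of such a ground-truth checker (the LaC training scheme itself is not shown):

```python
def to_base7(n: int) -> str:
    """Render a non-negative integer as a base-7 digit string."""
    if n == 0:
        return "0"
    digits = []
    while n:
        digits.append(str(n % 7))
        n //= 7
    return "".join(reversed(digits))

def base7_add(a: str, b: str) -> str:
    """Ground-truth base-7 addition for scoring model answers."""
    return to_base7(int(a, 7) + int(b, 7))

result = base7_add("6666666", "1")  # carries propagate through all seven digits
```

Comparing a model's output string against `base7_add` gives an unambiguous reward signal for reinforcement learning.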
Funding: This work was supported by Project 2024JJ2074 of the Natural Science Foundation of Hunan Province, China; Project 22376221 of the National Natural Science Foundation of China; and Project 2023QNRC001 of the Young Elite Scientists Sponsorship Program by CAST, China.
Abstract: Driven by rapid technological advancements and economic growth, mineral extraction and metal refining have increased dramatically, generating huge volumes of tailings and mine waste (TMWs). Investigating the morphological fractions of heavy metals and metalloids (HMMs) in TMWs is key to evaluating their leaching potential into the environment; however, traditional experiments are time-consuming and labor-intensive. In this study, 10 machine learning (ML) algorithms were used and compared for rapidly predicting the morphological fractions of HMMs in TMWs. A dataset comprising 2376 data points was used, with mineral composition, elemental properties, and total concentration as inputs and the concentration of each morphological fraction as output. After grid search optimization, the extra trees model performed best, achieving coefficients of determination (R2) of 0.946 and 0.942 on the validation and test sets, respectively. Electronegativity was found to have the greatest impact on the morphological fraction. The models' performance was further enhanced by applying an ensemble method to the top three ML models: gradient boosting decision tree, extra trees, and categorical boosting. Overall, the proposed framework can accurately predict the concentrations of different morphological fractions of HMMs in TMWs. This approach can minimize detection time and aid in the safe management and recovery of TMWs.
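The two steps described above, grid search over hyperparameters followed by ensembling the top-ranked models, can be sketched with toy stand-in models. Here the "hyperparameter" is a polynomial degree and the "ensemble" averages validation predictions of the top three; the paper's actual tree-based learners and search grids are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0.0, 1.0, 60)
y = 3.0 * x**2 + rng.normal(0.0, 0.05, size=x.size)  # synthetic target
train, val = slice(0, 40), slice(40, 60)

def fit_predict(degree):
    """Fit a polynomial of the given degree on the train split, predict on validation."""
    coef = np.polyfit(x[train], y[train], degree)
    return np.polyval(coef, x[val])

# Step 1: grid search over the hyperparameter (here, polynomial degree).
grid = [1, 2, 3, 4, 5]
scores = {d: float(np.mean((y[val] - fit_predict(d)) ** 2)) for d in grid}
top3 = sorted(grid, key=scores.get)[:3]

# Step 2: simple ensemble = average of the top-3 models' predictions.
ensemble_pred = np.mean([fit_predict(d) for d in top3], axis=0)
ensemble_mse = float(np.mean((y[val] - ensemble_pred) ** 2))
```

Averaging several strong but differently-biased models often stabilizes predictions, which is the motivation for the paper's top-3 ensemble.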
Funding: Funded by the Beijing Natural Science Foundation-Haidian Original Innovation Joint Fund (Grant No. L222103) and the National Natural Science Foundation of China (Grant No. 72174012).
Abstract: Objectives: This study aimed to develop and validate a stroke risk prediction model based on machine learning (ML) and regional healthcare big data, and to determine whether it improves prediction performance compared with a conventional Logistic Regression (LR) model. Methods: This retrospective cohort study analyzed data from the CHinese Electronic health Records Research in Yinzhou (CHERRY) platform (2015–2021). We included adults aged 18–75 who had established records on the platform before 2015. Individuals with pre-existing stroke, missing key data, or excessive missingness (>30%) were excluded. Data on demographics, clinical measures, lifestyle factors, comorbidities, and family history of stroke were collected. Variable selection was performed in two stages: an initial screening via univariate analysis, followed by prioritization of variables based on clinical relevance and actionability, with a focus on those that are modifiable. Stroke prediction models were developed using LR and four ML algorithms: Decision Tree (DT), Random Forest (RF), eXtreme Gradient Boosting (XGBoost), and Back Propagation Neural Network (BPNN). The dataset was split 7:3 into training and validation sets. Performance was assessed using receiver operating characteristic (ROC) curves, calibration, and confusion matrices, and the cutoff value for classifying risk groups was determined by Youden's index. Results: The study cohort comprised 92,172 participants with 436 incident stroke cases (incidence rate: 474/100,000 person-years). Ultimately, 13 predictor variables were included. RF achieved the highest accuracy (0.935), precision (0.923), sensitivity (recall: 0.947), and F1 score (0.935). Model evaluation demonstrated superior predictive performance of the ML algorithms over conventional LR, with training/validation areas under the curve (AUCs) of 0.777/0.779 (LR), 0.921/0.918 (BPNN), 0.988/0.980 (RF), 0.980/0.955 (DT), and 0.962/0.958 (XGBoost). Calibration analysis revealed a better fit for the DT, LR, and BPNN models than for the RF and XGBoost models. Based on the optimal performance of the RF model, the factors in descending order of importance were: hypertension, age, diabetes, systolic blood pressure, waist circumference, high-density lipoprotein cholesterol, fasting blood glucose, physical activity, BMI, low-density lipoprotein cholesterol, total cholesterol, dietary habits, and family history of stroke. Using Youden's index to set the optimal cutoff, the RF model stratified individuals into high-risk (>0.789) and low-risk (≤0.789) groups with robust discrimination. Conclusions: The ML-based prediction models demonstrated superior performance metrics compared with conventional LR, and RF was the optimal prediction model, providing an effective tool for risk stratification in primary stroke prevention in community settings.
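The RF-plus-Youden's-index procedure in this abstract can be sketched as follows. The cohort below is synthetic (a rare binary outcome mimicking incident stroke); the 7:3 split matches the study, while all model settings are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(42)

# Synthetic stand-in for the cohort: continuous predictors, rare positive class.
n = 5000
X = rng.standard_normal((n, 5))
logit = 1.5 * X[:, 0] + X[:, 1] - 4.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# 7:3 split into training and validation sets, as in the study.
X_tr, X_val, y_tr, y_val = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
p_val = rf.predict_proba(X_val)[:, 1]
print("validation AUC:", roc_auc_score(y_val, p_val))

# Youden's index J = sensitivity + specificity - 1 = TPR - FPR;
# the threshold maximizing J becomes the high-/low-risk cutoff.
fpr, tpr, thresholds = roc_curve(y_val, p_val)
cutoff = thresholds[np.argmax(tpr - fpr)]
print("optimal cutoff:", cutoff)
high_risk = p_val > cutoff
```

The study's reported cutoff of 0.789 is simply the probability at which J peaks on its own validation data; any synthetic run will land elsewhere.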
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 12388101, 12372288, U23A2069, and 92152301).
Abstract: Machine learning (ML) techniques have emerged as powerful tools for improving the predictive capabilities of Reynolds-averaged Navier-Stokes (RANS) turbulence models in separated flows. This improvement is achieved by leveraging complex ML models, such as those developed using field inversion and machine learning (FIML), to dynamically adjust the constants within the baseline RANS model. However, the ML models often overlook the fundamental calibrations of the RANS turbulence model. Consequently, the basic calibration of the baseline RANS model is disrupted, degrading accuracy, particularly in basic wall-attached flows outside the training set. To address this issue, a modified version of the Spalart-Allmaras (SA) turbulence model, known as the Rubber-band SA (RBSA) model, has recently been proposed. This modification identifies constraints related to basic wall-attached flows and embeds them directly into the model. It has been shown that no matter how the parameters of the RBSA model are adjusted as constants throughout the flow field, its accuracy in wall-attached flows remains unaffected. In this paper, we propose a new constraint for the RBSA model that better safeguards the law of the wall in extreme conditions where the model parameter is adjusted dramatically. The resulting model is called the RBSA-poly model. We then show that, when combined with FIML augmentation, the RBSA-poly model effectively preserves the accuracy of simple wall-attached flows, even when the adjusted parameters become functions of local flow variables rather than constants. A comparative analysis with the FIML-augmented original SA model reveals that the augmented RBSA-poly model reduces the error in basic wall-attached flows by 50% while maintaining comparable accuracy in trained separated flows. These findings confirm the effectiveness of using FIML in conjunction with the RBSA model, offering superior accuracy retention in cardinal flows.
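For reference, the law of the wall that the RBSA-poly constraint is meant to safeguard is, in its standard logarithmic form (the paper's exact constraint may differ; the constants below are the textbook values, not taken from this work):

```latex
u^+ = \frac{1}{\kappa}\,\ln y^+ + B,
\qquad u^+ \equiv \frac{u}{u_\tau},
\qquad y^+ \equiv \frac{y\,u_\tau}{\nu},
```

with von Kármán constant κ ≈ 0.41 and intercept B ≈ 5.0. A model "safeguards" this law if tuning its parameters cannot shift the predicted log-layer slope 1/κ away from its calibrated value.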