Journal Articles
20,128 articles found
1. Processing map for oxide dispersion strengthening Cu alloys based on experimental results and machine learning modelling
Authors: Le Zong, Lingxin Li, Lantian Zhang, Xuecheng Jin, Yong Zhang, Wenfeng Yang, Pengfei Liu, Bin Gan, Liujie Xu, Yuanshen Qi, Wenwen Sun. International Journal of Minerals, Metallurgy and Materials, 2026, Issue 1, pp. 292-305 (14 pages)
Oxide dispersion strengthened (ODS) alloys are extensively used owing to the high thermostability and creep strength contributed by uniformly dispersed fine oxide particles. However, these strengthening particles also deteriorate processability, so it is of great importance to establish accurate processing maps that guide thermomechanical processing and enhance formability. In this study, we applied a particle swarm optimization-based back propagation artificial neural network model to predict the high-temperature flow behavior of 0.25wt% Al2O3 particle-reinforced Cu alloys, and compared its accuracy with that of an Arrhenius-type constitutive model and a plain back propagation artificial neural network model. To train these models, we obtained the raw data by fabricating ODS Cu alloys using the internal oxidation and reduction method and conducting systematic hot compression tests between 400 and 800 °C at strain rates of 10^(-2)-10 s^(-1). Finally, processing maps for the ODS Cu alloys were proposed by combining processing parameters, mechanical behavior, and microstructure characterization; the models achieved coefficients of determination above 99%.
Keywords: oxide dispersion strengthened Cu alloys; constitutive model; machine learning; hot deformation; processing maps
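The abstract benchmarks the neural-network models against an Arrhenius-type constitutive model. A minimal sketch of the standard hyperbolic-sine Arrhenius form, inverted through the Zener-Hollomon parameter; the material constants (A, alpha, n, Q) below are illustrative placeholders, not values fitted in the paper:

```python
import math

def arrhenius_flow_stress(strain_rate, T_kelvin, A=1.0e12, alpha=0.01, n=5.0, Q=350e3):
    """Flow stress (MPa) from the hyperbolic-sine Arrhenius model
        strain_rate = A * sinh(alpha*sigma)^n * exp(-Q/(R*T)),
    inverted via the Zener-Hollomon parameter Z = strain_rate * exp(Q/(R*T)):
        sigma = (1/alpha) * asinh((Z/A)^(1/n)).
    All constants here are hypothetical placeholders."""
    R = 8.314  # gas constant, J/(mol*K)
    Z = strain_rate * math.exp(Q / (R * T_kelvin))
    return (1.0 / alpha) * math.asinh((Z / A) ** (1.0 / n))

# Predicted stress rises with strain rate and falls with temperature,
# the qualitative trend hot-compression data should show.
s_slow_hot = arrhenius_flow_stress(0.01, 1073.15)   # 800 degC, 10^-2 s^-1
s_fast_hot = arrhenius_flow_stress(10.0, 1073.15)   # 800 degC, 10 s^-1
s_slow_cold = arrhenius_flow_stress(0.01, 673.15)   # 400 degC, 10^-2 s^-1
```

The test-condition ranges mirror those quoted in the abstract (400-800 °C, 10^(-2)-10 s^(-1)); everything else is a sketch.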
2. Deep Learning for Brain Tumor Segmentation and Classification: A Systematic Review of Methods and Trends
Authors: Ameer Hamza, Robertas Damaševicius. Computers, Materials & Continua, 2026, Issue 1, pp. 132-172 (41 pages)
This systematic review comprehensively examines and compares deep learning methods for brain tumor segmentation and classification using MRI and other imaging modalities, focusing on recent trends from 2022 to 2025. The primary objective is to evaluate methodological advancements, model performance, dataset usage, and existing challenges in developing clinically robust AI systems. We included peer-reviewed journal articles and high-impact conference papers published between 2022 and 2025, written in English, that proposed or evaluated deep learning methods for brain tumor segmentation and/or classification. Excluded were non-open-access publications, books, and non-English articles. A structured search was conducted across Scopus, Google Scholar, Wiley, and Taylor & Francis, with the last search performed in August 2025. Risk of bias was not formally quantified but was considered during full-text screening based on dataset diversity, validation methods, and availability of performance metrics. We used narrative synthesis and tabular benchmarking to compare performance metrics (e.g., accuracy, Dice score) across model types (CNN, Transformer, hybrid), imaging modalities, and datasets. A total of 49 studies were included (43 journal articles and 6 conference papers). These studies spanned over 9 public datasets (e.g., BraTS, Figshare, REMBRANDT, MOLAB) and utilized a range of imaging modalities, predominantly MRI. Hybrid models, especially ResViT and UNetFormer, consistently achieved high performance, with classification accuracy exceeding 98% and segmentation Dice scores above 0.90 across multiple studies. Transformers and hybrid architectures showed increasing adoption post-2023. Many studies lacked external validation and were evaluated only on a few benchmark datasets, raising concerns about generalizability and dataset bias. Few studies addressed clinical interpretability or uncertainty quantification. Despite promising results, particularly for hybrid deep learning models, widespread clinical adoption remains limited by lack of validation, interpretability concerns, and real-world deployment barriers.
Keywords: brain tumor segmentation; brain tumor classification; deep learning; vision transformers; hybrid models
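The segmentation benchmark metric quoted throughout the review, the Dice score, is simple to compute on binary masks. A self-contained sketch on flat 0/1 sequences (the example masks are toy data):

```python
def dice_score(pred, truth):
    """Dice = 2|P intersect T| / (|P| + |T|) for binary masks
    given as flat 0/1 sequences; defined as 1.0 when both masks are empty."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 1.0 if total == 0 else 2.0 * inter / total

pred  = [1, 1, 1, 0, 0, 1, 0, 0]  # toy predicted tumor mask, flattened
truth = [1, 1, 0, 0, 0, 1, 1, 0]  # toy ground-truth mask
# intersection = 3, |P| = 4, |T| = 4, so Dice = 6/8
print(dice_score(pred, truth))  # 0.75
```

In practice the masks come from flattening 2D/3D label volumes; the arithmetic is identical.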
3. A novel deep learning-based framework for forecasting
Authors: Congqi Cao, Ze Sun, Lanshu Hu, Liujie Pan, Yanning Zhang. Atmospheric and Oceanic Science Letters, 2026, Issue 1, pp. 22-26 (5 pages)
Deep learning-based methods have become alternatives to traditional numerical weather prediction systems, offering faster computation and the ability to utilize large historical datasets. However, applying deep learning to medium-range regional weather forecasting with limited data remains a significant challenge. In this work, three key solutions are proposed: (1) motivated by the need to improve model performance in data-scarce regional forecasting scenarios, the authors innovatively apply semantic segmentation models to better capture spatiotemporal features and improve prediction accuracy; (2) recognizing the challenge of overfitting and the inability of traditional noise-based data augmentation to effectively enhance model robustness, a novel learnable Gaussian noise mechanism is introduced that allows the model to adaptively optimize perturbations for different locations, ensuring more effective learning; and (3) to address error accumulation in autoregressive prediction, as well as the learning difficulty and lack of intermediate data utilization in one-shot prediction, the authors propose a cascade prediction approach that resolves these problems while significantly improving forecasting performance. The method achieves a competitive result in the East China Regional AI Medium Range Weather Forecasting Competition. Ablation experiments further validate the effectiveness of each component, highlighting their contributions to enhancing prediction performance.
Keywords: weather forecasting; deep learning; semantic segmentation models; learnable Gaussian noise; cascade prediction
4. Construction and validation of machine learning-based predictive model for colorectal polyp recurrence one year after endoscopic mucosal resection (cited by 2)
Authors: Yi-Heng Shi, Jun-Liang Liu, Cong-Cong Cheng, Wen-Ling Li, Han Sun, Xi-Liang Zhou, Hong Wei, Su-Juan Fei. World Journal of Gastroenterology, 2025, Issue 11, pp. 46-62 (17 pages)
BACKGROUND: Colorectal polyps are precancerous lesions of colorectal cancer. Early detection and resection of colorectal polyps can effectively reduce the mortality of colorectal cancer. Endoscopic mucosal resection (EMR) is a common polypectomy procedure in clinical practice, but it has a high postoperative recurrence rate. Currently, there is no predictive model for the recurrence of colorectal polyps after EMR.
AIM: To construct and validate a machine learning (ML) model for predicting the risk of colorectal polyp recurrence one year after EMR.
METHODS: This study retrospectively collected data from 1694 patients at three medical centers in Xuzhou. Additionally, 166 patients were enrolled to form a prospective validation set. Feature variables were screened using univariate and multivariate logistic regression analyses, and five ML algorithms were used to construct the predictive models. The optimal models were evaluated based on different performance metrics. Decision curve analysis (DCA) and SHapley Additive exPlanation (SHAP) analysis were performed to assess clinical applicability and predictor importance.
RESULTS: Multivariate logistic regression analysis identified 8 independent risk factors for colorectal polyp recurrence one year after EMR (P < 0.05). Among the models, eXtreme Gradient Boosting (XGBoost) demonstrated the highest area under the curve (AUC) in the training set, internal validation set, and prospective validation set, with AUCs of 0.909 (95%CI: 0.89-0.92), 0.921 (95%CI: 0.90-0.94), and 0.963 (95%CI: 0.94-0.99), respectively. DCA indicated favorable clinical utility for the XGBoost model. SHAP analysis identified smoking history, family history, and age as the top three most important predictors.
CONCLUSION: The XGBoost model has the best predictive performance and can assist clinicians in providing individualized colonoscopy follow-up recommendations.
Keywords: colorectal polyps; machine learning; predictive model; risk factors; SHapley Additive exPlanation
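The AUCs reported above can be computed from raw risk scores with the rank-based (Mann-Whitney) formulation, without any curve plotting. A small self-contained version; the scores below are toy stand-ins for model outputs, not data from the study:

```python
def auc(scores_pos, scores_neg):
    """AUC = P(score of a positive > score of a negative),
    ties counted as 1/2 (equivalent to Mann-Whitney U / (n_pos * n_neg))."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical predicted recurrence risks: a good model ranks
# recurrent polyps above non-recurrent ones.
pos = [0.9, 0.8, 0.75]        # patients with recurrence
neg = [0.3, 0.5, 0.8, 0.1]    # patients without recurrence
print(auc(pos, neg))  # 0.875
```

The O(n_pos * n_neg) double loop is fine for small sets; production code would use a sort-based O(n log n) variant.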
5. Application of machine learning in the research progress of post-kidney transplant rejection
Authors: Yun-Peng Guo, Quan Wen, Yu-Yang Wang, Gai Hang, Bo Chen. World Journal of Transplantation, 2026, Issue 1, pp. 129-144 (16 pages)
Post-kidney transplant rejection is a critical factor influencing transplant success rates and the survival of transplanted organs. With the rapid advancement of artificial intelligence technologies, machine learning (ML) has emerged as a powerful data analysis tool, widely applied in the prediction, diagnosis, and mechanistic study of kidney transplant rejection. This mini-review systematically summarizes recent applications of ML techniques in post-kidney transplant rejection, covering the construction of predictive models, identification of biomarkers, analysis of pathological images, assessment of immune cell infiltration, and formulation of personalized treatment strategies. By integrating multi-omics data and clinical information, ML has significantly enhanced the accuracy of early rejection diagnosis and the capability for prognostic evaluation, driving the development of precision medicine in kidney transplantation. Furthermore, this article discusses the challenges faced by existing research and potential future directions, providing a theoretical basis and technical references for related studies.
Keywords: machine learning; kidney transplant; rejection; predictive models; biomarkers; pathological image analysis; immune cell infiltration; precision medicine
6. TELL-Me: A time-series-decomposition-based ensembled lightweight learning model for diverse battery prognosis and diagnosis (cited by 1)
Authors: Kun-Yu Liu, Ting-Ting Wang, Bo-Bo Zou, Hong-Jie Peng, Xinyan Liu. Journal of Energy Chemistry, 2025, Issue 7, pp. 1-8 (8 pages)
As batteries become increasingly essential for energy storage technologies, battery prognosis and diagnosis remain central to ensuring reliable operation and effective management, as well as to aiding in-depth investigation of degradation mechanisms. However, dynamic operating conditions, cell-to-cell inconsistencies, and limited availability of labeled data pose significant challenges to accurate and robust prognosis and diagnosis. Herein, we introduce a time-series-decomposition-based ensembled lightweight learning model (TELL-Me), which employs a synergistic dual-module framework to facilitate accurate and reliable forecasting. The feature module formulates features with physical implications and sheds light on battery aging mechanisms, while the gradient module monitors capacity degradation rates and captures the aging trend. TELL-Me achieves high accuracy in end-of-life prediction using minimal historical data from a single battery without requiring an offline training dataset, and demonstrates impressive generality and robustness across various operating conditions and battery types. Additionally, by correlating feature contributions with degradation mechanisms across different datasets, TELL-Me is endowed with a diagnostic ability that not only enhances prediction reliability but also provides critical insights into the design and optimization of next-generation batteries.
Keywords: battery prognosis; interpretable machine learning; degradation diagnosis; ensemble learning; online prediction; lightweight model
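The premise behind the method's name, splitting a capacity series into a smooth degradation trend plus local fluctuation, can be illustrated with a plain centred moving-average decomposition. This is a generic sketch of time-series decomposition, not the paper's actual TELL-Me modules; window size and data are made up:

```python
def decompose(series, window=3):
    """Split a series into a centred moving-average trend and a residual,
    shrinking the window at the boundaries so every point gets a trend value."""
    half = window // 2
    trend, resid = [], []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        m = sum(series[lo:hi]) / (hi - lo)
        trend.append(m)
        resid.append(series[i] - m)
    return trend, resid

capacity = [1.00, 0.99, 0.985, 0.97, 0.96, 0.945]  # toy normalized capacity fade curve
trend, resid = decompose(capacity)
# trend is monotonically decreasing; trend + resid reconstructs the series exactly
```

A prognosis model would then fit the trend (degradation rate) and treat the residual as condition-dependent noise.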
7. Comparative analysis of machine learning and statistical models for cotton yield prediction in major growing districts of Karnataka, India (cited by 1)
Authors: THIMMEGOWDA M.N., MANJUNATHA M.H., LINGARAJ H., SOUMYA D.V., JAYARAMAIAH R., SATHISHA G.S., NAGESHA L. Journal of Cotton Research, 2025, Issue 1, pp. 40-60 (21 pages)
Background: Cotton is one of the most important commercial crops after food crops, especially in countries like India, where it is grown extensively under rainfed conditions. Because of its use in multiple industries, such as the textile, medicine, and automobile industries, it has great commercial importance. The crop's performance is strongly influenced by prevailing weather dynamics. As the climate changes, assessing how weather changes affect crop performance is essential. Among the various available techniques, crop models are the most effective and widely used tools for predicting yields.
Results: This study compares statistical and machine learning models to assess their ability to predict cotton yield across major producing districts of Karnataka, India, utilizing a long-term dataset spanning 1990 to 2023 that includes yield and weather factors. Artificial neural networks (ANNs) performed best, with acceptable yield deviations within ±10% during both the vegetative stage (F1) and mid stage (F2). The model evaluation metrics, root mean square error (RMSE), normalized root mean square error (nRMSE), and modelling efficiency (EF), were also within acceptance limits in most districts. Furthermore, the tested ANN model was used to assess the importance of the dominant weather factors influencing crop yield in each district. Specifically, morning relative humidity as an individual parameter, and its interaction with maximum and minimum temperature, had a major influence on cotton yield in most of the districts where yield was predicted. These differences highlight the differential interactions of weather factors in each district for cotton yield formation, reflecting the individual response to each weather factor under different soils and management conditions across the major cotton growing districts of Karnataka.
Conclusions: Compared with statistical models, machine learning models such as ANNs proved more efficient in forecasting cotton yield because of their ability to consider the interactive effects of weather factors on yield formation at different growth stages. This highlights the suitability of ANNs for yield forecasting in rainfed conditions and for studying the relative impacts of weather factors on yield. The study thus provides valuable insights to support stakeholders in planning effective crop management strategies and formulating relevant policies.
Keywords: cotton; machine learning models; statistical models; yield forecast; artificial neural network; weather variables
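The three evaluation metrics named in the abstract (RMSE, nRMSE, and modelling efficiency EF) are standard and compact to implement. A sketch with made-up observed/predicted yields; EF here is the usual Nash-Sutcliffe form, which is the common definition of modelling efficiency:

```python
def rmse(obs, pred):
    """Root mean square error."""
    return (sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)) ** 0.5

def nrmse(obs, pred):
    """RMSE normalized by the observed mean, expressed in percent."""
    return 100.0 * rmse(obs, pred) / (sum(obs) / len(obs))

def ef(obs, pred):
    """Nash-Sutcliffe modelling efficiency: 1 is perfect,
    0 means no better than predicting the observed mean."""
    mean_o = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_o) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

obs  = [520.0, 480.0, 610.0, 455.0]   # hypothetical district yields, kg/ha
pred = [500.0, 470.0, 590.0, 475.0]   # hypothetical model predictions
```

Acceptance limits (e.g., nRMSE below some percentage, EF near 1) are then applied per district, as the study describes.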
8. Design of a Private Cloud Platform for Distributed Logging Big Data Based on a Unified Learning Model of Physics and Data (cited by 1)
Authors: Cheng Xi, Fu Haicheng, Tursyngazy Mahabbat. Applied Geophysics, 2025, Issue 2, pp. 499-510, 560 (13 pages)
Well logging technology has accumulated a large amount of historical data through four generations of technological development, which forms the basis of well logging big data and digital assets. However, the value of these data has not been well stored, managed, and mined. The development of cloud computing technology provides a rare opportunity for a logging big data private cloud. Meanwhile, the traditional petrophysical evaluation and interpretation model has encountered great challenges when facing new evaluation objects, and research on integrating distributed storage, processing, and learning functions into a logging big data private cloud has not yet been carried out. This work establishes a distributed logging big data private cloud platform centered on a unified learning model, which achieves distributed storage and processing of logging big data and facilitates the learning of novel knowledge patterns via a unified logging learning model integrating physical simulation and data models in a large-scale function space, thus addressing the geo-engineering evaluation problem of geothermal fields. Based on the research idea of "logging big data cloud platform - unified logging learning model - large function space - knowledge learning & discovery - application", the theoretical foundation of the unified learning model, the cloud platform architecture, data storage and learning algorithms, computing power allocation and platform monitoring, platform stability, and data security are analyzed. The designed logging big data cloud platform realizes parallel distributed storage and processing of data and learning algorithms. The feasibility of constructing such a platform based on a unified learning model of physics and data is analyzed in terms of the structure, ecology, management, and security of the cloud platform. The case study shows that the logging big data cloud platform has obvious technical advantages over traditional logging evaluation methods in terms of knowledge discovery methods, sharing of data, software, and results, accuracy, speed, and handling of complexity.
Keywords: unified logging learning model; logging big data; private cloud; machine learning
9. Fault-observer-based iterative learning model predictive controller for trajectory tracking of hypersonic vehicles (cited by 2)
Authors: CUI Peng, GAO Changsheng, AN Ruoming. Journal of Systems Engineering and Electronics, 2025, Issue 3, pp. 803-813 (11 pages)
This work proposes an iterative learning model predictive control (ILMPC) approach based on an adaptive fault observer (FOBILMPC) for fault-tolerant control and trajectory tracking in air-breathing hypersonic vehicles. To increase the available control effort, this online control law applies model predictive control (MPC) built on the concept of iterative learning control (ILC). By using offline data to reduce the errors of the linearized model, the strategy effectively increases the robustness of the control system and guarantees that disturbances are suppressed. An adaptive fault observer is designed on top of the proposed ILMPC approach to enhance overall fault tolerance by estimating and compensating for actuator disturbance and fault degree. During the derivation, a linearized model of the longitudinal dynamics is established. Numerical simulations demonstrate that, compared with an offline controller, the proposed ILMPC approach reduces tracking error and speeds up convergence, making it a promising candidate for hypersonic vehicle control system design.
Keywords: hypersonic vehicle; actuator fault; tracking control; iterative learning control (ILC); model predictive control (MPC); fault observer
10. Comparative analysis of empirical and deep learning models for ionospheric sporadic E layer prediction
Authors: BingKun Yu, PengHao Tian, XiangHui Xue, Christopher J. Scott, HaiLun Ye, JianFei Wu, Wen Yi, TingDi Chen, XianKang Dou. Earth and Planetary Physics (EI, CAS), 2025, Issue 1, pp. 10-19 (10 pages)
Sporadic E (Es) layers in the ionosphere are characterized by intense plasma irregularities in the E region at altitudes of 90-130 km. Because they can significantly influence radio communications and navigation systems, accurate forecasting of Es layers is crucial for ensuring the precision and dependability of navigation satellite systems. In this study, we present Es predictions made by an empirical model and by a deep learning model, and analyze their differences comprehensively by comparing the model predictions to satellite radio occultation (RO) measurements and ground-based ionosonde observations. The deep learning model exhibited significantly better performance, as indicated by the high coefficient of correlation (r = 0.87) between RO observations and its predictions, than the empirical model (r = 0.53). This study highlights the importance of integrating artificial intelligence technology into ionosphere modelling generally, and into predicting Es layer occurrences and characteristics in particular.
Keywords: ionospheric sporadic E layer; radio occultation; ionosondes; numerical model; deep learning model; artificial intelligence
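The comparison above hinges on the coefficient of correlation between predictions and RO observations (r = 0.87 vs 0.53). Pearson's r takes only a few lines; the series below are toy occurrence-rate data, not the study's measurements:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

obs  = [0.2, 0.5, 0.9, 0.4, 0.7]    # toy observed Es occurrence rates
pred = [0.25, 0.45, 0.8, 0.5, 0.65] # toy model predictions
r = pearson_r(obs, pred)  # close to 1 for a well-correlated model
```

A degenerate (constant) series makes sx or sy zero; real code would guard that division.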
11. Development and validation of a machine learning model for diagnosis of ischemic heart disease using single-lead electrocardiogram parameters (cited by 1)
Authors: Basheer Abdullah Marzoog, Peter Chomakhidze, Daria Gognieva, Artemiy Silantyev, Alexander Suvorov, Magomed Abdullaev, Natalia Mozzhukhina, Darya Alexandrovna Filippova, Sergey Vladimirovich Kostin, Maria Kolpashnikova, Natalya Ershova, Nikolay Ushakov, Dinara Mesitskaya, Philipp Kopylov. World Journal of Cardiology, 2025, Issue 4, pp. 76-92 (17 pages)
BACKGROUND: Ischemic heart disease (IHD) impacts quality of life and has the highest mortality rate of cardiovascular diseases globally.
AIM: To compare variations in the parameters of the single-lead electrocardiogram (ECG) at rest and during physical exertion in individuals diagnosed with IHD and those without the condition, using vasodilator-induced stress computed tomography (CT) myocardial perfusion imaging as the diagnostic reference standard.
METHODS: This single-center observational study included 80 participants aged ≥ 40 years who gave informed written consent to participate. Both groups, G1 (n = 31) with and G2 (n = 49) without a post-stress-induced myocardial perfusion defect, underwent cardiologist consultation, anthropometric measurements, blood pressure and pulse rate measurement, echocardiography, cardio-ankle vascular index, bicycle ergometry, and recording of a 3-min single-lead ECG (Cardio-Qvark) before and just after bicycle ergometry, followed by CT myocardial perfusion. LASSO regression with nested cross-validation was used to find the association between Cardio-Qvark parameters and the existence of the perfusion defect. Statistical processing was performed with the R programming language v4.2, Python v3.10, and the Statistica 12 program.
RESULTS: Bicycle ergometry yielded an area under the receiver operating characteristic curve of 50.7% [95% confidence interval (CI): 0.388-0.625], specificity of 53.1% (95%CI: 0.392-0.673), and sensitivity of 48.4% (95%CI: 0.306-0.657). In contrast, the Cardio-Qvark test performed notably better, with an area under the receiver operating characteristic curve of 67% (95%CI: 0.530-0.801), specificity of 75.5% (95%CI: 0.628-0.88), and sensitivity of 51.6% (95%CI: 0.333-0.695).
CONCLUSION: The single-lead ECG has relatively higher diagnostic accuracy than bicycle ergometry when machine learning models are used, but the difference was not statistically significant. Further investigation is required to uncover the hidden capabilities of single-lead ECG in IHD diagnosis.
Keywords: ischemic heart disease; single-lead electrocardiography; computed tomography myocardial perfusion; prevention; risk factors; stress test; machine learning model
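The sensitivity and specificity figures quoted for both tests come straight from confusion-matrix counts against the CT perfusion reference. A minimal computation on hypothetical 0/1 labels (perfusion defect present) and predictions:

```python
def sens_spec(labels, preds):
    """labels/preds are 0/1 sequences; returns (sensitivity, specificity).
    Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP)."""
    tp = sum(1 for l, p in zip(labels, preds) if l == 1 and p == 1)
    fn = sum(1 for l, p in zip(labels, preds) if l == 1 and p == 0)
    tn = sum(1 for l, p in zip(labels, preds) if l == 0 and p == 0)
    fp = sum(1 for l, p in zip(labels, preds) if l == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Toy cohort: 4 with a perfusion defect, 6 without.
labels = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
preds  = [1, 1, 0, 1, 0, 0, 0, 1, 0, 0]
sens, spec = sens_spec(labels, preds)  # 3/4 and 5/6
```

The confidence intervals in the abstract would additionally require a binomial interval around each proportion.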
12. High-throughput screening of CO_(2) cycloaddition MOF catalyst with an explainable machine learning model
Authors: Xuefeng Bai, Yi Li, Yabo Xie, Qiancheng Chen, Xin Zhang, Jian-Rong Li. Green Energy & Environment (SCIE, EI, CAS), 2025, Issue 1, pp. 132-138 (7 pages)
The high porosity and tunable chemical functionality of metal-organic frameworks (MOFs) make them a promising catalyst design platform. High-throughput screening of catalytic performance is feasible now that large MOF structure databases are available. In this study, we report a machine learning model for high-throughput screening of MOF catalysts for the CO_(2) cycloaddition reaction. The descriptors for model training were judiciously chosen according to the reaction mechanism, which leads to accuracy up to 97% with the 75% quantile of the training set as the classification criterion. The feature contributions were further evaluated with SHAP and PDP analysis to provide physical understanding. 12,415 hypothetical MOF structures and 100 reported MOFs were evaluated at 100 °C and 1 bar within one day using the model, and 239 potentially efficient catalysts were discovered. Among them, MOF-76(Y) achieved the top experimental performance among the reported MOFs, in good agreement with the prediction.
Keywords: metal-organic frameworks; high-throughput screening; machine learning; explainable model; CO_(2) cycloaddition
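The classification criterion described above, labeling a material positive when its value exceeds the 75% quantile of the training set, is a one-step preprocessing operation. A stdlib-only sketch with toy catalytic yields standing in for the paper's data:

```python
from statistics import quantiles

def label_by_quantile(values, q=0.75):
    """Binary labels: 1 if a value exceeds the q-quantile of `values`, else 0.
    quantiles(n=100) returns the 1..99 percentile cut points
    (default 'exclusive' method), so the q-quantile is index int(q*100)-1."""
    cut = quantiles(values, n=100)[int(q * 100) - 1]
    return [1 if v > cut else 0 for v in values], cut

yields = [12, 35, 47, 58, 63, 71, 80, 92]  # hypothetical catalytic yields (%)
labels, cut = label_by_quantile(yields)
# only the top quarter of the toy set lands above the cut point
```

A classifier is then trained on descriptors against these labels; the quantile choice controls how selective the "efficient catalyst" class is.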
13. A Literature Review on Model Conversion, Inference, and Learning Strategies in EdgeML with TinyML Deployment
Authors: Muhammad Arif, Muhammad Rashid. Computers, Materials & Continua, 2025, Issue 4, pp. 13-64 (52 pages)
Edge Machine Learning (EdgeML) and Tiny Machine Learning (TinyML) are fast-growing fields that bring machine learning to resource-constrained devices, allowing real-time data processing and decision-making at the network's edge. However, the complexity of model conversion techniques, diverse inference mechanisms, and varied learning strategies make designing and deploying these models challenging. Additionally, deploying TinyML models on resource-constrained hardware with specific software frameworks has broadened EdgeML's applications across various sectors. These factors underscore the necessity of a comprehensive literature review, as current reviews do not systematically encompass the most recent findings on these topics. Consequently, this review provides a comprehensive overview of state-of-the-art techniques in model conversion, inference mechanisms, and learning strategies within EdgeML, and in deploying these models on resource-constrained edge devices using TinyML. It identifies 90 research articles published between 2018 and 2025, categorizing them into two main areas: (1) model conversion, inference, and learning strategies in EdgeML, and (2) deploying TinyML models on resource-constrained hardware using specific software frameworks. In the first category, the synthesis compares and critically reviews various model conversion techniques, inference mechanisms, and learning strategies. In the second category, it identifies and elaborates on the major development boards, software frameworks, sensors, and algorithms used in applications across six major sectors. The article thus provides valuable insights for researchers, practitioners, and developers, assisting them in choosing suitable model conversion techniques, inference mechanisms, learning strategies, hardware development boards, software frameworks, sensors, and algorithms tailored to their specific needs and applications.
Keywords: edge machine learning; tiny machine learning; model compression; inference; learning algorithms
14. Controlling update distance and enhancing fair trainable prototypes in federated learning under data and model heterogeneity
Authors: Kangning Yin, Zhen Ding, Xinhui Ji, Zhiguo Wang. Defence Technology, 2025, Issue 5, pp. 15-31 (17 pages)
Heterogeneous federated learning (HtFL) has gained significant attention due to its ability to accommodate diverse models and data from distributed combat units. Prototype-based HtFL methods were proposed to reduce the high communication cost of transmitting model parameters: they allow the sharing of only class representatives between heterogeneous clients while maintaining privacy. However, existing prototype learning approaches fail to take the data distribution of clients into consideration, which results in suboptimal global prototype learning and insufficient client model personalization. To address these issues, we propose a fair trainable prototype federated learning (FedFTP) algorithm, which employs a fair sampling training prototype (FSTP) mechanism and a hyperbolic space constraints (HSC) mechanism to enhance the fairness and effectiveness of prototype learning on the server in heterogeneous environments. Furthermore, a local prototype stable update (LPSU) mechanism based on contrastive learning is proposed to maintain personalization while promoting global consistency. Comprehensive experimental results demonstrate that FedFTP achieves state-of-the-art performance in HtFL scenarios.
Keywords: heterogeneous federated learning; model heterogeneity; data heterogeneity; contrastive learning
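The communication-saving idea behind prototype-based HtFL, clients exchanging per-class mean embeddings instead of model weights, reduces on the server to a weighted average per class. A toy sketch; sample-count weighting is a common convention assumed here, not the paper's FSTP/HSC mechanisms:

```python
def aggregate_prototypes(client_protos):
    """client_protos: list of (counts, protos) pairs, where counts maps
    class -> number of local samples and protos maps class -> embedding (list).
    Returns global per-class prototypes, weighted by each client's sample count."""
    sums, totals = {}, {}
    for counts, protos in client_protos:
        for cls, emb in protos.items():
            w = counts[cls]
            acc = sums.setdefault(cls, [0.0] * len(emb))
            for i, v in enumerate(emb):
                acc[i] += w * v
            totals[cls] = totals.get(cls, 0) + w
    return {cls: [v / totals[cls] for v in acc] for cls, acc in sums.items()}

# Two heterogeneous clients share only class prototypes (embeddings of length 2).
c1 = ({"a": 3, "b": 1}, {"a": [1.0, 0.0], "b": [0.0, 1.0]})
c2 = ({"a": 1, "b": 3}, {"a": [0.0, 1.0], "b": [1.0, 0.0]})
global_protos = aggregate_prototypes([c1, c2])
# class "a": (3*[1,0] + 1*[0,1]) / 4 = [0.75, 0.25]
```

Only the small prototype dictionaries cross the network, which is what makes the approach cheap regardless of each client's model architecture.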
15. Development of a Digital Model of a Gear Rotor System for Fault Diagnosis Using the Finite Element Method and Machine Learning
Authors: Anubhav Srivastava, Rajiv Tiwari. Journal of Dynamics, Monitoring and Diagnostics, 2025, Issue 2, pp. 121-136 (16 pages)
Geared-rotor systems are critical components in mechanical applications, and their performance can be severely affected by faults such as profile errors, wear, pitting, spalling, flaking, and cracks. Profile errors in gear teeth are inevitable in manufacturing and subsequently accumulate during operation. This work aims to predict the status of gear profile deviations from the gear dynamic response using a digital model of an experimental rig setup. The digital model comprises detailed CAD models and has been validated against the expected physical behavior using commercial finite element analysis software. The different profile deviations are then modeled using gear charts, and the dynamic response is captured through simulations. Features are obtained by signal processing, and various ML models are evaluated to predict the fault/no-fault condition of the gear. The best performance is achieved by an artificial neural network, with a prediction accuracy of 97.5%, confirming the strong influence of profile deviations on the dynamics of the gear rotor system.
Keywords: digital model; finite element modeling; gear profile errors; geared-rotor system; machine learning
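The pipeline the abstract describes (simulate the response, extract signal features, classify fault/no-fault with an ANN) can be illustrated with a minimal scikit-learn sketch; the synthetic signals, the fault signature, and the three features below are generic stand-ins, not the paper's rig data or feature set:

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def features(signal):
    """Time-domain features commonly used for gear fault detection."""
    rms = np.sqrt(np.mean(signal ** 2))
    return [rms, kurtosis(signal), signal.max() - signal.min()]

# Synthetic vibration: healthy = broadband noise; faulty adds periodic impulses
X, y = [], []
for _ in range(200):
    healthy = rng.normal(0.0, 1.0, 1024)
    X.append(features(healthy)); y.append(0)
    faulty = rng.normal(0.0, 1.0, 1024)
    faulty[::128] += 8.0          # impulses mimicking a damaged tooth profile
    X.append(features(faulty)); y.append(1)

X_tr, X_te, y_tr, y_te = train_test_split(np.array(X), np.array(y), random_state=0)
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                  random_state=0)).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

Periodic impulses raise the kurtosis and peak-to-peak features sharply, so even this tiny network separates the two classes well; the paper's 97.5% figure of course comes from its own validated digital model, not this toy.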
Learning-at-Criticality in Large Language Models for Quantum Field Theory and Beyond
16
Authors: Xiansheng Cai, Sihan Hu, Tao Wang, Yuan Huang, Pan Zhang, Youjin Deng, Kun Chen. Chinese Physics Letters, 2025, Issue 12, pp. 7-23 (17 pages)
Fundamental physics often confronts complex symbolic problems with few guiding exemplars or established principles. While artificial intelligence (AI) offers promise, its typical need for vast training datasets hinders its use in these information-scarce frontiers. We introduce learning at criticality (LaC), a reinforcement learning scheme that tunes large language models (LLMs) to a sharp learning transition, addressing this information scarcity. At this transition, LLMs achieve peak generalization from minimal data, exemplified by 7-digit base-7 addition, a test of nontrivial arithmetic reasoning. To elucidate this peak, we analyze a minimal concept-network model designed to capture the essence of how LLMs might link tokens. Trained on a single exemplar, this model also undergoes a sharp learning transition that exhibits hallmarks of a second-order phase transition, notably power-law distributed solution path lengths. At this critical point, the system maximizes a "critical thinking pattern" crucial for generalization, enabled by the underlying scale-free exploration. This suggests LLMs reach peak performance by operating at criticality, where such explorative dynamics enable the extraction of underlying operational rules. We demonstrate LaC in quantum field theory: an 8B-parameter LLM, tuned to its critical point by LaC using a few exemplars of symbolic Matsubara sums, solves unseen, higher-order problems, significantly outperforming far larger models. LaC thus leverages critical phenomena, a physical principle, to empower AI for complex, data-sparse challenges in fundamental physics.
Keywords: artificial intelligence; learning at criticality (LaC); symbolic problems; large language models (LLMs); reinforcement learning; fundamental physics; minimal data
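The base-7 addition task used to probe arithmetic generalization is easy to specify exactly; a reference implementation (useful for checking a model's answers, and not taken from the paper) is:

```python
def add_base7(a: str, b: str) -> str:
    """Add two base-7 numerals given as digit strings, e.g. '6666666'."""
    total = int(a, 7) + int(b, 7)    # parse each operand as base 7
    if total == 0:
        return "0"
    digits = []
    while total:                     # convert the sum back to base-7 digits
        total, r = divmod(total, 7)
        digits.append(str(r))
    return "".join(reversed(digits))
```

For example, `add_base7("6", "1")` returns `"10"` (six plus one carries in base 7), and `add_base7("6666666", "1")` returns `"10000000"`, the carry chain that makes 7-digit problems a nontrivial multi-step test.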
Machine learning model comparison and ensemble for predicting different morphological fractions of heavy metal elements in tailings and mine waste
17
Authors: FENG Yu-xin, HU Tao, ZHOU Na-na, ZHOU Min, BARKHORDARI Mohammad Sadegh, LI Ke-chao, QI Chong-chong. Journal of Central South University, 2025, Issue 9, pp. 3557-3573 (17 pages)
Driven by rapid technological advancements and economic growth, mineral extraction and metal refining have increased dramatically, generating huge volumes of tailings and mine waste (TMWs). Investigating the morphological fractions of heavy metals and metalloids (HMMs) in TMWs is key to evaluating their leaching potential into the environment; however, traditional experiments are time-consuming and labor-intensive. In this study, 10 machine learning (ML) algorithms were compared for rapidly predicting the morphological fractions of HMMs in TMWs. A dataset comprising 2376 data points was used, with mineral composition, elemental properties, and total concentration as inputs and the concentration of each morphological fraction as output. After grid search optimization, the extra trees model performed best, achieving coefficients of determination (R²) of 0.946 and 0.942 on the validation and test sets, respectively. Electronegativity was found to have the greatest impact on the morphological fraction. Performance was further enhanced by applying an ensemble method to the top three ML models: gradient boosting decision tree, extra trees, and categorical boosting. Overall, the proposed framework can accurately predict the concentrations of different morphological fractions of HMMs in TMWs, minimizing detection time and aiding the safe management and recovery of TMWs.
Keywords: tailings and mine waste; morphological fractions; model comparison; machine learning; model ensemble
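The ensemble step, averaging the top tree-based regressors, can be sketched with scikit-learn's `VotingRegressor`. Note that categorical boosting (CatBoost) is a third-party library, so a random forest stands in for it here, and the data is a synthetic stand-in for the TMW dataset:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import (ExtraTreesRegressor, GradientBoostingRegressor,
                              RandomForestRegressor, VotingRegressor)
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Synthetic stand-in for the TMW inputs (mineralogy / elemental features)
X, y = make_regression(n_samples=500, n_features=8, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Average the predictions of three tree ensembles (uniform weights)
ensemble = VotingRegressor([
    ("et", ExtraTreesRegressor(n_estimators=200, random_state=0)),
    ("gbdt", GradientBoostingRegressor(random_state=0)),
    ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
]).fit(X_tr, y_tr)

r2 = r2_score(y_te, ensemble.predict(X_te))
```

Averaging de-correlates the individual models' errors, which is the usual reason such an ensemble edges out its best single member; the paper's R² values above refer to its own dataset, not this toy.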
Development and validation of a stroke risk prediction model using regional healthcare big data and machine learning
18
Authors: Yunxia Duan, Rui Wang, Yumei Sun, Wendi Zhu, Yi Li, Na Yu, Yu Zhu, Peng Shen, Hongyu Sun. International Journal of Nursing Sciences, 2025, Issue 6, pp. 558-565, I0002 (9 pages)
Objectives: This study aimed to develop and validate a stroke risk prediction model based on machine learning (ML) and regional healthcare big data, and to determine whether it improves prediction performance compared with a conventional logistic regression (LR) model. Methods: This retrospective cohort study analyzed data from the CHinese Electronic health Records Research in Yinzhou (CHERRY) platform (2015-2021). We included adults aged 18-75 who had established records before 2015. Individuals with pre-existing stroke, missing key data, or excessive missingness (>30%) were excluded. Data on demographics, clinical measures, lifestyle factors, comorbidities, and family history of stroke were collected. Variable selection was performed in two stages: initial screening via univariate analysis, followed by prioritization based on clinical relevance and actionability, with a focus on modifiable variables. Stroke prediction models were developed using LR and four ML algorithms: decision tree (DT), random forest (RF), eXtreme Gradient Boosting (XGBoost), and back propagation neural network (BPNN). The dataset was split 7:3 into training and validation sets. Performance was assessed using receiver operating characteristic (ROC) curves, calibration, and confusion matrices, and the cutoff value for classifying risk groups was determined by Youden's index. Results: The cohort comprised 92,172 participants with 436 incident stroke cases (incidence rate: 474/100,000 person-years). Thirteen predictor variables were ultimately included. RF achieved the highest accuracy (0.935), precision (0.923), sensitivity (recall: 0.947), and F1 score (0.935). The ML algorithms showed superior predictive performance over conventional LR, with training/validation areas under the curve (AUC) of 0.777/0.779 (LR), 0.921/0.918 (BPNN), 0.988/0.980 (RF), 0.980/0.955 (DT), and 0.962/0.958 (XGBoost). Calibration analysis revealed a better fit for DT, LR, and BPNN than for RF and XGBoost. Based on the optimal RF model, the factors ranked in descending order of importance were: hypertension, age, diabetes, systolic blood pressure, waist circumference, high-density lipoprotein cholesterol, fasting blood glucose, physical activity, BMI, low-density lipoprotein cholesterol, total cholesterol, dietary habits, and family history of stroke. Using Youden's index as the optimal cutoff, the RF model stratified individuals into high-risk (>0.789) and low-risk (≤0.789) groups with robust discrimination. Conclusions: The ML-based prediction models outperformed conventional LR, and RF was the optimal prediction model, providing an effective tool for risk stratification in primary stroke prevention in community settings.
Keywords: big data; machine learning; nursing; prediction model; stroke
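The Youden's-index cutoff used to split high- and low-risk groups is a standard construction: pick the ROC threshold maximizing J = sensitivity + specificity - 1. A minimal sketch (toy scores, not the CHERRY data):

```python
import numpy as np
from sklearn.metrics import roc_curve

def youden_cutoff(y_true, y_score):
    """Threshold maximizing Youden's J = sensitivity + specificity - 1."""
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    return thresholds[np.argmax(tpr - fpr)]   # tpr - fpr equals J

# Toy example: imperfectly separated risk scores
y_true = np.array([0, 0, 0, 0, 1, 0, 1, 1, 1, 1])
y_score = np.array([0.1, 0.2, 0.3, 0.4, 0.45, 0.5, 0.6, 0.7, 0.8, 0.9])
cut = youden_cutoff(y_true, y_score)
high_risk = y_score >= cut          # stratify at the chosen cutoff
```

The `>=` comparison matches how `roc_curve` defines its thresholds, so the stratification reproduces exactly the ROC operating point that maximized J; the 0.789 cutoff in the study was obtained the same way from the RF model's predicted probabilities.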
Field inversion and machine learning based on the Rubber-Band Spalart-Allmaras Model
19
Authors: Chenyu Wu, Yufei Zhang. Theoretical & Applied Mechanics Letters, 2025, Issue 2, pp. 122-130 (9 pages)
Machine learning (ML) techniques have emerged as powerful tools for improving the predictive capabilities of Reynolds-averaged Navier-Stokes (RANS) turbulence models in separated flows. This improvement is achieved by leveraging complex ML models, such as those developed through field inversion and machine learning (FIML), to dynamically adjust the constants within the baseline RANS model. However, the ML models often overlook the fundamental calibrations of the RANS turbulence model. Consequently, the basic calibration of the baseline model is disrupted, degrading accuracy, particularly in basic wall-attached flows outside the training set. To address this issue, a modified version of the Spalart-Allmaras (SA) turbulence model, the Rubber-Band SA (RBSA) model, was recently proposed. It identifies constraints related to basic wall-attached flows and embeds them directly into the model, so that no matter how the RBSA parameters are adjusted as constants throughout the flow field, accuracy in wall-attached flows remains unaffected. In this paper, we propose a new constraint for the RBSA model that better safeguards the law of the wall in extreme conditions where the model parameters are adjusted dramatically; the resulting model is called RBSA-poly. We then show that, when combined with FIML augmentation, the RBSA-poly model preserves the accuracy of simple wall-attached flows even when the adjusted parameters become functions of local flow variables rather than constants. A comparative analysis with the FIML-augmented original SA model reveals that the augmented RBSA-poly model reduces error in basic wall-attached flows by 50% while maintaining comparable accuracy in trained separated flows. These findings confirm the effectiveness of combining FIML with the RBSA model, offering superior accuracy retention in cardinal flows.
Keywords: turbulence modeling; field inversion; constrained recalibration; machine learning
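The FIML workflow the abstract builds on has two stages: infer pointwise model corrections from reference data (field inversion), then regress them on local flow features (machine learning). It can be illustrated on a toy one-dimensional "model"; everything here is a schematic stand-in, not the SA or RBSA equations:

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.ensemble import RandomForestRegressor

# Toy "model": y(x; beta) = beta * x^2; the "truth" has extra x-dependence
x = np.linspace(0.1, 1.0, 20)
y_truth = (1.0 + 0.5 * x) * x ** 2        # stand-in for reference data

def loss(beta):
    """Data mismatch plus a weak prior keeping beta near its baseline of 1."""
    return np.sum((beta * x ** 2 - y_truth) ** 2) \
        + 1e-6 * np.sum((beta - 1.0) ** 2)

# Field inversion: infer a pointwise correction field beta(x)
beta_opt = minimize(loss, np.ones_like(x), method="L-BFGS-B").x

# Machine learning: regress the inferred corrections on local "flow features"
ml = RandomForestRegressor(random_state=0).fit(x.reshape(-1, 1), beta_opt)
```

The inversion recovers beta ≈ 1 + 0.5x, and the regressor turns that field into a reusable closure; the RBSA constraint discussed above amounts to restricting what such a learned beta is allowed to do in wall-attached flows.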