Journal Articles
1,443 articles found
Development and validation of machine learning-based in-hospital mortality predictive models for acute aortic syndrome in emergency departments
1
Authors: Yuanwei Fu, Yilan Yang, Hua Zhang, Daidai Wang, Qiangrong Zhai, Lanfang Du, Nijiati Muyesai, Yanxia Gao, Qingbian Ma. World Journal of Emergency Medicine, 2026(1): 43-49 (7 pages).
BACKGROUND: This study aims to develop and validate a machine learning-based in-hospital mortality predictive model for acute aortic syndrome (AAS) in the emergency department (ED) and to derive a simplified version suitable for rapid clinical application. METHODS: In this multi-center retrospective cohort study, AAS patient data from three hospitals were analyzed. The modeling cohort included data from the First Affiliated Hospital of Zhengzhou University and the People's Hospital of Xinjiang Uygur Autonomous Region, with Peking University Third Hospital data serving as the external test set. Four machine learning algorithms, namely logistic regression (LR), multilayer perceptron (MLP), Gaussian naive Bayes (GNB), and random forest (RF), were used to develop predictive models based on 34 early-accessible clinical variables. A simplified model was then derived from five key variables (Stanford type, pericardial effusion, asymmetric peripheral arterial pulsation, decreased bowel sounds, and dyspnea) selected via Least Absolute Shrinkage and Selection Operator (LASSO) regression to improve ED applicability. RESULTS: A total of 929 patients were included in the modeling cohort, and 210 were included in the external test set. The four machine learning models based on 34 clinical variables achieved internal and external validation AUCs of 0.85-0.90 and 0.73-0.85, respectively. The simplified model incorporating the five key variables demonstrated internal and external validation AUCs of 0.71-0.86 and 0.75-0.78, respectively. Both models showed robust calibration and predictive stability across datasets. CONCLUSION: Both machine learning-based models demonstrated sound predictive performance and extrapolation capability.
Keywords: emergency department; acute aortic syndrome; mortality; predictive model; machine learning algorithms
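The simplified five-variable model described in the abstract lends itself to a bedside-style logistic score. The sketch below only illustrates the form of such a score; the coefficients and intercept are hypothetical placeholders, since the abstract does not report the fitted values.

```python
import math

# Hypothetical coefficients for the five key variables named in the abstract
# (Stanford type A, pericardial effusion, asymmetric peripheral arterial
# pulsation, decreased bowel sounds, dyspnea). Values are illustrative only.
COEFFS = {
    "stanford_type_a": 1.2,
    "pericardial_effusion": 0.9,
    "asymmetric_pulsation": 0.7,
    "decreased_bowel_sounds": 0.6,
    "dyspnea": 0.5,
}
INTERCEPT = -3.0

def mortality_risk(findings: dict) -> float:
    """Logistic score: p = 1 / (1 + exp(-(b0 + sum(b_i * x_i))))."""
    z = INTERCEPT + sum(COEFFS[k] * int(bool(findings.get(k, False)))
                        for k in COEFFS)
    return 1.0 / (1.0 + math.exp(-z))

low = mortality_risk({})                        # no risk features present
high = mortality_risk({k: True for k in COEFFS})  # all five features present
```

With the real published coefficients substituted in, such a score can be computed at the bedside without any ML runtime, which is the point of deriving the simplified model.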
Flood predictions from metrics to classes by multiple machine learning algorithms coupling with clustering-deduced membership degree
2
Authors: ZHAI Xiaoyan, ZHANG Yongyong, XIA Jun, ZHANG Yongqiang, TANG Qiuhong, SHAO Quanxi, CHEN Junxu, ZHANG Fan. Journal of Geographical Sciences, 2026(1): 149-176 (28 pages).
Accurate prediction of flood events is important for flood control and risk management. Machine learning techniques have contributed greatly to advances in flood prediction, and existing studies have mainly focused on predicting flood resource variables using single or hybrid machine learning techniques. However, class-based flood prediction, which can aid in quickly diagnosing comprehensive flood characteristics and proposing targeted management strategies, has rarely been investigated. This study proposes an approach for predicting flood regime metrics and event classes that couples machine learning algorithms with clustering-deduced membership degrees. Five algorithms were adopted for this exploration. Results showed that the class membership degrees accurately determined event classes, with class hit rates of up to 100% against the four classes clustered from nine regime metrics. The nonlinear algorithms (Random Forest and least squares-Support Vector Machine) outperformed the linear techniques (Multiple Linear Regression and Stepwise Regression) in predicting flood regime metrics. The proposed approach predicted flood event classes well, with average class hit rates of 66.0%-85.4% and 47.2%-76.0% in the calibration and validation periods, respectively, particularly for slow and late flood events. Its predictive capability for flood regime metrics and classes was considerably stronger than that of the hydrological modeling approach.
Keywords: flood regime metrics; class prediction; machine learning algorithms; hydrological model
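The clustering-deduced membership degrees in the abstract above can be illustrated with the standard fuzzy-clustering membership formula: an event's membership in each class falls off with its distance to that class's centroid, and the event is assigned to the class with the highest membership. The centroids and the fuzzifier m below are invented for illustration.

```python
# Inverse-distance membership degrees (fuzzy c-means form) for class
# assignment of a new flood event in regime-metric space.
def memberships(x, centroids, m=2.0):
    d = [max(1e-12, sum((xi - ci) ** 2 for xi, ci in zip(x, c)) ** 0.5)
         for c in centroids]
    w = [1.0 / dist ** (2.0 / (m - 1.0)) for dist in d]
    s = sum(w)
    return [wi / s for wi in w]              # degrees sum to 1

centroids = [(0.2, 0.1), (0.8, 0.9), (0.5, 0.5)]  # hypothetical class centroids
u = memberships((0.75, 0.85), centroids)
predicted_class = max(range(len(u)), key=lambda i: u[i])
```

The class hit rate reported in the abstract then amounts to how often this argmax matches the class assigned by clustering the observed regime metrics.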
Adaptive learning algorithm based on mixture Gaussian background (Cited: 9)
3
Authors: Zha Yufei, Bi Duyan. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2007(2): 369-376 (8 pages).
The key problem of the adaptive mixture background model is that the parameters must adapt to the input data. To address this problem, a new method is proposed. Firstly, the recursive equations are inferred based on the maximum likelihood rule. Secondly, the forgetting factor and learning rate factor are redefined, and more general formulations are obtained by analyzing their practical functions. Lastly, the convergence of the proposed algorithm is proved: the estimation converges to a local maximum of the data likelihood function according to stochastic approximation theory. Experiments show that the proposed learning algorithm outperforms earlier ones in both convergence rate and accuracy.
Keywords: mixture Gaussian model; background model; learning algorithm
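The recursive parameter updates studied above can be sketched for a single Gaussian component: each new pixel value nudges the running mean and variance by a learning-rate factor. This is a minimal illustration of the recursion only, not the paper's full mixture update with its redefined forgetting factor.

```python
# Online update of one Gaussian background component with learning rate rho.
def update(mu, var, x, rho=0.05):
    mu_new = (1.0 - rho) * mu + rho * x
    var_new = (1.0 - rho) * var + rho * (x - mu_new) ** 2
    return mu_new, var_new

mu, var = 0.0, 1.0
for x in [5.0] * 200:          # a stream of identical observations
    mu, var = update(mu, var, x)
# mu converges toward 5.0 and var shrinks, as the convergence analysis predicts
```

The convergence result in the paper formalizes exactly this behavior: with a suitable learning-rate schedule, the estimates settle at a local maximum of the data likelihood.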
Adaptive Multi-Learning Cooperation Search Algorithm for Photovoltaic Model Parameter Identification
4
Authors: Xu Chen, Shuai Wang, Kaixun He. Computers, Materials & Continua, 2025(10): 1779-1806 (28 pages).
Accurate and reliable photovoltaic (PV) modeling is crucial for the performance evaluation, control, and optimization of PV systems. However, existing methods for PV parameter identification often suffer from limitations in accuracy and efficiency. To address these challenges, we propose an adaptive multi-learning cooperation search algorithm (AMLCSA) for efficient identification of unknown parameters in PV models. AMLCSA is a novel algorithm inspired by teamwork behaviors in modern enterprises. It enhances the original cooperation search algorithm in two key aspects: (i) an adaptive multi-learning strategy that dynamically adjusts search ranges using adaptive weights, allowing better individuals to focus on local exploitation while guiding poorer individuals toward global exploration; and (ii) a chaotic grouping reflection strategy that introduces chaotic sequences to enhance population diversity and improve search performance. The effectiveness of AMLCSA is demonstrated on single-diode, double-diode, and three PV-module models. Simulation results show that AMLCSA offers significant advantages in convergence, accuracy, and stability compared to existing state-of-the-art algorithms.
Keywords: photovoltaic model; parameter identification; cooperation search algorithm; adaptive multiple learning; chaotic grouping reflection
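The chaotic grouping reflection strategy above relies on chaotic sequences to diversify the population. A common generator for this purpose in metaheuristics is the logistic map in its fully chaotic regime; whether AMLCSA uses this particular map is an assumption, and the sketch only illustrates the mechanism.

```python
# Logistic map x <- 4x(1-x): a deterministic but non-repeating sequence in
# (0, 1), often used to seed or perturb metaheuristic populations.
def logistic_map(x0=0.7, n=100):
    xs, x = [], x0
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        xs.append(x)
    return xs

seq = logistic_map()
```

Each value can be rescaled to a decision variable's bounds to place or reflect an individual, giving better coverage of the search space than uniform random draws in some settings.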
A systematic data-driven modelling framework for nonlinear distillation processes incorporating data intervals clustering and new integrated learning algorithm
5
Authors: Zhe Wang, Renchu He, Jian Long. Chinese Journal of Chemical Engineering, 2025(5): 182-199 (18 pages).
The distillation process is an important chemical process, and data-driven modelling has the potential to reduce model complexity compared to mechanistic modelling, thus improving the efficiency of process optimization and monitoring studies. However, the distillation process is highly nonlinear and has multiple uncertainty perturbation intervals, which makes accurate data-driven modelling challenging. This paper proposes a systematic data-driven modelling framework to solve these problems. Firstly, data segment variance is introduced into the K-means algorithm to form K-means data interval (KMDI) clustering, which separates the data into perturbed and steady-state intervals for steady-state data extraction. Secondly, the maximal information coefficient (MIC) is employed to calculate the nonlinear correlation between variables and remove redundant features. Finally, extreme gradient boosting (XGBoost) is integrated as the base learner into adaptive boosting (AdaBoost), with an error threshold (ET) set to improve the weight-update strategy, yielding the new integrated learning algorithm XGBoost-AdaBoost-ET. The superiority of the proposed framework is verified by applying it to a real industrial propylene distillation process.
Keywords: integrated learning algorithm; data interval clustering; feature selection; application of artificial intelligence in the distillation industry; data-driven modelling
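The variance-based interval idea behind KMDI clustering can be sketched simply: split the series into fixed segments, compute each segment's variance, and label low-variance segments as steady state. The real method clusters the variances with K-means; a fixed threshold stands in for that step here, and the data and threshold are invented.

```python
# Label fixed-length segments of a process series as steady or perturbed
# by their variance (a stand-in for K-means over segment variances).
def label_segments(series, seg_len, threshold):
    labels = []
    for i in range(0, len(series) - seg_len + 1, seg_len):
        seg = series[i:i + seg_len]
        mean = sum(seg) / len(seg)
        var = sum((v - mean) ** 2 for v in seg) / len(seg)
        labels.append("steady" if var < threshold else "perturbed")
    return labels

data = [10.0] * 8 + [10.0, 14.0, 6.0, 12.0] + [10.0] * 8
labels = label_segments(data, seg_len=4, threshold=0.5)
```

Only the segments labeled steady would then be passed on for steady-state model training, which is the extraction step the abstract describes.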
Construction and validation of a machine learning algorithm-based predictive model for difficult colonoscopy insertion
6
Authors: Ren-Xuan Gao, Xin-Lei Wang, Ming-Jie Tian, Xiao-Ming Li, Jia-Jia Zhang, Jun-Jing Wang, Jing Gao, Chao Zhang, Zhi-Ting Li. World Journal of Gastrointestinal Endoscopy, 2025(7): 149-161 (13 pages).
BACKGROUND: Difficulty of colonoscopy insertion (DCI) significantly affects colonoscopy effectiveness and serves as a key quality indicator. Predicting and evaluating DCI risk preoperatively is crucial for optimizing intraoperative strategies. AIM: To evaluate the predictive performance of machine learning (ML) algorithms for DCI by comparing three modeling approaches, identify factors influencing DCI, and develop a preoperative prediction model using ML algorithms to enhance colonoscopy quality and efficiency. METHODS: This cross-sectional study enrolled 712 patients who underwent colonoscopy at a tertiary hospital between June 2020 and May 2021. Demographic data, past medical history, medication use, and psychological status were collected. The endoscopist assessed DCI using a visual analogue scale. After univariate screening, predictive models were developed using multivariable logistic regression, least absolute shrinkage and selection operator (LASSO) regression, and random forest (RF) algorithms. Model performance was evaluated based on discrimination, calibration, and decision curve analysis (DCA), and results were visualized using nomograms. RESULTS: A total of 712 patients (53.8% male; mean age 54.5 ± 12.9 years) were included. Logistic regression analysis identified constipation [odds ratio (OR) = 2.254, 95% confidence interval (CI): 1.289-3.931], abdominal circumference (AC) (77.5-91.9 cm, OR = 1.895, 95% CI: 1.065-3.350; AC ≥ 92 cm, OR = 1.271, 95% CI: 0.730-2.188), and anxiety (OR = 1.071, 95% CI: 1.044-1.100) as predictive factors for DCI, validated by the LASSO and RF methods. The three models showed training/validation sensitivities of 0.826/0.925, 0.924/0.868, and 1.000/0.981; specificities of 0.602/0.511, 0.510/0.562, and 0.977/0.526; and corresponding areas under the receiver operating characteristic curve (AUCs) of 0.780 (0.737-0.823)/0.726 (0.654-0.799), 0.754 (0.710-0.798)/0.723 (0.656-0.791), and 1.000 (1.000-1.000)/0.754 (0.688-0.820), respectively. DCA indicated optimal net benefit within probability thresholds of 0-0.9 and 0.05-0.37. The RF model demonstrated superior diagnostic accuracy, with perfect training sensitivity (1.000) and the highest validation AUC (0.754), outperforming the other methods in clinical applicability. CONCLUSION: The RF-based model exhibited superior predictive accuracy for DCI compared to the multivariable logistic and LASSO regression models. This approach supports individualized preoperative optimization, enhancing colonoscopy quality through targeted risk stratification.
Keywords: colonoscopy; difficulty of colonoscopy insertion; machine learning algorithms; predictive model; logistic regression; least absolute shrinkage and selection operator regression; random forest
Runoff Modeling in Ungauged Catchments Using Machine Learning Algorithm-Based Model Parameters Regionalization Methodology (Cited: 2)
7
Authors: Houfa Wu, Jianyun Zhang, Zhenxin Bao, Guoqing Wang, Wensheng Wang, Yanqing Yang, Jie Wang. Engineering (SCIE, EI, CAS, CSCD), 2023(9): 93-104 (12 pages).
Model parameter estimation is a pivotal issue for runoff modeling in ungauged catchments. The nonlinear relationship between model parameters and catchment descriptors is a major obstacle for parameter regionalization, the most widely used approach. Runoff modeling was studied in 38 catchments located in the Yellow–Huai–Hai River Basin (YHHRB). The values of the Nash–Sutcliffe efficiency coefficient (NSE), coefficient of determination (R2), and percent bias (PBIAS) indicated acceptable performance of the soil and water assessment tool (SWAT) model in the YHHRB. Nine descriptors covering climate, soil, vegetation, and topography were used to express the catchment characteristics related to hydrological processes. The quantitative relationships between SWAT model parameters and the catchment descriptors were analyzed by six regression-based models: linear regression (LR) equations, support vector regression (SVR), random forest (RF), k-nearest neighbor (kNN), decision tree (DT), and radial basis function (RBF). Each of the 38 catchments was treated in turn as an ungauged catchment, and its parameters were estimated by regression models constructed from the remaining 37 donor catchments. A similarity-based regionalization scheme was also used for comparison with the regression-based approach. The results indicated that the SVR-based scheme modeled runoff with the highest accuracy in ungauged catchments. Compared with the traditional LR-based approach, the machine learning algorithms improved the accuracy of runoff modeling in ungauged catchments because of their outstanding capability to handle nonlinear relationships. The performances of the different approaches were similar in humid regions, while the advantages of the machine learning techniques were more evident in arid regions. When the study area contained nested catchments, the best results came from the similarity-based parameter regionalization scheme because of the high catchment density and short spatial distances. These findings could improve flood forecasting and water resources planning in regions that lack observed data.
Keywords: parameter estimation; ungauged catchments; regionalization scheme; machine learning algorithms; soil and water assessment tool model
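The regionalization idea above (estimating an ungauged catchment's parameters from donor catchments that are similar in descriptor space) can be sketched with an inverse-distance-weighted average, a simple relative of both the kNN and similarity-based schemes the abstract compares. Descriptor values and parameters below are invented for illustration.

```python
# Estimate a model parameter for a target (ungauged) catchment as the
# inverse-distance-weighted mean of donor catchments' calibrated values,
# with distance measured in (normalized) descriptor space.
def regionalize(target_desc, donors):
    num = den = 0.0
    for desc, param in donors:
        d = sum((t - s) ** 2 for t, s in zip(target_desc, desc)) ** 0.5
        w = 1.0 / max(d, 1e-9)    # closer donors weigh more
        num += w * param
        den += w
    return num / den

donors = [((0.1, 0.2), 1.0), ((0.9, 0.8), 3.0), ((0.5, 0.5), 2.0)]
estimate = regionalize((0.45, 0.55), donors)   # nearest donor dominates
```

The leave-one-out design in the paper works the same way: each catchment is held out in turn and its parameters are predicted from the other 37 donors.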
Genetic algorithm-optimized backpropagation neural network establishes a diagnostic prediction model for diabetic nephropathy: Combined machine learning and experimental validation in mice (Cited: 1)
8
Authors: WEI LIANG, ZONGWEI ZHANG, KEJU YANG, HONGTU HU, QIANG LUO, ANKANG YANG, LI CHANG, YUANYUAN ZENG. BIOCELL (SCIE), 2023(6): 1253-1263 (11 pages).
Background: Diabetic nephropathy (DN) is the most common complication of type 2 diabetes mellitus and the main cause of end-stage renal disease worldwide. Diagnostic biomarkers may allow early diagnosis and treatment of DN to reduce its prevalence and delay its progression. Kidney biopsy is the gold standard for diagnosing DN; however, its invasive character is its primary limitation. Machine learning provides a non-invasive and specific criterion for diagnosing DN, although traditional machine learning algorithms need to be improved to enhance diagnostic performance. Methods: We applied high-throughput RNA sequencing to obtain genes related to DN tubular tissues and normal tubular tissues of mice. Machine learning algorithms (random forest, LASSO logistic regression, and principal component analysis) were then used to identify key genes (CES1G, CYP4A14, NDUFA4, ABCC4, ACE). A genetic algorithm-optimized backpropagation neural network (GA-BPNN) was used to improve the DN diagnostic model. Results: The AUC value of the GA-BPNN model was 0.83 in the training dataset and 0.81 in the validation dataset, while the AUC values of the SVM model in the training and external validation datasets were 0.756 and 0.650, respectively. Thus, the GA-BPNN outperformed the traditional SVM model. This diagnostic model may enable personalized diagnosis and treatment of patients with DN. Immunohistochemical staining further confirmed that tissue and cell expression of NADH dehydrogenase (ubiquinone) 1 alpha subcomplex, 4-like 2 (NDUFA4L2) in tubular tissue was decreased in DN mice. Conclusion: The GA-BPNN model has better accuracy than the traditional SVM model and may provide an effective tool for diagnosing DN.
Keywords: diabetic nephropathy; renal tubule; machine learning; diagnostic model; genetic algorithm
A Literature Review on Model Conversion, Inference, and Learning Strategies in EdgeML with TinyML Deployment
9
Authors: Muhammad Arif, Muhammad Rashid. Computers, Materials & Continua, 2025(4): 13-64 (52 pages).
Edge Machine Learning (EdgeML) and Tiny Machine Learning (TinyML) are fast-growing fields that bring machine learning to resource-constrained devices, allowing real-time data processing and decision-making at the network's edge. However, the complexity of model conversion techniques, diverse inference mechanisms, and varied learning strategies make designing and deploying these models challenging. Additionally, deploying TinyML models on resource-constrained hardware with specific software frameworks has broadened EdgeML's applications across various sectors. These factors underscore the necessity for a comprehensive literature review, as current reviews do not systematically cover the most recent findings on these topics. This article therefore provides a comprehensive overview of state-of-the-art techniques in model conversion, inference mechanisms, and learning strategies within EdgeML, and in deploying these models on resource-constrained edge devices using TinyML. It identifies 90 research articles published between 2018 and 2025, categorizing them into two main areas: (1) model conversion, inference, and learning strategies in EdgeML, and (2) deploying TinyML models on resource-constrained hardware using specific software frameworks. In the first category, the synthesis compares and critically reviews various model conversion techniques, inference mechanisms, and learning strategies. In the second category, it identifies and elaborates on the major development boards, software frameworks, sensors, and algorithms used in applications across six major sectors. The article thus provides valuable insights for researchers, practitioners, and developers, assisting them in choosing suitable model conversion techniques, inference mechanisms, learning strategies, hardware development boards, software frameworks, sensors, and algorithms tailored to their specific needs and applications.
Keywords: edge machine learning; tiny machine learning; model compression; inference; learning algorithms
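A concrete model-conversion step common in the TinyML pipelines surveyed above is post-training quantization: mapping float32 weights to 8-bit integers with a scale and zero point. The minimal affine quantizer below is framework-independent and illustrates the arithmetic only; real toolchains add per-channel scales and calibration.

```python
# Affine uint8 quantization: w ≈ (q - zero) * scale, with q in [0, 255].
def quantize(weights):
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255.0 or 1.0          # avoid zero scale for constants
    zero = round(-lo / scale)                  # integer mapped to real 0.0
    q = [max(0, min(255, round(w / scale) + zero)) for w in weights]
    return q, scale, zero

def dequantize(q, scale, zero):
    return [(v - zero) * scale for v in q]

w = [-1.0, -0.25, 0.0, 0.5, 1.0]
q, scale, zero = quantize(w)
w_hat = dequantize(q, scale, zero)             # round-trip error <= scale
```

This single step cuts weight storage by 4x, which is often what makes a model fit the flash and RAM budgets of the microcontroller boards the review catalogues.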
The distribution modeling and analysis of Antarctic krill:impacts of algorithm and spatial resolution
10
Authors: LI Wenxiong, YING Yiping, ZHANG Jichang, ZHAO Yunxia, ZHU Jiancheng, FAN Gangzhou, MU Xiuxia, WANG Xinliang. Advances in Polar Science, 2025(4): 373-391 (19 pages).
Antarctic krill (Euphausia superba), widely distributed around Antarctica, is a key species supporting the biodiversity of the Southern Ocean ecosystem. The Commission for the Conservation of Antarctic Marine Living Resources (CCAMLR) has therefore managed the krill fishery in a precautionary way. CCAMLR is currently working to develop a refined krill fishery management approach based on more solid science, which requires accurate predictions of krill distribution. To address this need, this study investigated the effects of algorithm and spatial resolution on the performance of Antarctic krill distribution modelling. We integrated acoustic data from four surveys conducted in the waters adjacent to the Antarctic Peninsula with 11 environmental variables characterizing krill prey conditions, water mass properties, and seafloor topography. These data were processed at four spatial resolutions (5, 10, 15, and 20 km) to fit distribution models using four algorithms: Random Forests (RF), Generalized Additive Models (GAM), Extreme Gradient Boosting (XGBoost), and Artificial Neural Networks (ANN). Model performance was assessed and compared in terms of goodness-of-fit and predictive accuracy. The results showed that RF achieved the highest predictive performance at most resolutions, whereas GAM performed best at the coarsest resolution (20 km). XGBoost closely followed RF in accuracy and demonstrated robustness, as evidenced by highly consistent partial dependence curves across resolutions. In contrast, ANN exhibited limitations with smaller sample sizes, resulting in comparatively poorer predictive performance. The analysis revealed a trade-off whereby coarser spatial resolution improved model fit and mitigated zero-inflation at the expense of fine-scale information and overall predictive accuracy. Ensemble models integrating RF, GAM, and XGBoost are proposed as a balanced solution to improve predictive stability, offering a more robust scientific basis for the refinement of krill management.
Keywords: Antarctic krill; species distribution model; algorithm selection; spatial resolution; machine learning
Integrated optimization of reservoir production and layer configurations using relational and regression machine learning models
11
Authors: Qin-Yang Dai, Li-Ming Zhang, Kai Zhang, Hao Hao, Guo-Dong Chen, Xia Yan, Pi-Yang Liu, Bao-Bin Zhang, Chen-Yang Wang. Petroleum Science, 2025(9): 3745-3759 (15 pages).
This study introduces a novel approach to addressing the challenges of high-dimensional variables and strong nonlinearity in reservoir production and layer configuration optimization. For the first time, relational machine learning models are applied in reservoir development optimization. Traditional regression-based models often struggle in complex scenarios, but the proposed relational and regression-based composite differential evolution (RRCODE) method combines a Gaussian naive Bayes relational model with a radial basis function network regression model. This integration effectively captures complex relationships in the optimization process, improving both accuracy and convergence speed. Experimental tests on a multi-layer multi-channel reservoir model, the Egg reservoir model, and a real-field reservoir model (the S reservoir) demonstrate that RRCODE significantly reduces water injection and production volumes while increasing economic returns and cumulative oil recovery. Moreover, the surrogate models employed in RRCODE are lightweight, with low computational overhead. These results highlight RRCODE's superior performance in the integrated optimization of reservoir production and layer configurations, offering more efficient and economically viable solutions for oilfield development.
Keywords: surrogate model; reservoir management; evolutionary algorithm; joint optimization; layer configuration; production optimization; relational learning
Machine Learning Models for Early Warning of Coastal Flooding and Storm Surges
12
Authors: Puja Gholap, Ranjana Gore, Dipa Dattatray Dharmadhikari, Jyoti Deone, Shwetal Kishor Patil, Swapnil S. Chaudhari, Aarti Puri, Shital Yashwant Waware. Sustainable Marine Structures, 2025(3): 136-156 (21 pages).
Floods and storm surges pose significant threats to coastal regions worldwide, demanding timely and accurate early warning systems (EWS) for disaster preparedness. Traditional numerical and statistical methods often fall short in capturing complex, nonlinear, and real-time environmental dynamics. In recent years, machine learning (ML) and deep learning (DL) techniques have emerged as promising alternatives for enhancing the accuracy, speed, and scalability of EWS. This review critically evaluates the evolution of ML models such as Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), and Long Short-Term Memory (LSTM) networks in coastal flood prediction, highlighting their architectures, data requirements, performance metrics, and implementation challenges. A unique contribution of this work is its synthesis of real-time deployment challenges, including latency, edge-cloud trade-offs, and policy-level integration, areas often overlooked in prior literature. Furthermore, the review presents a comparative framework of model performance across different geographic and hydrologic settings, offering actionable insights for researchers and practitioners. Limitations of current AI-driven models, such as interpretability, data scarcity, and generalization across regions, are discussed in detail. Finally, the paper outlines future research directions, including hybrid modelling, transfer learning, explainable AI, and policy-aware alert systems. By bridging technical performance and operational feasibility, this review aims to guide the development of next-generation intelligent EWS for resilient and adaptive coastal management.
Keywords: coastal flood forecasting; deep learning algorithms; early warning systems (EWS); machine learning models; real-time flood monitoring; storm surge prediction
PM2.5 concentration prediction system combining fuzzy information granulation and multi-model ensemble learning
13
Authors: Yamei Chen, Jianzhou Wang, Runze Li, Jialu Gao. Journal of Environmental Sciences, 2025(10): 332-345 (14 pages).
With the rapid development of the economy, air pollution caused by industrial expansion has seriously harmed human health and social development. Establishing an effective air pollution concentration prediction system is therefore of great scientific and practical significance for accurate and reliable predictions. This paper proposes a combined point-interval prediction system for pollutant concentration that leverages neural networks, a meta-heuristic optimization algorithm, and fuzzy theory. Fuzzy information granulation is used in data preprocessing to transform numerical sequences into fuzzy granules for comprehensive feature extraction. The golden jackal optimization algorithm is employed in the optimization stage to fine-tune model hyperparameters. In the prediction stage, an ensemble learning method combines training results from multiple models to obtain final point predictions, while quantile regression and kernel density estimation are used for interval predictions on the test set. Experimental results demonstrate that the combined model achieves a high goodness of fit, with a coefficient of determination (R2) of 99.3%, and a maximum improvement in mean absolute percentage error (MAPE) over the benchmark models of 12.6%. This suggests that the proposed integrated learning system can provide more accurate deterministic predictions as well as reliable uncertainty analysis compared to traditional models, offering a practical reference for air quality early warning.
Keywords: air pollution prediction; fuzzy information granulation; meta-heuristic optimization algorithm; ensemble learning model; point-interval prediction
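Fuzzy information granulation, the preprocessing step named above, is commonly implemented by compressing each window of the series into a (Low, R, Up) granule; using the window minimum, mean, and maximum as a simple triangular granule is one standard choice. The paper's exact granulation may differ, and the PM2.5 values below are invented.

```python
# Compress a PM2.5 series into non-overlapping fuzzy granules: each window
# becomes (min, mean, max), summarizing its support and central tendency.
def granulate(series, window):
    granules = []
    for i in range(0, len(series) - window + 1, window):
        w = series[i:i + window]
        granules.append((min(w), sum(w) / len(w), max(w)))
    return granules

pm25 = [35, 42, 38, 55, 61, 58, 40, 37, 39]   # hypothetical hourly readings
granules = granulate(pm25, window=3)
```

Forecasting the three granule components instead of every raw point is what gives the system its interval (Low/Up) predictions alongside the point (R) prediction.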
Large Language Models for Effective Detection of Algorithmically Generated Domains:A Comprehensive Review
14
Authors: Hamed Alqahtani, Gulshan Kumar. Computer Modeling in Engineering & Sciences, 2025(8): 1439-1479 (41 pages).
Domain Generation Algorithms (DGAs) continue to pose a significant threat in modern malware infrastructures by enabling resilient and evasive communication with Command and Control (C&C) servers. Traditional detection methods, rooted in statistical heuristics, feature engineering, and shallow machine learning, struggle to adapt to the increasing sophistication, linguistic mimicry, and adversarial variability of DGA variants. The emergence of Large Language Models (LLMs) marks a transformative shift in this landscape. Leveraging deep contextual understanding, semantic generalization, and few-shot learning capabilities, LLMs such as BERT, GPT, and T5 have shown promising results in detecting both character-based and dictionary-based DGAs, including previously unseen (zero-day) variants. This paper provides a comprehensive and critical review of LLM-driven DGA detection, introducing a structured taxonomy of LLM architectures, evaluating the linguistic and behavioral properties of benchmark datasets, and comparing recent detection frameworks across accuracy, latency, robustness, and multilingual performance. We also highlight key limitations, including challenges in adversarial resilience, model interpretability, deployment scalability, and privacy risks. To address these gaps, we present a forward-looking research roadmap encompassing adversarial training, model compression, cross-lingual benchmarking, and real-time integration with SIEM/SOAR platforms. This survey aims to serve as a foundational resource for advancing the development of scalable, explainable, and operationally viable LLM-based DGA detection systems.
Keywords: adversarial domains; cyber threat detection; domain generation algorithms; large language models; machine learning security
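Among the "statistical heuristics" that LLM-based detectors are displacing, one classic lightweight DGA signal is the character-level Shannon entropy of the domain label: algorithmically generated names tend to score higher than dictionary words. A minimal version of that baseline feature:

```python
import math
from collections import Counter

# Shannon entropy H = -sum(p_c * log2(p_c)) over the label's characters.
def char_entropy(label: str) -> float:
    counts = Counter(label)
    n = len(label)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

benign = char_entropy("google")        # repeated letters -> lower entropy
dga_like = char_entropy("xq7v2kp9wz4r")  # 12 distinct chars -> log2(12)
```

This feature fails exactly where the review says traditional methods fail: dictionary-based DGAs that concatenate real words have benign-looking entropy, which is the gap contextual LLM detectors target.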
Back analysis of rock mass parameters in mechanized twin tunnels based on coupled auto machine learning and multi-objective optimization algorithm
15
Authors: Chengwen Wang, Xiaoli Liu, Jiubao Li, Enzhi Wang, Nan Hu, Wenli Yao, Zhihui He. Journal of Rock Mechanics and Geotechnical Engineering, 2025(11): 7038-7055 (18 pages).
Accurate determination of rock mass parameters is essential for ensuring the accuracy of numerical simulations. Displacement back-analysis is the most widely used method; however, the reliability of current approaches remains unsatisfactory. Therefore, this paper proposes a multistage rock mass parameter back-analysis method that considers the construction process and displacement losses, implemented through the coupling of numerical simulation, auto machine learning (AutoML), and multi-objective optimization algorithms (MOOAs). First, a parametric modeling platform for mechanized twin tunnels is developed, generating a dataset through extensive numerical simulations. Next, AutoML is used to establish a surrogate model linking rock parameters and displacements. The tunnel construction process is divided into multiple stages, transforming the rock mass parameter back-analysis into a multi-objective optimization problem, for which multi-objective optimization algorithms are introduced to obtain the rock mass parameters. The proposed method is validated in a mechanized twin tunnel project, demonstrating its accuracy and effectiveness. Compared with traditional single-stage back-analysis methods, the proposed model decreases the average absolute percentage error from 12.73% to 4.34%, significantly improving the accuracy of the back-analysis. Moreover, although accuracy increases significantly with the number of construction stages considered, the back-analysis time remains acceptable. This study provides an efficient and accurate displacement back-analysis method, paving the way for precise parameter determination in numerical simulations.
关键词 Back analysis of rock parameters Auto machine learning Multi-objective optimization algorithm Mechanized twin tunnels Parametric modeling
Q-Learning-Based Multimodal Adaptive Optimized Combination Forecasting for Photovoltaic Power
16
作者 隗知初 杨苹 +3 位作者 周钱雨凡 陈文皓 万思洋 崔嘉雁 《电力工程技术》 北大核心 2026年第1期115-124,163,共11页
To address the strong volatility and high randomness of photovoltaic (PV) power series, this paper proposes a Q-Learning-based multimodal adaptive optimized combination forecasting model for PV power. First, a variational mode decomposition method tuned by the whale optimization algorithm decomposes the original PV power series into sub-modes, and an integrated feature-screening model identifies the meteorological factors to which each sub-mode series is most sensitive. Then, four base forecasting models are built: a back-propagation neural network, a bidirectional long short-term memory network, a gated recurrent unit network, and a temporal convolutional network. Since different models differ in their ability to predict sub-series with different frequency characteristics, the Q-Learning algorithm is used to adaptively select the optimal combination of base models for each mode. Finally, the predictions of the sub-modes are superimposed and reconstructed to obtain the final forecast, which is validated on a high-resolution PV meteorology-power dataset. The results show that, compared with single models, the proposed model reduces the mean absolute error by 16.18% and the mean squared error by 17.00% on average.
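The model-selection step can be sketched with tabular Q-learning, where the state is the sub-mode and the action is the choice of base model. The per-mode validation errors below are synthetic stand-ins for the VMD sub-modes and the four base models (BP, BiLSTM, GRU, TCN):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy validation errors: errors[mode][model] for 3 sub-modes and 4 base models.
errors = np.array([[0.30, 0.10, 0.20, 0.25],
                   [0.15, 0.40, 0.12, 0.30],
                   [0.22, 0.18, 0.35, 0.08]])

n_modes, n_models = errors.shape
Q = np.zeros((n_modes, n_models))
alpha, eps = 0.1, 0.2

# Q-learning: state = sub-mode, action = base model, reward = -validation error.
for episode in range(2000):
    for mode in range(n_modes):
        if rng.random() < eps:                 # epsilon-greedy exploration
            a = rng.integers(n_models)
        else:
            a = int(np.argmax(Q[mode]))
        reward = -errors[mode, a] + rng.normal(0, 0.01)  # noisy evaluation
        Q[mode, a] += alpha * (reward - Q[mode, a])      # bandit-style update

best_model_per_mode = Q.argmax(axis=1)
print(best_model_per_mode)  # should match errors.argmin(axis=1)
```

Because each sub-mode is evaluated independently, the problem reduces to one bandit per mode; the epsilon-greedy exploration lets the agent keep sampling non-greedy models so a noisy early estimate cannot lock in a poor choice.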
关键词 Whale optimization algorithm Variational mode decomposition Q-learning Power forecasting Combination model Photovoltaic power generation
Some Features of Neural Networks as Nonlinearly Parameterized Models of Unknown Systems Using an Online Learning Algorithm
17
作者 Leonid S. Zhiteckii Valerii N. Azarskov +1 位作者 Sergey A. Nikolaienko Klaudia Yu. Solovchuk 《Journal of Applied Mathematics and Physics》 2018年第1期247-263,共17页
This paper derives the properties of an updated neural network model that is exploited to identify an unknown nonlinear system via the standard gradient learning algorithm. The convergence of this algorithm for online training of three-layer neural networks in a stochastic environment is studied. A special case, in which an unknown nonlinearity can be approximated exactly by some neural network with a nonlinear activation function in its output layer, is considered. To analyze the asymptotic behavior of the learning process, the so-called Lyapunov-like approach is utilized. As the Lyapunov function, the expected value of the squared approximation error, depending on the network parameters, is chosen. Within this approach, sufficient conditions guaranteeing the convergence of the learning algorithm with probability 1 are derived. Simulation results are presented to support the theoretical analysis.
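The online gradient update analysed in the paper can be illustrated with a small three-layer network whose output layer is also nonlinear, as in the special case considered. The plant below is an invented toy target (its output stays inside (-1, 1) so the tanh output layer can represent it), and the running squared error plays the role of the Lyapunov function:

```python
import numpy as np

rng = np.random.default_rng(2)

# Unknown plant to identify (assumed form, for illustration only).
def plant(u):
    return 0.8 * np.tanh(2.0 * u)

# Three-layer network with a nonlinear output activation; parameters are
# updated online, one sample at a time.
W1, b1 = rng.normal(0, 0.5, (8, 1)), np.zeros((8, 1))
W2, b2 = rng.normal(0, 0.5, (1, 8)), 0.0
eta = 0.05  # learning rate

errs = []
for t in range(5000):
    u = rng.uniform(-1.0, 1.0)
    y = plant(u) + rng.normal(0, 0.01)        # noisy measurement
    h = np.tanh(W1 * u + b1)                  # hidden layer
    yhat = np.tanh(W2 @ h + b2)[0, 0]         # nonlinear output layer
    e = yhat - y
    errs.append(e * e)                        # Lyapunov-like quantity: E[e^2]
    d_out = e * (1.0 - yhat ** 2)             # standard gradient (backprop)
    d_hid = (W2.T * d_out) * (1.0 - h ** 2)
    W2 -= eta * d_out * h.T
    b2 -= eta * d_out
    W1 -= eta * d_hid * u
    b1 -= eta * d_hid

print(np.mean(errs[:200]), np.mean(errs[-200:]))  # squared error should shrink
```

The decreasing average of `e*e` over training is exactly the behaviour the Lyapunov-function argument formalises: under the sufficient conditions of the paper, the expected squared error is non-increasing along the learning trajectory.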
关键词 Neural network Nonlinear model Online learning algorithm Lyapunov function Probabilistic convergence
A novel approach to identify the spatial characteristics of ozone-precursor sensitivity based on interpretable machine learning
18
作者 Huiling He Kaihui Zhao +6 位作者 Zibing Yuan Jin Shen Yujun Lin Shu Zhang Menglei Wang Anqi Wang Puyu Lian 《Journal of Environmental Sciences》 2026年第1期54-63,共10页
To curb the worsening tropospheric ozone (O_(3)) pollution problem in China, rapid and accurate identification of O_(3)-precursor sensitivity (OPS) is a crucial prerequisite for formulating effective contingency O_(3) pollution control strategies. However, currently widely used methods, such as statistical models and numerical models, exhibit inherent limitations in identifying OPS in a timely and accurate manner. In this study, we developed a novel approach to identify OPS based on an eXtreme Gradient Boosting model, the Shapley additive explanation (SHAP) algorithm, and volatile organic compound (VOC) photochemical decay adjustment, using meteorology and speciated pollutant monitoring data as input. By comparing the difference in SHAP values between the base scenario and a precursor reduction scenario for nitrogen oxides (NO_(x)) and VOCs, OPS was divided into NO_(x)-limited, VOCs-limited, and transition regimes. Using the long-lasting O_(3) pollution episode in the autumn of 2022 in the Guangdong-Hong Kong-Macao Greater Bay Area (GBA) as an example, we demonstrated large spatiotemporal heterogeneities of OPS over the GBA, which generally shifted from NO_(x)-limited to VOCs-limited from September to October and were more inclined to be VOCs-limited in the central and NO_(x)-limited in the peripheral areas. This study developed an innovative OPS identification method by comparing the difference in SHAP values before and after precursor emission reduction. Our method enables accurate identification of OPS on a time scale of seconds, thereby providing a state-of-the-art tool for rapid guidance of spatially specific O_(3) control strategies.
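The regime classification can be sketched as follows. The toy response function below is an invented stand-in for the trained XGBoost model, and raw prediction differences stand in for the SHAP-value differences the paper actually compares; the 20% scenario cut and the tolerance threshold are likewise illustrative:

```python
# Toy photochemical response standing in for the trained model:
# O3 as a nonlinear function of NOx and VOC concentrations (illustrative only).
def o3_model(nox, voc):
    return 10.0 * voc * nox / (voc + 4.0 * nox)

def classify_ops(nox, voc, cut=0.2, tol=0.05):
    base = o3_model(nox, voc)
    d_nox = base - o3_model(nox * (1 - cut), voc)  # O3 drop from a NOx cut
    d_voc = base - o3_model(nox, voc * (1 - cut))  # O3 drop from a VOC cut
    # In the paper this comparison is made on SHAP values between the base and
    # reduction scenarios; here the model response difference plays that role.
    if d_nox > d_voc * (1 + tol):
        return "NOx-limited"
    if d_voc > d_nox * (1 + tol):
        return "VOCs-limited"
    return "transition"

print(classify_ops(nox=1.0, voc=20.0))  # high VOC/NOx ratio -> NOx-limited
print(classify_ops(nox=5.0, voc=5.0))   # low VOC/NOx ratio -> VOCs-limited
```

Because the classification needs only two forward evaluations of an already-trained model per location, it runs in seconds, which is the speed advantage the study claims over chemical transport model scenario runs.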
关键词 O_(3)-precursor sensitivity Machine learning Extreme gradient boosting model Shapley algorithm Greater Bay Area
Research on a Smart Grid Management and Control Model Based on the Deep Q-learning Algorithm
19
作者 王筠 李志鹏 +2 位作者 项旭 张军堂 石雷波 《自动化技术与应用》 2026年第2期54-57,142,共5页
A smart grid management and control model based on the deep Q-learning algorithm is designed. Verifiable credentials (VC) and decentralized identities (DID) serve as application identity credentials together with software-defined networking (SDN) controllers; combined with a dynamic trust evaluation algorithm and attribute-based access control policies, a blockchain-based distributed SDN management and control model for the smart grid is constructed. Under changing resource allocation, dynamically varying network topology, and continuously evolving security threats, the blockchain-based distributed SDN network is optimized. Experimental results show that after optimization with deep Q-learning, the cumulative reward of the model increases markedly; the approach performs well on multiple security metrics and can evict malicious domains, ensuring a secure network environment.
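A tabular sketch of the Q-learning access-control idea follows; the paper uses a deep network, and the trust discretisation, reward values, and request model below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Discretised trust score of a requesting domain (0 = untrusted ... 4 = trusted),
# actions: 0 = deny, 1 = allow.  A toy stand-in for the deep-Q controller.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.2, 0.9, 0.1

def reward(state, action):
    malicious = state <= 1                    # low-trust domains act maliciously
    if action == 1:                           # allow
        return -1.0 if malicious else 1.0
    return 1.0 if malicious else -0.2         # deny: blocked attack vs lost service

returns = []
for episode in range(300):
    total = 0.0
    for _ in range(20):                       # 20 access requests per episode
        s = rng.integers(n_states)
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
        r = reward(s, a)
        s2 = rng.integers(n_states)           # next request arrives independently
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        total += r
    returns.append(total)

policy = Q.argmax(axis=1)
print(policy)  # deny for low-trust states, allow for high-trust states
```

The rising per-episode return mirrors the "cumulative reward increases markedly" observation in the abstract, and the learned policy of denying low-trust states is the tabular analogue of evicting malicious domains.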
关键词 SDN controller Distributed SDN network Deep Q-learning algorithm Blockchain Smart grid management and control model
Selective Ensemble Extreme Learning Machine Modeling of Effluent Quality in Wastewater Treatment Plants (Cited by 7)
20
作者 Li-Jie Zhao Tian-You Chai De-Cheng Yuan 《International Journal of Automation and Computing》 EI 2012年第6期627-633,共7页
Real-time and reliable measurements of effluent quality are essential for improving operating efficiency and reducing energy consumption in the wastewater treatment process. Owing to the low accuracy and unstable performance of traditional effluent quality measurements, we propose a selective ensemble extreme learning machine modeling method to enhance effluent quality predictions. The extreme learning machine algorithm is inserted into a selective ensemble frame as the component model, since it runs much faster and provides better generalization performance than other popular learning algorithms. Ensemble extreme learning machine models overcome the variation across different simulation trials of a single model. Selective ensemble based on a genetic algorithm is used to further exclude poor components from the available ensemble in order to reduce computational complexity and improve generalization performance. The proposed method is verified with data from an industrial wastewater treatment plant located in Shenyang, China. Experimental results show that the proposed method has relatively stronger generalization and higher accuracy than partial least squares, neural network partial least squares, a single extreme learning machine, and an ensemble extreme learning machine model.
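The selective-ensemble step can be sketched as follows. The regression data are synthetic stand-ins for plant measurements, and an exhaustive subset search stands in for the genetic algorithm used in the paper:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)

# Toy effluent-quality regression data (illustrative, not plant data).
X = rng.uniform(-1, 1, (200, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.05, 200)
Xtr, ytr, Xval, yval = X[:150], y[:150], X[150:], y[150:]

def train_elm(n_hidden=20):
    """Extreme learning machine: random input weights, least-squares output."""
    W = rng.normal(0, 1, (3, n_hidden))
    b = rng.normal(0, 1, n_hidden)
    H = np.tanh(Xtr @ W + b)
    beta = np.linalg.lstsq(H, ytr, rcond=None)[0]
    return lambda Xn: np.tanh(Xn @ W + b) @ beta

models = [train_elm() for _ in range(6)]
preds = np.array([m(Xval) for m in models])          # shape (6, n_val)

# Selective ensemble: find the subset whose averaged prediction has the
# lowest validation error (exhaustive here; the paper uses a genetic algorithm).
best_rmse, best_subset = np.inf, None
for k in range(1, 7):
    for subset in combinations(range(6), k):
        rmse = np.sqrt(np.mean((preds[list(subset)].mean(axis=0) - yval) ** 2))
        if rmse < best_rmse:
            best_rmse, best_subset = rmse, subset

full_rmse = np.sqrt(np.mean((preds.mean(axis=0) - yval) ** 2))
print(best_subset, round(best_rmse, 4), round(full_rmse, 4))
```

Since the full ensemble is itself one of the candidate subsets, the selected subset can never do worse on the validation set, which is the core argument for selective over plain ensembling; the genetic algorithm simply makes this search tractable when the component count is large.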
关键词 Wastewater treatment process Effluent quality prediction Extreme learning machine Selective ensemble model Genetic algorithm