Journal Articles
1,370 articles found.
1. Variogram modelling optimisation using genetic algorithm and machine learning linear regression: application for Sequential Gaussian Simulations mapping
Authors: André William Boroh, Alpha Baster Kenfack Fokem, Martin Luther Mfenjou, Firmin Dimitry Hamat, Fritz Mbounja Besseme. Artificial Intelligence in Geosciences, 2025, Issue 1, pp. 177-190 (14 pages)
The objective of this study is to develop an advanced approach to variogram modelling by integrating genetic algorithms (GA) with machine learning-based linear regression, aiming to improve the accuracy and efficiency of geostatistical analysis, particularly in mineral exploration. The study combines GA and machine learning to optimise variogram parameters, including range, sill, and nugget, by minimising the root mean square error (RMSE) and maximising the coefficient of determination (R²). The experimental variograms were computed and modelled using theoretical models, followed by optimisation via evolutionary algorithms. The method was applied to gravity data from the Ngoura-Batouri-Kette mining district in Eastern Cameroon, covering 141 data points. Sequential Gaussian Simulations (SGS) were employed for predictive mapping to validate simulated results against true values. Key findings show variograms with ranges between 24.71 km and 49.77 km, and optimised RMSE and R² values of 11.21 mGal² and 0.969, respectively, after 42 generations of GA optimisation. Predictive mapping using SGS demonstrated that simulated values closely matched true values, with the simulated mean at 21.75 mGal compared to the true mean of 25.16 mGal, and variances of 465.70 mGal² and 555.28 mGal², respectively. The results confirmed spatial variability and anisotropies in the N170-N210 directions, consistent with prior studies. This work presents a novel integration of GA and machine learning for variogram modelling, offering an automated, efficient approach to parameter estimation. The methodology significantly enhances predictive geostatistical models, contributing to the advancement of mineral exploration and improving the precision and speed of decision-making in the petroleum and mining industries.
Keywords: Variogram modelling; Genetic algorithm (GA); Machine learning; Gravity data; Mineral exploration
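The GA loop the abstract describes — evolving (nugget, sill, range) to minimise RMSE against an empirical variogram — can be sketched in a few lines. This is a minimal illustration with a synthetic empirical variogram and a spherical model; the population size, mutation scale, and bounds are assumptions, not the authors' settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def spherical(h, nugget, sill, range_a):
    """Spherical variogram model gamma(h)."""
    return np.where(h < range_a,
                    nugget + (sill - nugget) * (1.5 * h / range_a - 0.5 * (h / range_a) ** 3),
                    sill)

# Synthetic "empirical" variogram: lag distances (km) and gamma values (mGal^2)
lags = np.linspace(2, 60, 20)
gamma_emp = spherical(lags, 5.0, 480.0, 35.0) + rng.normal(0, 10, lags.size)

def rmse(params):
    nugget, sill, range_a = params
    return np.sqrt(np.mean((spherical(lags, nugget, sill, range_a) - gamma_emp) ** 2))

# Simple real-coded GA: elitism, blend crossover, Gaussian mutation
bounds = np.array([[0, 50], [100, 800], [5, 80]])   # nugget, sill, range
pop = rng.uniform(bounds[:, 0], bounds[:, 1], (40, 3))
for gen in range(42):
    fitness = np.array([rmse(p) for p in pop])
    elite = pop[np.argsort(fitness)[:10]]            # keep the 10 best
    parents = elite[rng.integers(0, 10, (30, 2))]
    alpha = rng.uniform(0, 1, (30, 1))
    children = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]
    children += rng.normal(0, 0.05, children.shape) * (bounds[:, 1] - bounds[:, 0])
    pop = np.vstack([elite, np.clip(children, bounds[:, 0], bounds[:, 1])])

best = pop[np.argmin([rmse(p) for p in pop])]
print("nugget, sill, range:", best.round(2), "RMSE:", rmse(best).round(3))
```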
2. Adaptive Multi-Learning Cooperation Search Algorithm for Photovoltaic Model Parameter Identification
Authors: Xu Chen, Shuai Wang, Kaixun He. Computers, Materials & Continua, 2025, Issue 10, pp. 1779-1806 (28 pages)
Accurate and reliable photovoltaic (PV) modeling is crucial for the performance evaluation, control, and optimization of PV systems. However, existing methods for PV parameter identification often suffer from limitations in accuracy and efficiency. To address these challenges, we propose an adaptive multi-learning cooperation search algorithm (AMLCSA) for efficient identification of unknown parameters in PV models. AMLCSA is a novel algorithm inspired by teamwork behaviors in modern enterprises. It enhances the original cooperation search algorithm in two key aspects: (i) an adaptive multi-learning strategy that dynamically adjusts search ranges using adaptive weights, allowing better individuals to focus on local exploitation while guiding poorer individuals toward global exploration; and (ii) a chaotic grouping reflection strategy that introduces chaotic sequences to enhance population diversity and improve search performance. The effectiveness of AMLCSA is demonstrated on single-diode, double-diode, and three PV-module models. Simulation results show that AMLCSA offers significant advantages in convergence, accuracy, and stability compared to existing state-of-the-art algorithms.
Keywords: Photovoltaic model parameter identification; Cooperation search algorithm; Adaptive multiple learning; Chaotic grouping reflection
3. A systematic data-driven modelling framework for nonlinear distillation processes incorporating data intervals clustering and new integrated learning algorithm
Authors: Zhe Wang, Renchu He, Jian Long. Chinese Journal of Chemical Engineering, 2025, Issue 5, pp. 182-199 (18 pages)
The distillation process is an important chemical process, and the application of a data-driven modelling approach has the potential to reduce model complexity compared to mechanistic modelling, thus improving the efficiency of process optimization or monitoring studies. However, the distillation process is highly nonlinear and has multiple uncertainty perturbation intervals, which brings challenges to accurate data-driven modelling of distillation processes. This paper proposes a systematic data-driven modelling framework to solve these problems. Firstly, data segment variance was introduced into the K-means algorithm to form K-means data interval (KMDI) clustering, in order to cluster the data into perturbed and steady-state intervals for steady-state data extraction. Secondly, the maximal information coefficient (MIC) was employed to calculate the nonlinear correlation between variables for removing redundant features. Finally, extreme gradient boosting (XGBoost) was integrated as the base learner into adaptive boosting (AdaBoost), with an error threshold (ET) set to improve the weight-update strategy, to construct the new integrated learning algorithm, XGBoost-AdaBoost-ET. The superiority of the proposed framework is verified by applying it to a real industrial propylene distillation process.
Keywords: Integrated learning algorithm; Data intervals clustering; Feature selection; Application of artificial intelligence in distillation industry; Data-driven modelling
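The XGBoost-inside-AdaBoost stack is straightforward to assemble from standard libraries; the paper's error-threshold (ET) weight-update modification is not part of stock scikit-learn, so the sketch below shows only the plain integration, on toy data standing in for the steady-state distillation measurements:

```python
import numpy as np
from sklearn.ensemble import AdaBoostRegressor
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

# Toy stand-in for steady-state process data: X = process variables, y = purity
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 8))
y = X[:, 0] ** 2 + np.sin(X[:, 1]) + 0.1 * rng.normal(size=500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# XGBoost as the base learner inside AdaBoost (scikit-learn >= 1.2 uses `estimator=`)
model = AdaBoostRegressor(
    estimator=XGBRegressor(n_estimators=50, max_depth=3, learning_rate=0.1),
    n_estimators=10,
    learning_rate=0.5,
    random_state=0,
)
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", round(model.score(X_te, y_te), 3))
```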
4. Construction and validation of a machine learning algorithm-based predictive model for difficult colonoscopy insertion
Authors: Ren-Xuan Gao, Xin-Lei Wang, Ming-Jie Tian, Xiao-Ming Li, Jia-Jia Zhang, Jun-Jing Wang, Jing Gao, Chao Zhang, Zhi-Ting Li. World Journal of Gastrointestinal Endoscopy, 2025, Issue 7, pp. 149-161 (13 pages)
BACKGROUND: Difficulty of colonoscopy insertion (DCI) significantly affects colonoscopy effectiveness and serves as a key quality indicator. Predicting and evaluating DCI risk preoperatively is crucial for optimizing intraoperative strategies. AIM: To evaluate the predictive performance of machine learning (ML) algorithms for DCI by comparing three modeling approaches, identify factors influencing DCI, and develop a preoperative prediction model using ML algorithms to enhance colonoscopy quality and efficiency. METHODS: This cross-sectional study enrolled 712 patients who underwent colonoscopy at a tertiary hospital between June 2020 and May 2021. Demographic data, past medical history, medication use, and psychological status were collected. The endoscopist assessed DCI using the visual analogue scale. After univariate screening, predictive models were developed using multivariable logistic regression, least absolute shrinkage and selection operator (LASSO) regression, and random forest (RF) algorithms. Model performance was evaluated based on discrimination, calibration, and decision curve analysis (DCA), and results were visualized using nomograms. RESULTS: A total of 712 patients (53.8% male; mean age 54.5 ± 12.9 years) were included. Logistic regression analysis identified constipation [odds ratio (OR) = 2.254, 95% confidence interval (CI): 1.289-3.931], abdominal circumference (AC) (77.5-91.9 cm: OR = 1.895, 95%CI: 1.065-3.350; AC ≥ 92 cm: OR = 1.271, 95%CI: 0.730-2.188), and anxiety (OR = 1.071, 95%CI: 1.044-1.100) as predictive factors for DCI, validated by LASSO and RF methods. Model performance revealed training/validation sensitivities of 0.826/0.925, 0.924/0.868, and 1.000/0.981; specificities of 0.602/0.511, 0.510/0.562, and 0.977/0.526; and corresponding areas under the receiver operating characteristic curve (AUCs) of 0.780 (0.737-0.823)/0.726 (0.654-0.799), 0.754 (0.710-0.798)/0.723 (0.656-0.791), and 1.000 (1.000-1.000)/0.754 (0.688-0.820), respectively. DCA indicated optimal net benefit within probability thresholds of 0-0.9 and 0.05-0.37. The RF model demonstrated superior diagnostic accuracy, reflected by perfect training sensitivity (1.000) and the highest validation AUC (0.754), outperforming the other methods in clinical applicability. CONCLUSION: The RF-based model exhibited superior predictive accuracy for DCI compared to the multivariable logistic and LASSO regression models. This approach supports individualized preoperative optimization, enhancing colonoscopy quality through targeted risk stratification.
Keywords: Colonoscopy; Difficulty of colonoscopy insertion; Machine learning algorithms; Predictive model; Logistic regression; Least absolute shrinkage and selection operator regression; Random forest
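A minimal sketch of the model comparison the abstract reports — logistic regression, an L1-penalised (LASSO-style) logistic model, and a random forest scored by validation AUC — on synthetic data standing in for the 712-patient cohort:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the clinical dataset (712 patients, a few predictors)
X, y = make_classification(n_samples=712, n_features=8, n_informative=4,
                           weights=[0.7, 0.3], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    # L1 penalty approximates the LASSO variable-selection step
    "lasso": LogisticRegression(penalty="l1", solver="liblinear", C=0.5),
    "random forest": RandomForestClassifier(n_estimators=300, random_state=0),
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, m.predict_proba(X_te)[:, 1])
    print(f"{name}: validation AUC = {auc:.3f}")
```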
5. A Literature Review on Model Conversion, Inference, and Learning Strategies in EdgeML with TinyML Deployment
Authors: Muhammad Arif, Muhammad Rashid. Computers, Materials & Continua, 2025, Issue 4, pp. 13-64 (52 pages)
Edge Machine Learning (EdgeML) and Tiny Machine Learning (TinyML) are fast-growing fields that bring machine learning to resource-constrained devices, allowing real-time data processing and decision-making at the network's edge. However, the complexity of model conversion techniques, diverse inference mechanisms, and varied learning strategies make designing and deploying these models challenging. Additionally, deploying TinyML models on resource-constrained hardware with specific software frameworks has broadened EdgeML's applications across various sectors. These factors underscore the necessity for a comprehensive literature review, as current reviews do not systematically encompass the most recent findings on these topics. Consequently, this review provides a comprehensive overview of state-of-the-art techniques in model conversion, inference mechanisms, and learning strategies within EdgeML, and in deploying these models on resource-constrained edge devices using TinyML. It identifies 90 research articles published between 2018 and 2025, categorizing them into two main areas: (1) model conversion, inference, and learning strategies in EdgeML, and (2) deploying TinyML models on resource-constrained hardware using specific software frameworks. In the first category, the synthesis of selected research articles compares and critically reviews various model conversion techniques, inference mechanisms, and learning strategies. In the second category, the synthesis identifies and elaborates on the major development boards, software frameworks, sensors, and algorithms used in applications across six major sectors. As a result, this article provides valuable insights for researchers, practitioners, and developers, assisting them in choosing suitable model conversion techniques, inference mechanisms, learning strategies, hardware development boards, software frameworks, sensors, and algorithms tailored to their specific needs and applications.
Keywords: Edge machine learning; Tiny machine learning; Model compression; Inference; Learning algorithms
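Model conversion, one of the two focus areas of this review, typically means exporting a trained network to a compact on-device format. A minimal sketch of the common TensorFlow Lite path with default post-training quantization (the toy model and file name are illustrative, not from the review):

```python
import tensorflow as tf

# A tiny Keras model standing in for an EdgeML workload
model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])

# Convert to TensorFlow Lite with default post-training quantization,
# a common EdgeML/TinyML model-conversion path
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
print(f"TinyML artifact size: {len(tflite_model)} bytes")
```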
6. PM2.5 concentration prediction system combining fuzzy information granulation and multi-model ensemble learning
Authors: Yamei Chen, Jianzhou Wang, Runze Li, Jialu Gao. Journal of Environmental Sciences, 2025, Issue 10, pp. 332-345 (14 pages)
With the rapid development of the economy, air pollution caused by industrial expansion has caused serious harm to human health and social development. Establishing an effective air pollution concentration prediction system is therefore of great scientific and practical significance for accurate and reliable predictions. This paper proposes a combined point-interval prediction system for pollutant concentration by leveraging neural networks, a meta-heuristic optimization algorithm, and fuzzy theory. Fuzzy information granulation technology is used in data preprocessing to transform numerical sequences into fuzzy particles for comprehensive feature extraction. The golden jackal optimization algorithm is employed in the optimization stage to fine-tune model hyperparameters. In the prediction stage, an ensemble learning method combines training results from multiple models to obtain final point predictions, while quantile regression and kernel density estimation methods are used for interval predictions on the test set. Experimental results demonstrate that the combined model achieves a high goodness-of-fit coefficient of determination (R²) of 99.3% and a maximum difference in prediction accuracy (mean absolute percentage error, MAPE) relative to the benchmark models of 12.6%. This suggests that the integrated learning system proposed in this paper can provide more accurate deterministic predictions as well as reliable uncertainty analysis compared to traditional models, offering a practical reference for air quality early warning.
Keywords: Air pollution prediction; Fuzzy information granulation; Meta-heuristic optimization algorithm; Ensemble learning model; Point-interval prediction
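The interval-prediction step can be illustrated with quantile regression alone: fitting separate models for the lower and upper quantiles yields a prediction band whose empirical coverage can be checked. The sketch below uses scikit-learn's quantile-loss gradient boosting on a toy lagged series; it is a simplified stand-in for the paper's granulation-plus-ensemble pipeline:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Toy PM2.5-like series turned into a supervised problem with 4 lagged inputs
rng = np.random.default_rng(2)
series = 50 + 20 * np.sin(np.arange(600) / 25) + rng.normal(0, 5, 600)
X = np.column_stack([series[i:-(4 - i)] for i in range(4)])
y = series[4:]

point = GradientBoostingRegressor().fit(X[:-100], y[:-100])
lower = GradientBoostingRegressor(loss="quantile", alpha=0.05).fit(X[:-100], y[:-100])
upper = GradientBoostingRegressor(loss="quantile", alpha=0.95).fit(X[:-100], y[:-100])

lo, hi = lower.predict(X[-100:]), upper.predict(X[-100:])
coverage = np.mean((y[-100:] >= lo) & (y[-100:] <= hi))
print(f"90% interval empirical coverage: {coverage:.2f}")
```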
7. Large Language Models for Effective Detection of Algorithmically Generated Domains: A Comprehensive Review
Authors: Hamed Alqahtani, Gulshan Kumar. Computer Modeling in Engineering & Sciences, 2025, Issue 8, pp. 1439-1479 (41 pages)
Domain Generation Algorithms (DGAs) continue to pose a significant threat in modern malware infrastructures by enabling resilient and evasive communication with Command and Control (C&C) servers. Traditional detection methods, rooted in statistical heuristics, feature engineering, and shallow machine learning, struggle to adapt to the increasing sophistication, linguistic mimicry, and adversarial variability of DGA variants. The emergence of Large Language Models (LLMs) marks a transformative shift in this landscape. Leveraging deep contextual understanding, semantic generalization, and few-shot learning capabilities, LLMs such as BERT, GPT, and T5 have shown promising results in detecting both character-based and dictionary-based DGAs, including previously unseen (zero-day) variants. This paper provides a comprehensive and critical review of LLM-driven DGA detection, introducing a structured taxonomy of LLM architectures, evaluating the linguistic and behavioral properties of benchmark datasets, and comparing recent detection frameworks across accuracy, latency, robustness, and multilingual performance. We also highlight key limitations, including challenges in adversarial resilience, model interpretability, deployment scalability, and privacy risks. To address these gaps, we present a forward-looking research roadmap encompassing adversarial training, model compression, cross-lingual benchmarking, and real-time integration with SIEM/SOAR platforms. This survey aims to serve as a foundational resource for advancing the development of scalable, explainable, and operationally viable LLM-based DGA detection systems.
Keywords: Adversarial domains; Cyber threat detection; Domain generation algorithms; Large language models; Machine learning security
8. Adaptive learning algorithm based on mixture Gaussian background (Cited by 9)
Authors: Zha Yufei, Bi Duyan. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2007, Issue 2, pp. 369-376 (8 pages)
The key problem of the adaptive mixture background model is that the parameters can adaptively change according to the input data. To address this problem, a new method is proposed. Firstly, the recursive equations are inferred based on the maximum likelihood rule. Secondly, the forgetting factor and learning rate factor are redefined, and more general formulations are obtained by analyzing their practical functions. Lastly, the convergence of the proposed algorithm is proved, showing that the estimation converges to a local maximum of the data likelihood function according to stochastic approximation theory. The experiments show that the proposed learning algorithm outperforms previous methods in both convergence rate and accuracy.
Keywords: Mixture Gaussian model; Background model; Learning algorithm
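For orientation, an adaptive mixture-of-Gaussians background model with an explicit learning-rate factor is available off the shelf in OpenCV. The sketch below is a generic MOG2 usage example (video path and parameter values are illustrative), not the paper's modified update rules:

```python
import cv2

# OpenCV's Gaussian-mixture background model; `learningRate` plays the role of
# the learning-rate factor discussed in the paper (-1 lets OpenCV auto-adapt)
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=False)

cap = cv2.VideoCapture("surveillance.mp4")  # any video file or camera index
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Foreground mask; a smaller learning rate means slower background adaptation
    fg_mask = subtractor.apply(frame, learningRate=0.005)
    cv2.imshow("foreground", fg_mask)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```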
9. Advancing automated pupillometry: a practical deep learning model utilizing infrared pupil images
Authors: Dai Guangzheng, Yu Sile, Liu Ziming, Yan Hairu, He Xingru. International Eye Science (国际眼科杂志) (CAS), 2024, Issue 10, pp. 1522-1528 (7 pages)
AIM: To establish pupil diameter measurement algorithms based on infrared images that can be used in real-world clinical settings. METHODS: A total of 188 patients from the outpatient clinic at He Eye Specialist Shenyang Hospital from September to December 2022 were included, and 13,470 infrared pupil images were collected for the study. All infrared images for pupil segmentation were labeled using the Labelme software. The computation of pupil diameter is divided into four steps: image pre-processing, pupil identification and localization, pupil segmentation, and diameter calculation. Two major models are used in the computation process: the modified YoloV3 and DeeplabV3+ models, which must be trained beforehand. RESULTS: The test dataset included 1348 infrared pupil images. On the test dataset, the modified YoloV3 model had a detection rate of 99.98% and an average precision (AP) of 0.80 for pupils. The DeeplabV3+ model achieved a background intersection over union (IoU) of 99.23%, a pupil IoU of 93.81%, and a mean IoU of 96.52%. The pupil diameters in the test dataset ranged from 20 to 56 pixels, with a mean of 36.06 ± 6.85 pixels. The absolute error in pupil diameters between predicted and actual values ranged from 0 to 7 pixels, with a mean absolute error (MAE) of 1.06 ± 0.96 pixels. CONCLUSION: This study successfully demonstrates a robust infrared image-based pupil diameter measurement algorithm, proven to be highly accurate and reliable for clinical application.
Keywords: Pupil; Infrared image; Algorithm; Deep learning model
10. DeepSurNet-NSGA II: Deep Surrogate Model-Assisted Multi-Objective Evolutionary Algorithm for Enhancing Leg Linkage in Walking Robots
Authors: Sayat Ibrayev, Batyrkhan Omarov, Arman Ibrayeva, Zeinel Momynkulov. Computers, Materials & Continua (SCIE, EI), 2024, Issue 10, pp. 229-249 (21 pages)
This research paper presents a comprehensive investigation into the effectiveness of DeepSurNet-NSGA II (Deep Surrogate Model-Assisted Non-dominated Sorting Genetic Algorithm II) for solving complex multi-objective optimization problems, with a particular focus on robotic leg-linkage design. The study introduces an innovative approach that integrates deep learning-based surrogate models with the robust Non-dominated Sorting Genetic Algorithm II, aiming to enhance the efficiency and precision of the optimization process. Through a series of empirical experiments and algorithmic analyses, the paper demonstrates a high degree of correlation between solutions generated by DeepSurNet-NSGA II and those obtained from direct experimental methods, underscoring the algorithm's capability to accurately approximate the Pareto-optimal frontier while significantly reducing computational demands. The methodology encompasses a detailed exploration of the algorithm's configuration, the experimental setup, and the criteria for performance evaluation, ensuring the reproducibility of results and facilitating future advancements in the field. The findings of this study not only confirm the practical applicability and theoretical soundness of DeepSurNet-NSGA II in navigating the intricacies of multi-objective optimization but also highlight its potential as a transformative tool in engineering and design optimization. By bridging the gap between complex optimization challenges and achievable solutions, this research contributes valuable insights into the optimization domain, offering a promising direction for future inquiries and technological innovations.
Keywords: Multi-objective optimization; Genetic algorithm; Surrogate model; Deep learning; Walking robots
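The core pattern — NSGA-II evaluating cheap learned surrogates instead of an expensive simulation — can be sketched with pymoo and scikit-learn. The objective functions, sample sizes, and network sizes below are placeholders, not the paper's linkage model:

```python
import numpy as np
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import Problem
from pymoo.optimize import minimize
from sklearn.neural_network import MLPRegressor

# Pretend "expensive simulation": two objectives of a 4-variable design
def simulate(X):
    f1 = np.sum((X - 0.3) ** 2, axis=1)   # e.g., tracking error
    f2 = np.sum((X + 0.3) ** 2, axis=1)   # e.g., energy use
    return np.column_stack([f1, f2])

# Train cheap surrogates on a small design-of-experiments sample
rng = np.random.default_rng(3)
X_doe = rng.uniform(-1, 1, (200, 4))
F_doe = simulate(X_doe)
surrogates = [MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                           random_state=0).fit(X_doe, F_doe[:, i]) for i in range(2)]

class SurrogateProblem(Problem):
    def __init__(self):
        super().__init__(n_var=4, n_obj=2, xl=-1.0, xu=1.0)
    def _evaluate(self, X, out, *args, **kwargs):
        # NSGA-II queries the surrogates instead of the expensive simulation
        out["F"] = np.column_stack([s.predict(X) for s in surrogates])

res = minimize(SurrogateProblem(), NSGA2(pop_size=50), ("n_gen", 40), seed=1)
print("Approximate Pareto front size:", len(res.F))
```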
11. Navigating challenges and opportunities of machine learning in hydrogen catalysis and production processes: Beyond algorithm development
Authors: Mohd Nur Ikhmal Salehmin, Sieh Kiong Tiong, Hassan Mohamed, Dallatu Abbas Umar, Kai Ling Yu, Hwai Chyuan Ong, Saifuddin Nomanbhay, Swee Su Lim. Journal of Energy Chemistry (SCIE, EI, CAS, CSCD), 2024, Issue 12, pp. 223-252 (30 pages)
With the projected global surge in hydrogen demand, driven by increasing applications and the imperative for low-emission hydrogen, the integration of machine learning (ML) across the hydrogen energy value chain is a compelling avenue. This review uniquely focuses on harnessing the synergy between ML and computational modeling (CM) or optimization tools, as well as integrating multiple ML techniques with CM, for the synthesis of diverse hydrogen evolution reaction (HER) catalysts and various hydrogen production processes (HPPs). Furthermore, this review addresses a notable gap in the literature by offering insights, analyzing challenges, and identifying research prospects and opportunities for sustainable hydrogen production. While the literature reflects a promising landscape for ML applications in hydrogen energy domains, transitioning AI-based algorithms from controlled environments to real-world applications poses significant challenges. Hence, this comprehensive review delves into the technical, practical, and ethical considerations associated with the application of ML in HER catalyst development and HPP optimization. Overall, this review provides guidance for unlocking the transformative potential of ML in enhancing prediction efficiency and sustainability in the hydrogen production sector.
Keywords: Machine learning; Computational modeling; HER catalyst synthesis; Hydrogen energy; Hydrogen production processes; Algorithm development
12. Runoff Modeling in Ungauged Catchments Using Machine Learning Algorithm-Based Model Parameters Regionalization Methodology (Cited by 2)
Authors: Houfa Wu, Jianyun Zhang, Zhenxin Bao, Guoqing Wang, Wensheng Wang, Yanqing Yang, Jie Wang. Engineering (SCIE, EI, CAS, CSCD), 2023, Issue 9, pp. 93-104 (12 pages)
Model parameter estimation is a pivotal issue for runoff modeling in ungauged catchments. The nonlinear relationship between model parameters and catchment descriptors is a major obstacle for parameter regionalization, which is the most widely used approach. Runoff modeling was studied in 38 catchments located in the Yellow-Huai-Hai River Basin (YHHRB). The values of the Nash-Sutcliffe efficiency coefficient (NSE), coefficient of determination (R²), and percent bias (PBIAS) indicated acceptable performance of the soil and water assessment tool (SWAT) model in the YHHRB. Nine descriptors belonging to the categories of climate, soil, vegetation, and topography were used to express the catchment characteristics related to the hydrological processes. The quantitative relationships between the parameters of the SWAT model and the catchment descriptors were analyzed by six regression-based models, including linear regression (LR) equations, support vector regression (SVR), random forest (RF), k-nearest neighbor (kNN), decision tree (DT), and radial basis function (RBF). Each of the 38 catchments was assumed to be an ungauged catchment in turn; the parameters in each target catchment were then estimated by the constructed regression models based on the remaining 37 donor catchments. Furthermore, a similarity-based regionalization scheme was used for comparison with the regression-based approach. The results indicated that runoff was modeled with the highest accuracy by the SVR-based scheme in ungauged catchments. Compared with the traditional LR-based approach, the accuracy of runoff modeling in ungauged catchments was improved by the machine learning algorithms because of their outstanding capability to deal with nonlinear relationships. The performances of the different approaches were similar in humid regions, while the advantages of the machine learning techniques were more evident in arid regions. When the study area contained nested catchments, the best result was obtained with the similarity-based parameter regionalization scheme because of the high catchment density and short spatial distances. These new findings could improve flood forecasting and water resources planning in regions that lack observed data.
Keywords: Parameters estimation; Ungauged catchments; Regionalization scheme; Machine learning algorithms; Soil and water assessment tool model
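The regression-based regionalization loop is easy to reproduce in outline: treat each catchment as ungauged in turn and predict its calibrated parameter from the other donors' descriptors. A sketch with an SVR pipeline on synthetic descriptors (the real study maps nine descriptors to SWAT parameters; the data and hyperparameters here are illustrative):

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# 38 catchments x 9 descriptors (climate/soil/vegetation/topography), synthetic here
rng = np.random.default_rng(4)
descriptors = rng.normal(size=(38, 9))
# One calibrated model parameter per catchment, synthetic target
param = 1.5 * descriptors[:, 0] - 0.8 * descriptors[:, 3] + rng.normal(0, 0.2, 38)

# Treat each catchment as "ungauged" in turn: train on the other 37 donors
preds = np.empty(38)
model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.05))
for train_idx, test_idx in LeaveOneOut().split(descriptors):
    model.fit(descriptors[train_idx], param[train_idx])
    preds[test_idx] = model.predict(descriptors[test_idx])

r2 = 1 - np.sum((param - preds) ** 2) / np.sum((param - param.mean()) ** 2)
print(f"Leave-one-out R^2 of regionalized parameter: {r2:.2f}")
```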
13. Machine learning prediction model for gray-level co-occurrence matrix features of synchronous liver metastasis in colorectal cancer
Authors: Kai-Feng Yang, Sheng-Jie Li, Jun Xu, Yong-Bin Zheng. World Journal of Gastrointestinal Surgery (SCIE), 2024, Issue 6, pp. 1571-1581 (11 pages)
BACKGROUND: Synchronous liver metastasis (SLM) is a significant contributor to morbidity in colorectal cancer (CRC). There are no effective predictive device integration algorithms to predict adverse SLM events during the diagnosis of CRC. AIM: To explore the risk factors for SLM in CRC and construct a visual prediction model based on gray-level co-occurrence matrix (GLCM) features collected from magnetic resonance imaging (MRI). METHODS: Our study retrospectively enrolled 392 patients with CRC from Yichang Central People's Hospital from January 2015 to May 2023. Patients were randomly divided into a training and a validation group (3:7). The clinical parameters and GLCM features extracted from MRI were included as candidate variables. The prediction model was constructed using a generalized linear regression model, a random forest model (RFM), and an artificial neural network model. Receiver operating characteristic curves and decision curves were used to evaluate the prediction model. RESULTS: Among the 392 patients, 48 had SLM (12.24%). We obtained fourteen GLCM imaging variables for variable screening of the SLM prediction models. Inverse difference, mean sum, sum entropy, sum variance, sum of squares, energy, and difference variance were listed as candidate variables, and the prediction efficiency (area under the curve) of the subsequent RFM in the training set and internal validation set was 0.917 [95% confidence interval (95%CI): 0.866-0.968] and 0.909 (95%CI: 0.858-0.960), respectively. CONCLUSION: A predictive model combining GLCM image features with machine learning can predict SLM in CRC. This model can assist clinicians in making timely and personalized clinical decisions.
Keywords: Colorectal cancer; Synchronous liver metastasis; Gray-level co-occurrence matrix; Machine learning algorithm; Prediction model
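GLCM texture features of the kind listed above are computed from a gray-level co-occurrence matrix; scikit-image provides the matrix and several classic properties directly, while features such as sum entropy or sum variance would need to be coded by hand. A minimal sketch on a random patch standing in for an MRI region of interest:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Stand-in for an MRI tumour patch (8-bit grayscale)
rng = np.random.default_rng(5)
patch = rng.integers(0, 256, (64, 64), dtype=np.uint8)

# GLCM over 4 directions at distance 1, symmetric and normalized
glcm = graycomatrix(patch, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)

# Average each property over the four directions; these feature vectors would
# then be fed to the machine learning model (e.g., a random forest)
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ["contrast", "homogeneity", "energy", "correlation"]}
print(features)
```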
14. Genetic algorithm-optimized backpropagation neural network establishes a diagnostic prediction model for diabetic nephropathy: Combined machine learning and experimental validation in mice (Cited by 1)
Authors: WEI LIANG, ZONGWEI ZHANG, KEJU YANG, HONGTU HU, QIANG LUO, ANKANG YANG, LI CHANG, YUANYUAN ZENG. BIOCELL (SCIE), 2023, Issue 6, pp. 1253-1263 (11 pages)
Background: Diabetic nephropathy (DN) is the most common complication of type 2 diabetes mellitus and the main cause of end-stage renal disease worldwide. Diagnostic biomarkers may allow early diagnosis and treatment of DN to reduce its prevalence and delay its development. Kidney biopsy is the gold standard for diagnosing DN; however, its invasive character is its primary limitation. The machine learning approach provides a non-invasive and specific criterion for diagnosing DN, although traditional machine learning algorithms need to be improved to enhance diagnostic performance. Methods: We applied high-throughput RNA sequencing to obtain the genes related to DN tubular tissues and normal tubular tissues of mice. Machine learning algorithms — random forest, LASSO logistic regression, and principal component analysis — were then used to identify key genes (CES1G, CYP4A14, NDUFA4, ABCC4, ACE). A genetic algorithm-optimized backpropagation neural network (GA-BPNN) was then used to improve the DN diagnostic model. Results: The AUC value of the GA-BPNN model in the training dataset was 0.83, and the AUC value of the model in the validation dataset was 0.81, while the AUC values of the SVM model in the training dataset and external validation dataset were 0.756 and 0.650, respectively. Thus, the GA-BPNN gave better values than the traditional SVM model. This diagnostic model may allow personalized diagnosis and treatment of patients with DN. Immunohistochemical staining further confirmed that the tissue and cell expression of NADH dehydrogenase (ubiquinone) 1 alpha subcomplex, 4-like 2 (NDUFA4L2) in tubular tissue in DN mice was decreased. Conclusion: The GA-BPNN model has better accuracy than the traditional SVM model and may provide an effective tool for diagnosing DN.
Keywords: Diabetic nephropathy; Renal tubule; Machine learning; Diagnostic model; Genetic algorithm
15. Some Features of Neural Networks as Nonlinearly Parameterized Models of Unknown Systems Using an Online Learning Algorithm
Authors: Leonid S. Zhiteckii, Valerii N. Azarskov, Sergey A. Nikolaienko, Klaudia Yu. Solovchuk. Journal of Applied Mathematics and Physics, 2018, Issue 1, pp. 247-263 (17 pages)
This paper deals with deriving the properties of an updated neural network model that is exploited to identify an unknown nonlinear system via the standard gradient learning algorithm. The convergence of this algorithm for online training of three-layer neural networks in a stochastic environment is studied. A special case is considered in which an unknown nonlinearity can be approximated exactly by some neural network with a nonlinear activation function for its output layer. To analyze the asymptotic behavior of the learning processes, the so-called Lyapunov-like approach is utilized. As the Lyapunov function, the expected value of the squared approximation error, depending on the network parameters, is chosen. Within this approach, sufficient conditions guaranteeing the convergence of the learning algorithm with probability 1 are derived. Simulation results are presented to support the theoretical analysis.
Keywords: Neural network; Nonlinear model; Online learning algorithm; Lyapunov function; Probabilistic convergence
16. Combining deep reinforcement learning with heuristics to solve the traveling salesman problem
Authors: Li Hong, Yu Liu, Mengqiao Xu, Wenhui Deng. Chinese Physics B, 2025, Issue 1, pp. 96-106 (11 pages)
Recent studies employing deep learning to solve the traveling salesman problem (TSP) have mainly focused on learning construction heuristics. Such methods can improve TSP solutions but still depend on additional programs. However, methods that focus on learning improvement heuristics to iteratively refine solutions remain insufficient. Traditional improvement heuristics are guided by a manually designed search strategy and may achieve only limited improvements. This paper proposes a novel framework for learning improvement heuristics, which automatically discovers better improvement policies for heuristics to iteratively solve the TSP. Our framework first designs a new architecture based on a transformer model to parameterize the policy network, introducing an action-dropout layer to prevent action selection from overfitting. It then proposes a deep reinforcement learning approach integrating a simulated annealing mechanism (named RL-SA) to learn the pairwise selection policy, aiming to improve the 2-opt algorithm's performance. RL-SA leverages the whale optimization algorithm to generate initial solutions for better sampling efficiency and uses a Gaussian perturbation strategy to tackle the sparse reward problem of reinforcement learning. The experimental results show that the proposed approach is significantly superior to state-of-the-art learning-based methods and further reduces the gap between learning-based methods and highly optimized solvers on the benchmark datasets. Moreover, our pre-trained model M can be applied to guide the SA algorithm (named M-SA (ours)), which performs better than existing deep models on small-, medium-, and large-scale TSPLIB datasets. Additionally, M-SA (ours) achieves excellent generalization performance on a real-world dataset of global liner shipping routes, with optimization percentages in distance reduction ranging from 3.52% to 17.99%.
Keywords: Traveling salesman problem; Deep reinforcement learning; Simulated annealing algorithm; Transformer model; Whale optimization algorithm
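The 2-opt move that the learned policy selects pairs for is the classic segment-reversal improvement heuristic. Below is a plain (non-learned) 2-opt baseline for reference; in the paper, the reinforcement-learning policy replaces the exhaustive (i, j) scan:

```python
import numpy as np

def tour_length(tour, dist):
    # Sum of edge lengths around the closed tour
    return dist[tour, np.roll(tour, -1)].sum()

def two_opt(tour, dist):
    """Classic 2-opt local search: reverse a segment whenever it shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                new = tour.copy()
                new[i:j] = new[i:j][::-1]   # reverse segment [i, j)
                if tour_length(new, dist) < tour_length(tour, dist):
                    tour, improved = new, True
    return tour

rng = np.random.default_rng(6)
pts = rng.uniform(size=(30, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
tour = rng.permutation(30)
print("before:", tour_length(tour, dist).round(3))
print("after :", tour_length(two_opt(tour, dist), dist).round(3))
```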
17. Selective Ensemble Extreme Learning Machine Modeling of Effluent Quality in Wastewater Treatment Plants (Cited by 9)
Authors: Li-Jie Zhao (1,2), Tian-You Chai (2), De-Cheng Yuan (1). Affiliations: 1. College of Information Engineering, Shenyang University of Chemical Technology, Shenyang 110042, China; 2. State Key Laboratory of Synthetical Automation for Process Industries, Northeastern University, Shenyang 110189, China. International Journal of Automation and Computing (EI), 2012, Issue 6, pp. 627-633 (7 pages)
Real-time and reliable measurements of effluent quality are essential to improve operating efficiency and reduce energy consumption in the wastewater treatment process. Due to the low accuracy and unstable performance of traditional effluent quality measurements, we propose a selective ensemble extreme learning machine modeling method to enhance effluent quality predictions. The extreme learning machine algorithm is inserted into a selective ensemble frame as the component model, since it runs much faster and provides better generalization performance than other popular learning algorithms. Ensemble extreme learning machine models overcome the variations among different trials of simulations for a single model. Selective ensemble based on a genetic algorithm is used to further exclude bad components from all the available ensembles, in order to reduce computational complexity and improve generalization performance. The proposed method is verified with data from an industrial wastewater treatment plant located in Shenyang, China. Experimental results show that the proposed method has relatively stronger generalization and higher accuracy than partial least squares, neural network partial least squares, a single extreme learning machine, and an ensemble extreme learning machine model.
Keywords: Wastewater treatment process; Effluent quality prediction; Extreme learning machine; Selective ensemble model; Genetic algorithm
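An extreme learning machine is a single-hidden-layer network whose input weights are random and whose output weights are solved in closed form, which is why it trains quickly enough to serve as an ensemble component. A minimal sketch with simple averaging; the paper's GA-based selective step would additionally search for the best-performing subset of components:

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine: random hidden layer + least-squares output."""
    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)    # random feature map
        self.beta = np.linalg.pinv(H) @ y   # output weights in closed form
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Toy effluent-quality regression; an ensemble averages several ELM trials
rng = np.random.default_rng(7)
X = rng.normal(size=(300, 6))
y = X[:, 0] - 0.5 * X[:, 2] ** 2 + 0.1 * rng.normal(size=300)
ensemble = [ELM(n_hidden=80, seed=s).fit(X[:200], y[:200]) for s in range(10)]
pred = np.mean([m.predict(X[200:]) for m in ensemble], axis=0)
print("ensemble test MSE:", np.mean((pred - y[200:]) ** 2).round(4))
```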
18. Iterative Learning Fault Diagnosis Algorithm for Non-uniform Sampling Hybrid System (Cited by 2)
Authors: Hongfeng Tao, Dapeng Chen, Huizhong Yang. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2017, Issue 3, pp. 534-542 (9 pages)
For a class of non-uniform output sampling hybrid systems with actuator faults and bounded disturbances, an iterative learning fault diagnosis algorithm is proposed. Firstly, in order to measure the impact of a fault on the system between consecutive output sampling instants, the actual fault function is transformed to obtain an equivalent fault model by using the integral mean value theorem; the non-uniform sampling hybrid system is then converted to a continuous system with time-varying delay based on the output delay method. Afterwards, an observer-based fault diagnosis filter with a virtual fault is designed to estimate the equivalent fault, and an iterative learning regulation algorithm is chosen to update the virtual fault repeatedly so that it approximates the actual equivalent fault after some iterative learning trials; the algorithm can thus detect and estimate system faults adaptively. Simulation results on an electro-mechanical control system model with different types of faults illustrate the feasibility and effectiveness of this algorithm.
Keywords: Equivalent fault model; Fault diagnosis; Iterative learning algorithm; Non-uniform sampling hybrid system; Virtual fault
19. Missile-Target Situation Assessment Model Based on Reinforcement Learning (Cited by 5)
Authors: ZHANG Yun, LU Runyan, CAI Yunze. Journal of Shanghai Jiao Tong University (Science) (EI), 2020, Issue 5, pp. 561-568 (8 pages)
In situation assessment (SA) of a missile versus a target fighter, traditional SA models generally suffer from strong subjectivity and poor dynamic adaptability. This paper treats SA as an expectation of future returns and establishes a missile-target simulation battle model. The actor-critic (AC) algorithm in reinforcement learning (RL) is used to train the evaluation network, and a missile-target SA model is established through simulation battle training. Simulation and comparative experiments show that the model can effectively estimate the expected effect of a missile attack under the current situation, providing an effective basis for missile attack decisions.
Keywords: Situation assessment (SA); Battle model; Reinforcement learning (RL); Actor-critic (AC) algorithm
20. Ultra-short-term wind power prediction using multiple LSTM networks optimized by the Q-learning algorithm
Authors: Xin Peng, Li Chaoran, Zhang Xun, Liu Peirui, Yuan Chenglei. Journal of Jilin Institute of Chemical Technology (吉林化工学院学报), 2024, Issue 9, pp. 1-8 (8 pages)
To address the difficulty of feature selection and the instability of single models in wind power prediction, an ultra-short-term wind power prediction method combining multiple LSTM networks with the Q-learning algorithm (Q_L-L-C-A) is proposed. The method uses the maximal information coefficient (MIC) to screen features of the wind power data and applies variational mode decomposition (VMD) to decompose the wind farm power data into multiple frequency modes that serve as additional features. The screened and decomposed data are used as model inputs for prediction with three network models: LSTM, CNN-LSTM, and Attention-LSTM. On this basis, the Q-learning algorithm dynamically allocates weights to the prediction results of the three models to obtain a better combined prediction. To verify the prediction performance of the proposed Q_L-L-C-A model, measured data from a wind farm were used as model input, and comparative experiments were conducted against six models. The experimental results show that the root mean square error and mean absolute percentage error of the proposed Q_L-L-C-A model are better than those of the LSTM, CNN-LSTM, and Attention-LSTM models, and that the Q_L-L-C-A model achieves higher accuracy and stability in ultra-short-term wind power prediction.
Keywords: Power prediction; Combined model; Q-learning algorithm; Deep learning; Maximal information coefficient; Variational mode decomposition
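The combination step — dynamically re-weighting the three LSTM variants' forecasts — can be illustrated with a simple error-driven weighting rule. The sketch below uses softmax weights over recent errors as a simplified stand-in for the paper's Q-learning allocation, on synthetic forecasts:

```python
import numpy as np

# Stand-in forecasts from three models (LSTM / CNN-LSTM / Attention-LSTM)
rng = np.random.default_rng(8)
true = 100 + 10 * np.sin(np.arange(200) / 10)
forecasts = np.stack([true + rng.normal(0, s, 200) for s in (3.0, 5.0, 4.0)])

# Dynamic weighting: softmax over negative recent MAE, updated every step
# (a simplified stand-in for the paper's Q-learning weight allocation)
window, combined = 20, np.empty(200)
weights = np.full(3, 1 / 3)
for t in range(200):
    if t >= window:
        recent_mae = np.mean(np.abs(forecasts[:, t - window:t] - true[t - window:t]), axis=1)
        weights = np.exp(-recent_mae) / np.exp(-recent_mae).sum()
    combined[t] = weights @ forecasts[:, t]

for name, f in [("best single", forecasts[0]), ("combined", combined)]:
    print(name, "MAE:", np.mean(np.abs(f - true)).round(3))
```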