Journal Articles
12,440 articles found
Flood predictions from metrics to classes by multiple machine learning algorithms coupling with clustering-deduced membership degree
1
Authors: ZHAI Xiaoyan ZHANG Yongyong XIA Jun ZHANG Yongqiang TANG Qiuhong SHAO Quanxi CHEN Junxu ZHANG Fan. 《Journal of Geographical Sciences》, 2026, Issue 1, pp. 149-176 (28 pages)
Accurate prediction of flood events is important for flood control and risk management. Machine learning techniques have contributed greatly to advances in flood prediction, and existing studies have mainly focused on predicting flood resource variables using single or hybrid machine learning techniques. However, class-based flood predictions have rarely been investigated, even though they can aid in quickly diagnosing comprehensive flood characteristics and proposing targeted management strategies. This study proposed a prediction approach for flood regime metrics and event classes that couples machine learning algorithms with clustering-deduced membership degrees. Five algorithms were adopted for this exploration. Results showed that the class membership degrees accurately determined event classes, with class hit rates of up to 100% against the four classes clustered from nine regime metrics. The nonlinear algorithms (Multiple Linear Regression, Random Forest, and least squares-Support Vector Machine) outperformed the linear techniques (Multiple Linear Regression and Stepwise Regression) in predicting flood regime metrics. The proposed approach predicted flood event classes well, with average class hit rates of 66.0%-85.4% and 47.2%-76.0% in the calibration and validation periods, respectively, particularly for the slow and late flood events. The predictive capability of the proposed approach for flood regime metrics and classes was considerably stronger than that of the hydrological modeling approach.
Keywords: flood regime metrics; class prediction; machine learning algorithms; hydrological model
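To make the coupling of regime-metric clustering with membership degrees more concrete, the following minimal sketch (not the authors' implementation; the synthetic event metrics, the number of classes, and the inverse-distance membership rule are illustrative assumptions) clusters flood events on nine regime metrics, derives soft membership degrees from distances to the cluster centroids, and assigns each event to the class with the largest membership.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
regime_metrics = rng.normal(size=(200, 9))        # placeholder: 200 events x 9 flood regime metrics

# Cluster events into four classes on standardized regime metrics.
X = StandardScaler().fit_transform(regime_metrics)
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

# Derive soft membership degrees from inverse distances to the cluster centroids.
dist = kmeans.transform(X)                        # (n_events, 4) distances to centroids
inv = 1.0 / (dist + 1e-9)
membership = inv / inv.sum(axis=1, keepdims=True) # rows sum to 1

# The predicted class is the one with the largest membership degree.
pred_class = membership.argmax(axis=1)
hit_rate = (pred_class == kmeans.labels_).mean()
print(f"class hit rate against clustered labels: {hit_rate:.1%}")
```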
Bearing capacity prediction of open caissons in two-layered clays using five tree-based machine learning algorithms (Cited: 2)
2
Authors: Rungroad Suppakul Kongtawan Sangjinda Wittaya Jitchaijaroen Natakorn Phuksuksakul Suraparb Keawsawasvong Peem Nuaklong. 《Intelligent Geoengineering》, 2025, Issue 2, pp. 55-65 (11 pages)
Open caissons are widely used in foundation engineering because of their load-bearing efficiency and adaptability in diverse soil conditions. However, accurately predicting their undrained bearing capacity in layered soils remains a complex challenge. This study presents a novel application of five tree-based ensemble machine learning (ML) algorithms - random forest (RF), gradient boosting machine (GBM), extreme gradient boosting (XGBoost), adaptive boosting (AdaBoost), and categorical boosting (CatBoost) - to predict the undrained bearing capacity factor (Nc) of circular open caissons embedded in two-layered clay on the basis of results from finite element limit analysis (FELA). The input dataset consists of 1188 numerical simulations using the Tresca failure criterion, varying in geometrical and soil parameters. The FELA was performed via OptumG2 software with adaptive meshing techniques and verified against existing benchmark studies. The ML models were trained on 70% of the dataset and tested on the remaining 30%. Their performance was evaluated using six statistical metrics: coefficient of determination (R²), mean absolute error (MAE), root mean squared error (RMSE), index of scatter (IOS), RMSE-to-standard deviation ratio (RSR), and variance explained factor (VAF). The results indicate that all the models achieved high accuracy, with R² values exceeding 97.6% and RMSE values below 0.02. Among them, AdaBoost and CatBoost consistently outperformed the other methods across both the training and testing datasets, demonstrating superior generalizability and robustness. The proposed ML framework offers an efficient, accurate, and data-driven alternative to traditional methods for estimating caisson capacity in stratified soils. This approach can aid in reducing computational costs while improving reliability in the early stages of foundation design.
Keywords: Two-layered clay; Open caisson; Tree-based algorithms; FELA; Machine learning
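A hedged sketch of the workflow described above, with synthetic stand-in data in place of the 1188 FELA simulations: the five tree-based ensembles are fitted on a 70/30 split and scored with R², MAE, and RMSE. It assumes the `xgboost` and `catboost` packages are installed; all parameter choices are illustrative, not those of the study.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor, AdaBoostRegressor
from xgboost import XGBRegressor          # assumed installed
from catboost import CatBoostRegressor    # assumed installed

rng = np.random.default_rng(1)
X = rng.uniform(size=(1188, 4))                                          # placeholder geometric/soil parameters
y = 5 + 3 * X[:, 0] + 2 * X[:, 1] * X[:, 2] + rng.normal(0, 0.1, 1188)   # placeholder bearing capacity factor Nc

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.30, random_state=1)  # 70/30 split

models = {
    "RF": RandomForestRegressor(random_state=1),
    "GBM": GradientBoostingRegressor(random_state=1),
    "AdaBoost": AdaBoostRegressor(random_state=1),
    "XGBoost": XGBRegressor(random_state=1),
    "CatBoost": CatBoostRegressor(verbose=0, random_state=1),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    print(f"{name:8s}  R2={r2_score(y_te, pred):.3f}  "
          f"MAE={mean_absolute_error(y_te, pred):.3f}  RMSE={rmse:.3f}")
```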
A Literature Review on Model Conversion, Inference, and Learning Strategies in EdgeML with TinyML Deployment
3
Authors: Muhammad Arif Muhammad Rashid. 《Computers, Materials & Continua》, 2025, Issue 4, pp. 13-64 (52 pages)
Edge Machine Learning (EdgeML) and Tiny Machine Learning (TinyML) are fast-growing fields that bring machine learning to resource-constrained devices, allowing real-time data processing and decision-making at the network's edge. However, the complexity of model conversion techniques, diverse inference mechanisms, and varied learning strategies make designing and deploying these models challenging. Additionally, deploying TinyML models on resource-constrained hardware with specific software frameworks has broadened EdgeML's applications across various sectors. These factors underscore the necessity for a comprehensive literature review, as current reviews do not systematically encompass the most recent findings on these topics. Consequently, this review provides a comprehensive overview of state-of-the-art techniques in model conversion, inference mechanisms, and learning strategies within EdgeML, and of deploying these models on resource-constrained edge devices using TinyML. It identifies 90 research articles published between 2018 and 2025, categorizing them into two main areas: (1) model conversion, inference, and learning strategies in EdgeML and (2) deploying TinyML models on resource-constrained hardware using specific software frameworks. In the first category, the synthesis of selected research articles compares and critically reviews various model conversion techniques, inference mechanisms, and learning strategies. In the second category, the synthesis identifies and elaborates on major development boards, software frameworks, sensors, and algorithms used in various applications across six major sectors. As a result, this article provides valuable insights for researchers, practitioners, and developers. It assists them in choosing suitable model conversion techniques, inference mechanisms, learning strategies, hardware development boards, software frameworks, sensors, and algorithms tailored to their specific needs and applications across various sectors.
Keywords: Edge machine learning; tiny machine learning; model compression; inference; learning algorithms
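Model conversion is the part of this pipeline that is easiest to show in a few lines. The sketch below converts a small Keras model to TensorFlow Lite with default post-training quantization, one common EdgeML-to-TinyML conversion path; the toy model and output file name are placeholders and the snippet is not tied to any particular paper in the review.

```python
import tensorflow as tf

# A small Keras model standing in for an EdgeML workload.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Convert to TensorFlow Lite with default post-training quantization,
# a typical conversion step before TinyML deployment on a microcontroller.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
print(f"quantized model size: {len(tflite_model)} bytes")
```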
Exploring the Effectiveness of Machine Learning and Deep Learning Algorithms for Sentiment Analysis: A Systematic Literature Review
4
Authors: Jungpil Shin Wahidur Rahman Tanvir Ahmed Bakhtiar Mazrur Md.Mohsin Mia Romana Idress Ekfa Md.Sajib Rana Pankoo Kim. 《Computers, Materials & Continua》, 2025, Issue 9, pp. 4105-4153 (49 pages)
Sentiment Analysis, a significant domain within Natural Language Processing (NLP), focuses on extracting and interpreting subjective information - such as emotions, opinions, and attitudes - from textual data. With the increasing volume of user-generated content on social media and digital platforms, sentiment analysis has become essential for deriving actionable insights across various sectors. This study presents a systematic literature review of sentiment analysis methodologies, encompassing traditional machine learning algorithms, lexicon-based approaches, and recent advancements in deep learning techniques. The review follows a structured protocol comprising three phases: planning, execution, and analysis/reporting. During the execution phase, 67 peer-reviewed articles were initially retrieved, with 25 meeting predefined inclusion and exclusion criteria. The analysis phase involved a detailed examination of each study's methodology, experimental setup, and key contributions. Among the deep learning models evaluated, Long Short-Term Memory (LSTM) networks were identified as the most frequently adopted architecture for sentiment classification tasks. This review highlights current trends, technical challenges, and emerging opportunities in the field, providing valuable guidance for future research and development in applications such as market analysis, public health monitoring, financial forecasting, and crisis management.
Keywords: Natural Language Processing (NLP); Machine learning (ML); sentiment analysis; deep learning; textual data
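Since LSTM networks are identified as the most frequently adopted architecture, a minimal hedged Keras sketch of a binary sentiment classifier is shown below; the random integer-encoded "reviews", vocabulary size, and layer sizes are placeholders rather than the setup of any reviewed study.

```python
import numpy as np
import tensorflow as tf

vocab_size, max_len = 10_000, 100
# Placeholder integer-encoded reviews and binary sentiment labels.
rng = np.random.default_rng(2)
x = rng.integers(0, vocab_size, size=(500, max_len))
y = rng.integers(0, 2, size=(500,))

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, 64),       # token embeddings
    tf.keras.layers.LSTM(64),                        # the architecture most adopted in the reviewed studies
    tf.keras.layers.Dense(1, activation="sigmoid"),  # positive vs. negative sentiment
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=2, batch_size=32, validation_split=0.2, verbose=0)
```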
Methodology for Detecting Non-Technical Energy Losses Using an Ensemble of Machine Learning Algorithms
5
Authors: Irbek Morgoev Roman Klyuev Angelika Morgoeva. 《Computer Modeling in Engineering & Sciences》, 2025, Issue 5, pp. 1381-1399 (19 pages)
Non-technical losses (NTL) of electric power are a serious problem for electric distribution companies. Their resolution affects the cost, stability, reliability, and quality of the supplied electricity. The widespread use of advanced metering infrastructure (AMI) and Smart Grid technology allows all participants in the distribution grid to store and track electricity consumption. In this research, a machine learning model is developed that analyzes and predicts the probability of NTL for each consumer of the distribution grid based on daily electricity consumption readings. The model is an ensemble meta-algorithm (stacking) that generalizes the algorithms of random forest, LightGBM, and a homogeneous ensemble of artificial neural networks. The superior accuracy of the proposed meta-algorithm over the base classifiers is experimentally confirmed on the test sample. Owing to its good accuracy (ROC-AUC = 0.88), the model can serve as the methodological basis of a decision support system whose purpose is to form a sample of suspected NTL sources. Such a sample will allow the top management of electric distribution companies to increase the efficiency of inspection raids, making them targeted and accurate, which should contribute to the fight against NTL and the sustainable development of the electric power industry.
Keywords: Non-technical losses; smart grid; machine learning; electricity theft; fraud; ensemble algorithm; hybrid method; forecasting; classification; supervised learning
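A minimal sketch of the stacking idea described above: random forest, LightGBM, and a neural network as base classifiers generalized by a logistic-regression meta-learner, scored with ROC-AUC. The synthetic imbalanced data stands in for daily consumption readings, the `lightgbm` package is assumed to be installed, and the paper's homogeneous ANN ensemble is simplified to a single MLP here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from lightgbm import LGBMClassifier   # assumed installed

# Placeholder: rows are consumers, columns are daily consumption features, label 1 = suspected NTL.
X, y = make_classification(n_samples=2000, n_features=30, weights=[0.9, 0.1], random_state=3)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=3)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(random_state=3)),
        ("lgbm", LGBMClassifier(random_state=3)),
        ("ann", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=3)),
    ],
    final_estimator=LogisticRegression(),   # meta-learner generalizing the base classifiers
)
stack.fit(X_tr, y_tr)
proba = stack.predict_proba(X_te)[:, 1]
print(f"stacking ROC-AUC: {roc_auc_score(y_te, proba):.3f}")
```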
A storage optimization decision method for multi-source power data based on the fusion of random forest and Q-learning
6
Authors: 叶学顺 贾东梨 周俊 唐英 贾梓豪. 《科学技术与工程》 (PKU Core), 2026, Issue 3, pp. 1065-1074 (10 pages)
Storing large-scale and diverse power data faces bottlenecks of low efficiency and insufficient memory capacity. Traditional storage optimization methods such as data indexing and data compression each have strengths and weaknesses, and how to apply them effectively to power data storage remains a research difficulty. To address this problem, a storage optimization decision method for multi-source power data that fuses random forest and Q-learning is proposed. Its key techniques are as follows. First, a storage optimization strategy decision model based on an improved random forest algorithm is proposed; an information gain method is introduced to jointly evaluate the influence of factors such as data access frequency, query time, storage speed, and data redundancy rate, and to decide among direct storage, indexed storage, and compressed storage. Second, a data storage algorithm decision model based on an improved Q-learning algorithm is proposed; a multi-scale learning mechanism, a prioritized experience replay mechanism, and a positive/negative reward mechanism are introduced to decide which indexing algorithm to use for indexed storage and which compression algorithm to use for compressed storage. The method effectively combines the technical advantages of data indexing and data compression, substantially improves data storage efficiency, saves storage space, and provides a new solution for managing large-scale multi-source power data.
Keywords: random forest algorithm; Q-learning algorithm; data storage optimization; data indexing algorithm; data compression algorithm
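The Q-learning half of the method can be illustrated with a small tabular sketch: an agent learns which storage action (direct, indexed, or compressed) to take for a discretized data profile. The state definition and the reward trading query speed against storage footprint are invented for illustration and are not the paper's improved Q-learning design (no multi-scale learning or prioritized replay here).

```python
import numpy as np

rng = np.random.default_rng(4)
actions = ["store_direct", "store_indexed", "store_compressed"]
n_states, n_actions = 8, len(actions)      # states: discretized data-access-frequency profiles
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def reward(state, action):
    """Placeholder reward trading off query speed against storage footprint."""
    access_freq = state / (n_states - 1)          # 0 = cold data, 1 = hot data
    speed = [1.0, 0.8, 0.4][action]               # compressed data is slower to query
    footprint = [1.0, 0.9, 0.4][action]           # compressed data is cheapest to keep
    return access_freq * speed + (1 - access_freq) * (1 - footprint)

for episode in range(5000):
    s = rng.integers(n_states)
    a = rng.integers(n_actions) if rng.random() < epsilon else int(Q[s].argmax())
    r = reward(s, a)
    s_next = rng.integers(n_states)                # toy transition: data profiles drift randomly
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

for s in range(n_states):
    print(f"state {s}: choose {actions[int(Q[s].argmax())]}")
```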
Neuromorphic devices assisted by machine learning algorithms
7
Authors: Ziwei Huo Qijun Sun Jinran Yu Yichen Wei Yifei Wang Jeong Ho Cho Zhong Lin Wang. 《International Journal of Extreme Manufacturing》, 2025, Issue 4, pp. 178-215 (38 pages)
Neuromorphic computing extends beyond sequential processing modalities and outperforms traditional von Neumann architectures in implementing more complicated tasks, e.g., pattern processing, image recognition, and decision making. It features parallel interconnected neural networks, high fault tolerance, robustness, autonomous learning capability, and ultralow energy dissipation. The algorithms of artificial neural networks (ANNs) have also been widely used because of their facile self-organization and self-learning capabilities, which mimic those of the human brain. To some extent, ANNs reflect several basic functions of the human brain and can be efficiently integrated into neuromorphic devices to perform neuromorphic computations. This review highlights recent advances in neuromorphic devices assisted by machine learning algorithms. First, the basic structure of simple neuron models inspired by biological neurons and the information processing in simple neural networks are discussed in particular. Second, the fabrication and research progress of neuromorphic devices are presented with regard to materials and structures. Furthermore, the fabrication of neuromorphic devices, including stand-alone neuromorphic devices, neuromorphic device arrays, and integrated neuromorphic systems, is discussed and demonstrated with reference to respective studies. The applications of neuromorphic devices assisted by machine learning algorithms in different fields are categorized and investigated. Finally, perspectives, suggestions, and potential solutions to the current challenges of neuromorphic devices are provided.
Keywords: neuromorphic devices; machine learning algorithms; artificial synapses; memristors; field-effect transistors
A Comparison among Different Machine Learning Algorithms in Land Cover Classification Based on the Google Earth Engine Platform: The Case Study of Hung Yen Province, Vietnam
8
Authors: Le Thi Lan Tran Quoc Vinh Phạm Quy Giang. 《Journal of Environmental & Earth Sciences》, 2025, Issue 1, pp. 132-139 (8 pages)
Based on the Google Earth Engine cloud computing platform, this study employed three algorithms - Support Vector Machine, Random Forest, and Classification and Regression Tree - to classify the current status of land cover in Hung Yen province of Vietnam using Landsat 8 OLI satellite images, a free data source with reasonable spatial and temporal resolution. The results show that all three algorithms classified five basic land-cover types (Rice land, Water bodies, Perennial vegetation, Annual vegetation, and Built-up areas) well, with overall accuracies greater than 80% and Kappa coefficients greater than 0.8. Among the three algorithms, SVM achieved the highest accuracy, with an overall accuracy of 86% and a Kappa coefficient of 0.88. Land cover classification based on the SVM algorithm shows that Built-up areas cover the largest area with nearly 31,495 ha, accounting for more than 33.8% of the total natural area, followed by Rice land and Perennial vegetation, which cover over 30,767 ha (33%) and 15,637 ha (16.8%), respectively. Water bodies and Annual vegetation cover the smallest areas with 8,820 ha (9.5%) and 6,302 ha (6.8%), respectively. The results of this study can be used for land use management and planning as well as other natural resource and environmental management purposes in the province.
Keywords: Google Earth Engine; Land Cover; Landsat; Machine Learning Algorithm
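The study runs its classifiers inside Google Earth Engine; the hedged sketch below reproduces only the comparison idea offline with scikit-learn (a decision tree standing in for CART), reporting overall accuracy and Cohen's kappa on placeholder band values rather than real Landsat 8 training samples.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier          # CART-style classifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(5)
# Placeholder training samples: 7 Landsat 8 band values per pixel, 5 land-cover classes.
X = rng.uniform(0, 1, size=(1500, 7))
y = rng.integers(0, 5, size=1500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=5)
models = {
    "SVM": SVC(kernel="rbf"),
    "RF": RandomForestClassifier(n_estimators=100, random_state=5),
    "CART": DecisionTreeClassifier(random_state=5),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name:5s} overall accuracy={accuracy_score(y_te, pred):.2f} "
          f"kappa={cohen_kappa_score(y_te, pred):.2f}")
```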
Optimization Algorithms Based on Double-Integral Coevolutionary Neurodynamics in Deep Learning
9
Authors: Dan Su Jie Han Chunhua Yang Weihua Gui. 《IEEE/CAA Journal of Automatica Sinica》, 2025, Issue 6, pp. 1236-1245 (10 pages)
Deep neural networks are increasingly exposed to attack threats, and at the same time the need for privacy protection is growing. As a result, the challenge of developing neural networks that are both robust and capable of strong generalization while maintaining privacy becomes pressing. Training neural networks under privacy constraints is one way to minimize privacy leakage, and one way to do this is to add noise to the data or model. However, noise may cause gradient directions to deviate from the optimal trajectory during training, leading to unstable parameter updates, slow convergence, and reduced model generalization capability. To overcome these challenges, we propose an optimization algorithm based on double-integral coevolutionary neurodynamics (DICND), designed to accelerate convergence and improve generalization in noisy conditions. Theoretical analysis proves the global convergence of the DICND algorithm and demonstrates its ability to converge to near-global minima efficiently under noisy conditions. Numerical simulations and image classification experiments further confirm the DICND algorithm's significant advantages in enhancing generalization performance.
Keywords: Coevolutionary neurodynamics (CND); deep learning; generalization; noise resistance; optimization algorithm
Reaction process optimization based on interpretable machine learning and metaheuristic optimization algorithms
10
Authors: Dian Zhang Bo Ouyang Zheng-Hong Luo. 《Chinese Journal of Chemical Engineering》, 2025, Issue 8, pp. 77-85 (9 pages)
The optimization of reaction processes is crucial for the green, efficient, and sustainable development of the chemical industry. However, how to address the problems posed by multiple variables, nonlinearities, and uncertainties during optimization remains a formidable challenge. In this study, a strategy combining interpretable machine learning with metaheuristic optimization algorithms is employed to optimize the reaction process. First, experimental data from a biodiesel production process are collected to establish a database. These data are then used to construct a predictive model based on artificial neural network (ANN) models. Subsequently, interpretable machine learning techniques are applied for quantitative analysis and verification of the model. Finally, four metaheuristic optimization algorithms are coupled with the ANN model to achieve the desired optimization. The results show that the methanol:palm fatty acid distillate (PFAD) molar ratio contributes the most to the reaction outcome, accounting for 41%. The ANN-simulated annealing (SA) hybrid method is the most suitable for this optimization, and the optimal process parameters are a catalyst concentration of 3.00% (mass), a methanol:PFAD molar ratio of 8.67, and a reaction time of 30 min. This study provides deeper insights into reaction process optimization, which will facilitate future applications in various reaction optimization processes.
Keywords: Reaction process optimization; Interpretable machine learning; Metaheuristic optimization algorithm; Biodiesel
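A minimal sketch of the ANN-SA coupling described above: an ANN surrogate is fitted to placeholder experimental data, and simulated annealing then searches the input space for the conditions that maximize the predicted yield. The toy response surface, variable bounds, and network size are assumptions, not the study's data or settings.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import dual_annealing

rng = np.random.default_rng(6)
# Placeholder experiments: catalyst concentration (%), methanol:PFAD molar ratio, reaction time (min).
X = rng.uniform([1.0, 4.0, 10.0], [4.0, 12.0, 60.0], size=(120, 3))
yield_pct = (90 - 0.5 * (X[:, 0] - 3.0) ** 2 - 0.3 * (X[:, 1] - 8.5) ** 2
             - 0.01 * (X[:, 2] - 30) ** 2 + rng.normal(0, 0.5, 120))   # toy response surface

# ANN surrogate of the reaction outcome.
ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=6).fit(X, yield_pct)

# Simulated annealing minimizes, so negate the surrogate prediction to maximize yield.
bounds = [(1.0, 4.0), (4.0, 12.0), (10.0, 60.0)]
result = dual_annealing(lambda x: -ann.predict(x.reshape(1, -1))[0], bounds, seed=6)
print("optimal (catalyst %, molar ratio, time min):", np.round(result.x, 2))
print("predicted yield:", round(-result.fun, 2))
```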
Multi-mode adaptive optimized combination forecasting of photovoltaic power based on Q-Learning
11
Authors: 隗知初 杨苹 周钱雨凡 陈文皓 万思洋 崔嘉雁. 《电力工程技术》 (PKU Core), 2026, Issue 1, pp. 115-124, 163 (11 pages)
To address the strong volatility and high randomness of photovoltaic power series, this paper proposes a Q-Learning-based multi-mode adaptive optimized combination forecasting model for photovoltaic power. First, a variational mode decomposition method tuned by the whale optimization algorithm decomposes the original photovoltaic power series into different sub-modes, and an ensemble feature-screening model identifies the meteorological factors to which each sub-mode series is most sensitive. Then, four base forecasting models are built: a back-propagation neural network, a bidirectional long short-term memory network, a gated recurrent unit network, and a temporal convolutional network. Since different models differ in their ability to predict sub-series with different frequency characteristics, the Q-Learning algorithm adaptively selects the optimal combination of base models for each mode. Finally, the predictions of the sub-modes are superimposed and reconstructed to obtain the final forecast, which is validated on a high-resolution photovoltaic meteorology and power dataset. The results show that, compared with single models, the proposed model reduces the mean absolute error of prediction by 16.18% and the mean squared error by 17.00%.
Keywords: whale optimization algorithm; variational mode decomposition; Q-learning; power forecasting; combination model; photovoltaic power generation
Development and validation of machine learning-based in-hospital mortality predictive models for acute aortic syndrome in emergency departments
12
Authors: Yuanwei Fu Yilan Yang Hua Zhang Daidai Wang Qiangrong Zhai Lanfang Du Nijiati Muyesai Yanxia Gao Qingbian Ma. 《World Journal of Emergency Medicine》, 2026, Issue 1, pp. 43-49 (7 pages)
BACKGROUND: This study aims to develop and validate a machine learning-based in-hospital mortality predictive model for acute aortic syndrome (AAS) in the emergency department (ED) and to derive a simplified version suitable for rapid clinical application. METHODS: In this multi-center retrospective cohort study, AAS patient data from three hospitals were analyzed. The modeling cohort included data from the First Affiliated Hospital of Zhengzhou University and the People's Hospital of Xinjiang Uygur Autonomous Region, with Peking University Third Hospital data serving as the external test set. Four machine learning algorithms - logistic regression (LR), multilayer perceptron (MLP), Gaussian naive Bayes (GNB), and random forest (RF) - were used to develop predictive models based on 34 early-accessible clinical variables. A simplified model was then derived based on five key variables (Stanford type, pericardial effusion, asymmetric peripheral arterial pulsation, decreased bowel sounds, and dyspnea) via Least Absolute Shrinkage and Selection Operator (LASSO) regression to improve ED applicability. RESULTS: A total of 929 patients were included in the modeling cohort, and 210 were included in the external test set. Four machine learning models based on 34 clinical variables were developed, achieving internal and external validation AUCs of 0.85-0.90 and 0.73-0.85, respectively. The simplified model incorporating five key variables demonstrated internal and external validation AUCs of 0.71-0.86 and 0.75-0.78, respectively. Both models showed robust calibration and predictive stability across datasets. CONCLUSION: Both kinds of models were built with machine learning tools and showed sound predictive performance and extrapolation ability.
Keywords: Emergency department; Acute aortic syndrome; Mortality; Predictive model; Machine learning algorithms
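The two-stage modelling idea (a wide variable set shrunk by LASSO, then a compact model refit on the retained variables) can be sketched as follows with synthetic data in place of the clinical cohort; the L1-penalized logistic regression and the penalty strength are illustrative choices, not the study's exact pipeline.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Placeholder: 34 early-accessible clinical variables, binary in-hospital mortality label.
X, y = make_classification(n_samples=900, n_features=34, n_informative=6, random_state=7)
X = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=7)

# Stage 1: L1-penalized (LASSO-style) logistic regression to select a handful of variables.
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X_tr, y_tr)
selected = np.flatnonzero(lasso.coef_[0])
print("retained variable indices:", selected)

# Stage 2: refit a compact logistic model on the retained variables only.
simple = LogisticRegression().fit(X_tr[:, selected], y_tr)
auc = roc_auc_score(y_te, simple.predict_proba(X_te[:, selected])[:, 1])
print(f"simplified-model AUC on held-out data: {auc:.2f}")
```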
Synergistic machine learning and DFT screening strategy: Accelerating discovery of efficient perovskite passivators
13
Authors: Jianghao Liu Hongyan Lv Pengyang Wang Guofu Hou Ying Zhao Xiaodan Zhang Qian Huang. 《Journal of Energy Chemistry》, 2026, Issue 1, pp. 56-63, I0003 (9 pages)
Efficient surface passivation is critical for achieving high-performance perovskite solar cells (PSCs), yet the discovery of optimal passivators remains a time-consuming, trial-and-error process. Here, we report a synergistic machine learning (ML) and density functional theory (DFT) approach that enables predictive and rapid identification of effective passivation materials. By training an XGBoost model (91.3% accuracy) with DFT-derived molecular descriptors and activity calculations, we identify 2-(4-aminophenyl)-3H-benzimidazol-5-amine (APBIA) as a promising passivator. Experimental validation demonstrates that APBIA effectively removes surface impurities and passivates defects within perovskite films, leading to a significant increase in power conversion efficiency (PCE) from 22.48% to 25.55% (certified as 25.02%). This ML-DFT framework provides a generalizable pathway for accelerating the development of advanced functional materials for photovoltaic applications.
Keywords: Perovskite solar cells; Machine learning (ML); Density functional theory (DFT); Passivators; Organic molecule
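A hedged sketch of the ML half of the ML-DFT screening loop: an XGBoost classifier is trained on placeholder DFT-derived molecular descriptors labelled as effective or ineffective passivators, then used to rank unscreened candidates for follow-up DFT calculations or experiments. The descriptor names, the toy labelling rule, and the hyperparameters are assumptions; the `xgboost` package is assumed installed.

```python
import numpy as np
from xgboost import XGBClassifier            # assumed installed
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(8)
# Placeholder DFT-derived descriptors (e.g., frontier orbital levels, dipole moment, adsorption energy).
descriptors = rng.normal(size=(400, 6))
effective = (descriptors[:, 0] + 0.5 * descriptors[:, 3] > 0).astype(int)  # toy labelling rule

X_tr, X_te, y_tr, y_te = train_test_split(descriptors, effective, random_state=8)
clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05, random_state=8)
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {accuracy_score(y_te, clf.predict(X_te)):.1%}")

# Score new, unscreened candidate molecules and pass the top-ranked ones on for verification.
candidates = rng.normal(size=(20, 6))
scores = clf.predict_proba(candidates)[:, 1]
print("top candidate indices:", np.argsort(scores)[::-1][:5])
```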
A dual attention-based deep learning model for lithology identification while drilling
14
Authors: Jie Chen Zhen Gui Yichao Rui Xusheng Zhao Xiaokang Pan Qingfeng Wang Yuanyuan Pu Zheng Li Maoyi Liu. 《Journal of Rock Mechanics and Geotechnical Engineering》, 2026, Issue 2, pp. 1177-1192 (16 pages)
Lithology identification while drilling technology can obtain rock information in real time. However, traditional lithology identification models often face limitations in feature extraction and adaptability to complex geological conditions, limiting their accuracy in challenging environments. To address these challenges, a deep learning model for lithology identification while drilling is proposed. The proposed model introduces a dual attention mechanism into the long short-term memory (LSTM) network, effectively enhancing the ability to capture spatial and channel dimension information. Subsequently, the crayfish optimization algorithm (COA) is applied to optimize the model network structure, thereby enhancing its lithology identification capability. Laboratory test results demonstrate that the proposed model achieves 97.15% accuracy on the testing set, significantly outperforming the traditional support vector machine (SVM) method (81.77%). Field tests under actual drilling conditions demonstrate an average accuracy of 91.96% for the proposed model, representing a 14.31% improvement over the LSTM model alone. The proposed model demonstrates robust adaptability and generalization ability across diverse operational scenarios. This research offers reliable technical support for lithology identification while drilling.
Keywords: Lithology identification while drilling; Deep learning; Dual attention mechanism; Metaheuristic algorithm; Field applications
Robust and Efficient Federated Learning for Machinery Fault Diagnosis in Internet of Things
15
Authors: Zhen Wu Hao Liu Linlin Zhang Zehui Zhang Jie Wu Haibin He Bin Zhou. 《Computers, Materials & Continua》, 2026, Issue 4, pp. 1051-1069 (19 pages)
Recently, the Internet of Things (IoT) has been increasingly integrated into the automotive sector, enabling the development of diverse applications such as the Internet of Vehicles (IoV) and intelligent connected vehicles. Leveraging IoV technologies, operational data from core vehicle components can be collected and analyzed to construct fault diagnosis models, thereby enhancing vehicle safety. However, automakers often struggle to acquire sufficient fault data to support effective model training. To address this challenge, a robust and efficient federated learning method (REFL) is constructed for machinery fault diagnosis in collaborative IoV, which can organize multiple companies to collaboratively develop a comprehensive fault diagnosis model while keeping their data local. In REFL, a gradient-based adversary algorithm is introduced to the fault diagnosis field for the first time to enhance the robustness of the deep learning model. Moreover, an adaptive gradient processing procedure is designed to improve model training speed and ensure model accuracy under imbalanced data scenarios. The proposed REFL is evaluated on a non-independent and identically distributed (non-IID) real-world machinery fault dataset. Experimental results demonstrate that REFL achieves better performance than traditional learning methods and is promising for real industrial fault diagnosis.
Keywords: Federated learning; adversary algorithm; Internet of Vehicles (IoV); fault diagnosis
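The collaborative training idea can be illustrated with a plain federated-averaging sketch: each "company" updates a local model on data that never leaves the client, and only the model weights are aggregated. The logistic-regression clients and non-IID toy data are stand-ins; the paper's gradient-based adversary algorithm and adaptive gradient processing are not reproduced here.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local logistic-regression training; raw data never leaves the client."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

rng = np.random.default_rng(9)
n_features, n_clients = 10, 4
true_w = rng.normal(size=n_features)
# Placeholder non-IID client datasets standing in for each company's machinery features.
clients = []
for k in range(n_clients):
    Xk = rng.normal(loc=0.3 * k, size=(200, n_features))
    yk = (Xk @ true_w + rng.normal(0, 0.5, 200) > 0).astype(float)
    clients.append((Xk, yk))

global_w = np.zeros(n_features)
for rnd in range(20):                                        # communication rounds
    local_ws = [local_update(global_w, Xk, yk) for Xk, yk in clients]
    sizes = np.array([len(yk) for _, yk in clients], dtype=float)
    global_w = np.average(local_ws, axis=0, weights=sizes)   # FedAvg-style aggregation

acc = np.mean([((Xk @ global_w > 0) == yk.astype(bool)).mean() for Xk, yk in clients])
print(f"average client accuracy of the federated model: {acc:.2f}")
```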
A novel approach to identify the spatial characteristics of ozone-precursor sensitivity based on interpretable machine learning
16
Authors: Huiling He Kaihui Zhao Zibing Yuan Jin Shen Yujun Lin Shu Zhang Menglei Wang Anqi Wang Puyu Lian. 《Journal of Environmental Sciences》, 2026, Issue 1, pp. 54-63 (10 pages)
To curb the worsening tropospheric ozone (O3) pollution problem in China, rapid and accurate identification of O3-precursor sensitivity (OPS) is a crucial prerequisite for formulating effective contingency O3 pollution control strategies. However, currently widely used methods, such as statistical models and numerical models, exhibit inherent limitations in identifying OPS in a timely and accurate manner. In this study, we developed a novel approach to identify OPS based on an eXtreme Gradient Boosting model, the Shapley additive explanation (SHAP) algorithm, and volatile organic compound (VOC) photochemical decay adjustment, using meteorology and speciated pollutant monitoring data as the input. By comparing the difference in SHAP values between the base scenario and a precursor reduction scenario for nitrogen oxides (NOx) and VOCs, OPS was divided into NOx-limited, VOCs-limited, and transition regimes. Using the long-lasting O3 pollution episode in the autumn of 2022 in the Guangdong-Hong Kong-Macao Greater Bay Area (GBA) as an example, we demonstrated large spatiotemporal heterogeneity of OPS over the GBA, which generally shifted from NOx-limited to VOCs-limited from September to October and was more inclined to be VOCs-limited in the central areas and NOx-limited in the peripheral areas. This study developed an innovative OPS identification method by comparing the difference in SHAP values before and after precursor emission reduction. Our method enables accurate identification of OPS on a time scale of seconds, thereby providing a state-of-the-art tool for the rapid guidance of spatially specific O3 control strategies.
Keywords: O3-precursor sensitivity; Machine learning; Extreme gradient boosting model; Shapley algorithm; Greater Bay Area
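A minimal sketch of the core OPS idea: fit a gradient-boosting O3 model, compute SHAP values under a base scenario and a precursor-reduction scenario, and compare the shift. The placeholder data, the 20% NOx cut, and the simplified interpretation rule are assumptions; the `shap` and `xgboost` packages are assumed installed.

```python
import numpy as np
import shap                                   # assumed installed
from xgboost import XGBRegressor

rng = np.random.default_rng(10)
# Placeholder hourly records: [NOx, VOCs, temperature, relative humidity] -> O3.
X = rng.uniform(0, 1, size=(2000, 4))
o3 = 40 * X[:, 1] - 25 * X[:, 0] * X[:, 1] + 15 * X[:, 2] + rng.normal(0, 1, 2000)

model = XGBRegressor(n_estimators=300, max_depth=4, random_state=10).fit(X, o3)
explainer = shap.TreeExplainer(model)

base = X.copy()
reduced = X.copy()
reduced[:, 0] *= 0.8                          # 20 % NOx emission-reduction scenario

shap_base = explainer.shap_values(base)
shap_reduced = explainer.shap_values(reduced)
delta_nox = (shap_reduced[:, 0] - shap_base[:, 0]).mean()
print(f"mean change in the NOx SHAP contribution after reduction: {delta_nox:+.2f}")
print("negative change suggests O3 falls with NOx cuts (NOx-limited); "
      "positive suggests VOC-limited behaviour is more likely")
```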
Research on a smart grid management and control model based on the deep Q-learning algorithm
17
Authors: 王筠 李志鹏 项旭 张军堂 石雷波. 《自动化技术与应用》, 2026, Issue 2, pp. 54-57, 142 (5 pages)
A smart grid management and control model based on the deep Q-learning algorithm is designed. Verifiable credentials (VC) and decentralized identities (DID) serve as application identity credentials for the software-defined networking (SDN) controller and, combined with a dynamic trust evaluation algorithm and attribute-based access control policies, a blockchain-based distributed SDN management and control model for the smart grid is built. The blockchain-based distributed SDN network is then optimized under changing resource allocation, dynamically varying network topology, and continuously evolving security threats. Experimental results show that, after the model is optimized by deep Q-learning, the cumulative reward increases substantially; the method performs well on multiple security measures and can remove malicious domains, keeping the network environment secure.
Keywords: SDN controller; distributed SDN network; deep Q-learning algorithm; blockchain; smart grid management and control model
A Boltzmann-optimized Q-learning handover control algorithm for high-speed railways (Cited: 4)
18
Authors: 陈永 康婕. 《控制理论与应用》 (PKU Core), 2025, Issue 4, pp. 688-694 (7 pages)
Handover in 5G-R high-speed railways uses a fixed handover threshold and ignores effects such as co-channel interference and ping-pong handover, which leads to a low handover success rate. To address this problem, a Boltzmann-optimized Q-learning handover control algorithm is proposed. First, a Q-table indexed by train position and action is designed, and the Q-learning reward function is constructed with ping-pong handover, bit error rate, and related factors taken into account. Then, a Boltzmann search strategy is proposed to optimize action selection and improve the convergence of the handover algorithm. Finally, the Q-table is updated with base-station co-channel interference taken into account to obtain the handover decision parameter, which controls handover execution. Simulation results show that, at different running speeds and in different operating scenarios, the improved algorithm effectively raises the handover success rate over traditional algorithms while meeting the quality-of-service (QoS) requirements of wireless communication.
Keywords: handover; 5G-R; Q-learning algorithm; Boltzmann optimization strategy
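The Boltzmann (softmax) exploration that distinguishes this algorithm from epsilon-greedy Q-learning is easy to show in isolation. The sketch below applies it to a toy position-indexed handover decision; the stub reward only hints at the paper's design, which also accounts for ping-pong handover, bit error rate, and co-channel interference.

```python
import numpy as np

rng = np.random.default_rng(11)
n_positions, n_actions = 50, 2           # actions: 0 = stay on serving cell, 1 = hand over
Q = np.zeros((n_positions, n_actions))
alpha, gamma, tau = 0.1, 0.9, 0.5        # tau: Boltzmann temperature

def boltzmann_action(q_row, tau):
    """Softmax (Boltzmann) exploration instead of epsilon-greedy selection."""
    prefs = np.exp((q_row - q_row.max()) / tau)
    return rng.choice(len(q_row), p=prefs / prefs.sum())

def reward(pos, action):
    """Stub reward: favour handing over near the cell edge, penalise premature handovers."""
    near_edge = pos / (n_positions - 1)
    return near_edge if action == 1 else 1.0 - near_edge

for episode in range(2000):
    for pos in range(n_positions - 1):
        a = boltzmann_action(Q[pos], tau)
        r = reward(pos, a)
        Q[pos, a] += alpha * (r + gamma * Q[pos + 1].max() - Q[pos, a])

handover_point = int(np.argmax(Q[:, 1] > Q[:, 0]))
print(f"learned handover trigger position: {handover_point}/{n_positions - 1}")
```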
Landslide susceptibility mapping using machine learning algorithms and comparison of their performance at Abha Basin, Asir Region, Saudi Arabia (Cited: 20)
19
Authors: Ahmed Mohamed Youssef Hamid Reza Pourghasemi. 《Geoscience Frontiers》 (SCIE, CAS, CSCD), 2021, Issue 2, pp. 639-655 (17 pages)
The current study aimed at evaluating the capabilities of seven advanced machine learning techniques (MLTs), including Support Vector Machine (SVM), Random Forest (RF), Multivariate Adaptive Regression Spline (MARS), Artificial Neural Network (ANN), Quadratic Discriminant Analysis (QDA), Linear Discriminant Analysis (LDA), and Naive Bayes (NB), for landslide susceptibility modeling and comparison of their performances. Coupling machine learning algorithms with spatial data types for landslide susceptibility mapping is a vitally important issue. This study was carried out using GIS and R open-source software at Abha Basin, Asir Region, Saudi Arabia. First, a total of 243 landslide locations were identified at Abha Basin to prepare the landslide inventory map using different data sources. All the landslide areas were randomly separated into two groups with a ratio of 70% for training and 30% for validation purposes. Twelve landslide variables were generated for landslide susceptibility modeling, which include altitude, lithology, distance to faults, normalized difference vegetation index (NDVI), landuse/landcover (LULC), distance to roads, slope angle, distance to streams, profile curvature, plan curvature, slope length (LS), and slope aspect. The area under curve (AUC-ROC) approach was applied to evaluate, validate, and compare the MLTs' performance. The results indicated that AUC values for the seven MLTs range from 89.0% for QDA to 95.1% for RF. Our findings showed that RF (AUC = 95.1%) and LDA (AUC = 94.17%) produced the best performances in comparison to the other MLTs. The outcome of this study and the landslide susceptibility maps would be useful for environmental protection.
Keywords: Landslide susceptibility; Machine learning algorithms; Variables importance; Saudi Arabia
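A hedged sketch of the comparison protocol: six of the seven MLTs (MARS is omitted because it has no standard scikit-learn implementation) are trained on a 70/30 split of placeholder conditioning factors and ranked by ROC-AUC, mirroring the evaluation described above.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB

# Placeholder: 12 landslide-conditioning factors, label 1 = landslide cell.
X, y = make_classification(n_samples=1000, n_features=12, n_informative=8, random_state=12)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7, stratify=y, random_state=12)

models = {
    "SVM": SVC(probability=True, random_state=12),
    "RF": RandomForestClassifier(random_state=12),
    "ANN": MLPClassifier(max_iter=1000, random_state=12),
    "QDA": QuadraticDiscriminantAnalysis(),
    "LDA": LinearDiscriminantAnalysis(),
    "NB": GaussianNB(),
}
for name, model in models.items():
    proba = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    print(f"{name:4s} AUC = {roc_auc_score(y_te, proba):.3f}")
```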
Intelligent modelling of clay compressibility using hybrid meta-heuristic and machine learning algorithms (Cited: 8)
20
Authors: Pin Zhang Zhen-Yu Yin Yin-Fu Jin Tommy H.T. Chan Fu-Ping Gao. 《Geoscience Frontiers》 (SCIE, CAS, CSCD), 2021, Issue 1, pp. 441-452 (12 pages)
The compression index Cc is an essential parameter in geotechnical design for which the effectiveness of correlation is still a challenge. This paper suggests a novel modelling approach using machine learning (ML) techniques. The performance of five commonly used machine learning algorithms, i.e., back-propagation neural network (BPNN), extreme learning machine (ELM), support vector machine (SVM), random forest (RF), and evolutionary polynomial regression (EPR), in predicting Cc is comprehensively investigated. A database with a total of 311 datasets including three input variables, i.e., initial void ratio e0, liquid limit water content wL, and plasticity index Ip, and one output variable Cc is first established. A genetic algorithm (GA) is used to optimize the hyper-parameters in the five ML algorithms, and the average prediction error for the 10-fold cross-validation (CV) sets is set as the fitness function in the GA to enhance the robustness of the ML models. The results indicate that the ML models outperform empirical prediction formulations with lower prediction error. RF yields the lowest error, followed by BPNN, ELM, EPR, and SVM. If the ranges of input variables in the database are large enough, the BPNN and RF models are recommended to predict Cc. Furthermore, if the distribution of input variables is continuous, the RF model is the best one. Otherwise, the EPR model is recommended if the ranges of input variables are small. The predicted correlations between input and output variables using the five ML models show great agreement with the physical explanation.
Keywords: Compressibility; Clays; Machine learning; Optimization; Random forest; Genetic algorithm
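The GA-tuned ML idea can be sketched with a tiny genetic algorithm that searches random-forest hyperparameters and uses the average 10-fold cross-validation error as the fitness, as in the setup described above; the placeholder (e0, wL, Ip) to Cc data, the RF-only search space, and the GA operators are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(13)
# Placeholder database: initial void ratio e0, liquid limit wL, plasticity index Ip -> Cc.
X = rng.uniform([0.5, 20.0, 5.0], [2.5, 80.0, 50.0], size=(311, 3))
Cc = 0.009 * (X[:, 1] - 10) + 0.2 * (X[:, 0] - 0.5) + rng.normal(0, 0.02, 311)

def fitness(genes):
    """Average 10-fold cross-validation MSE (lower is better) for one hyperparameter set."""
    model = RandomForestRegressor(n_estimators=int(genes[0]), max_depth=int(genes[1]), random_state=13)
    return -cross_val_score(model, X, Cc, cv=10, scoring="neg_mean_squared_error").mean()

def random_individual():
    return np.array([rng.integers(20, 201), rng.integers(2, 16)], dtype=float)

pop = np.array([random_individual() for _ in range(8)])
for generation in range(4):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)][:4]                 # truncation selection: keep the best half
    children = []
    for _ in range(4):
        a, b = parents[rng.integers(4)], parents[rng.integers(4)]
        child = np.where(rng.random(2) < 0.5, a, b)       # uniform crossover
        if rng.random() < 0.3:                            # random-reset mutation of one gene
            i = rng.integers(2)
            child[i] = rng.integers(20, 201) if i == 0 else rng.integers(2, 16)
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmin([fitness(g) for g in pop])]
print(f"GA-selected hyperparameters: n_estimators={int(best[0])}, max_depth={int(best[1])}")
```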