Journal articles
12,673 articles found
Research Review of Deep Learning Algorithms for Agricultural Disease Image Classification
1
Authors: Shengjiu JIANG, Qian WANG 《Plant Diseases and Pests》 2026, No. 1, pp. 30-34 (5 pages)
In the context of rural revitalization and the development of smart agriculture, image classification technology based on deep learning has emerged as a crucial tool for digital monitoring and intelligent prevention and control of agricultural diseases. This paper provides a systematic review of the evolutionary development of algorithms within this field. Addressing challenges such as domain drift and limited global awareness in classical convolutional neural networks (CNNs) applied to complex agricultural environments, the paper focuses on the latest advancements in vision transformers (ViT) and their hybrid architectures to enhance cross-domain robustness and fine-grained recognition capabilities. In response to the challenges posed by scarce long-tail data and limited edge computing power in real-world scenarios, the paper explores solutions related to few-shot learning and ultra-lightweight network deployment. Finally, a forward-looking analysis is presented on the application paradigms of multimodal feature fusion, vision-based large models, and explainable artificial intelligence (AI) within smart plant protection. This analysis aims to offer theoretical insights for the development of efficient and transparent intelligent diagnostic systems for agricultural diseases, thereby supporting the advancement of digital agriculture and the construction of a robust agricultural nation.
Keywords: Agricultural disease image; Classification algorithm; Deep learning; Research review
Flood predictions from metrics to classes by multiple machine learning algorithms coupling with clustering-deduced membership degree
2
Authors: ZHAI Xiaoyan, ZHANG Yongyong, XIA Jun, ZHANG Yongqiang, TANG Qiuhong, SHAO Quanxi, CHEN Junxu, ZHANG Fan 《Journal of Geographical Sciences》 2026, No. 1, pp. 149-176 (28 pages)
Accurate prediction of flood events is important for flood control and risk management. Machine learning techniques have contributed greatly to advances in flood prediction, and existing studies mainly focused on predicting flood resource variables using single or hybrid machine learning techniques. However, class-based flood predictions have rarely been investigated, although they can aid in quickly diagnosing comprehensive flood characteristics and proposing targeted management strategies. This study proposed a prediction approach for flood regime metrics and event classes that couples machine learning algorithms with clustering-deduced membership degrees. Five algorithms were adopted for this exploration. Results showed that the class membership degrees accurately determined event classes, with class hit rates of up to 100% against the four classes clustered from nine regime metrics. The nonlinear algorithms (Multiple Linear Regression, Random Forest, and least squares-Support Vector Machine) outperformed the linear techniques (Multiple Linear Regression and Stepwise Regression) in predicting flood regime metrics. The proposed approach predicted flood event classes well, with average class hit rates of 66.0%-85.4% and 47.2%-76.0% in the calibration and validation periods, respectively, particularly for the slow and late flood events. The predictive capability of the proposed prediction approach for flood regime metrics and classes was considerably stronger than that of the hydrological modeling approach.
Keywords: Flood regime metrics; Class prediction; Machine learning algorithms; Hydrological model
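A minimal sketch of the clustering-deduced membership idea described in this abstract, not the authors' code: events are clustered from regime metrics, cluster distances are turned into membership degrees, a regressor learns those degrees from hypothetical event drivers, and the predicted class is the one with the largest membership. All data and variable names are synthetic assumptions.

```python
# Hedged sketch (not the paper's code): flood event classes derived from
# clustering regime metrics, then predicted via membership degrees.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
metrics = rng.normal(size=(300, 9))      # hypothetical flood regime metrics
drivers = rng.normal(size=(300, 6))      # hypothetical rainfall/antecedent predictors

# 1. Cluster events into four flood classes from the regime metrics.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(metrics)

# 2. Convert cluster distances into membership degrees (closer => larger).
dist = km.transform(metrics)
membership = 1.0 / (dist + 1e-9)
membership /= membership.sum(axis=1, keepdims=True)

# 3. Learn a mapping from event drivers to membership degrees.
Xtr, Xte, ytr, yte = train_test_split(drivers, membership, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(Xtr, ytr)

# 4. The predicted class is the one with the largest membership degree.
pred_class = model.predict(Xte).argmax(axis=1)
true_class = yte.argmax(axis=1)
print("class hit rate:", (pred_class == true_class).mean())
```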
Prediction of seepage outlet temperature in hot dry rock fractures based on a GWO-LSTM-MLP combined neural network
3
Authors: 刘先珊, 于明智, 白冰, 潘玉华, 郑志伟, 孙梦, 杨文远, 刘洋 《应用基础与工程科学学报》 Peking University Core Journal, 2026, No. 1, pp. 223-235 (13 pages)
In the research, development, and utilization of hot dry rock, water-rock heat exchange in rock fractures is a core issue in geothermal engineering design, and accurate prediction of the seepage outlet water temperature can greatly reduce engineering costs and energy losses. Convective heat transfer experiments were conducted on Φ50 mm × 100 mm granite fracture specimens under different ambient temperatures and volumetric flow rates using a multi-field triaxial experimental system, and a seepage heat transfer experimental dataset was established. The Grey Wolf Optimization (GWO) algorithm was then used to select the parameters of an LSTM-MLP combined neural network. The Long Short-Term Memory (LSTM) network captures the temporal dependence of the seepage heat transfer process, while the Multi-Layer Perceptron (MLP) extracts nonlinear features; combined, the two complement each other in feature processing. With its strong global search capability, GWO effectively avoids local optima and ensures an optimal configuration of the model parameters. Four input parameters, namely ambient temperature, inlet temperature, volumetric flow rate, and fracture aperture, were considered to predict the seepage outlet water temperature; three common statistical indices were introduced to evaluate model performance, and the temporal correlation within the seepage heat transfer process was also predicted. The results show that, compared with machine learning models used for geothermal production prediction in the past five years, the GWO-LSTM-MLP model yields the most accurate predictions (R² = 0.989, RMSE = 1.238, MAE = 0.922), and GWO significantly improves the LSTM-MLP model: after GWO parameter selection, R² increases by 5.3%, RMSE decreases by 54.37%, and MAE decreases by 60.53%. The model accurately predicts the steady-state outlet temperature, with a maximum absolute error of 0.8912 °C and a percentage error of 1.338%.
Keywords: Enhanced geothermal system; Convective heat transfer experiment; Deep learning; Long short-term memory network; Grey wolf algorithm; Time series data
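A minimal PyTorch sketch of an LSTM-MLP hybrid of the kind this abstract describes, not the authors' implementation: the LSTM handles the temporal dependence and an MLP head produces the outlet-temperature estimate. The GWO step is not reproduced; it would search hyperparameters such as hidden sizes and learning rate. Input tensors are random stand-ins for the four measured quantities.

```python
# Hedged sketch of an LSTM-MLP hybrid (PyTorch); the GWO step would tune
# hyperparameters such as hidden sizes and learning rate.
import torch
import torch.nn as nn

class LSTMMLP(nn.Module):
    """LSTM captures temporal dependence; the MLP head extracts nonlinear features."""
    def __init__(self, n_features=4, hidden=32, mlp_hidden=16):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(hidden, mlp_hidden), nn.ReLU(),
            nn.Linear(mlp_hidden, 1),
        )

    def forward(self, x):                 # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.mlp(out[:, -1, :])    # predict outlet temperature at last step

# Hypothetical inputs: ambient T, inlet T, volumetric flow rate, fracture aperture.
x = torch.randn(64, 20, 4)
y = torch.randn(64, 1)
model = LSTMMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # lr is a GWO-tunable parameter
loss_fn = nn.MSELoss()
for _ in range(5):                        # a few illustrative training steps
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print("training MSE:", float(loss))
```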
Bearing capacity prediction of open caissons in two-layered clays using five tree-based machine learning algorithms (Cited by: 2)
4
Authors: Rungroad Suppakul, Kongtawan Sangjinda, Wittaya Jitchaijaroen, Natakorn Phuksuksakul, Suraparb Keawsawasvong, Peem Nuaklong 《Intelligent Geoengineering》 2025, No. 2, pp. 55-65 (11 pages)
Open caissons are widely used in foundation engineering because of their load-bearing efficiency and adaptability in diverse soil conditions. However, accurately predicting their undrained bearing capacity in layered soils remains a complex challenge. This study presents a novel application of five ensemble machine learning (ML) algorithms, namely random forest (RF), gradient boosting machine (GBM), extreme gradient boosting (XGBoost), adaptive boosting (AdaBoost), and categorical boosting (CatBoost), to predict the undrained bearing capacity factor (Nc) of circular open caissons embedded in two-layered clay on the basis of results from finite element limit analysis (FELA). The input dataset consists of 1188 numerical simulations using the Tresca failure criterion, varying in geometrical and soil parameters. The FELA was performed via OptumG2 software with adaptive meshing techniques and verified against existing benchmark studies. The ML models were trained on 70% of the dataset and tested on the remaining 30%. Their performance was evaluated using six statistical metrics: coefficient of determination (R²), mean absolute error (MAE), root mean squared error (RMSE), index of scatter (IOS), RMSE-to-standard deviation ratio (RSR), and variance explained factor (VAF). The results indicate that all the models achieved high accuracy, with R² values exceeding 97.6% and RMSE values below 0.02. Among them, AdaBoost and CatBoost consistently outperformed the other methods across both the training and testing datasets, demonstrating superior generalizability and robustness. The proposed ML framework offers an efficient, accurate, and data-driven alternative to traditional methods for estimating caisson capacity in stratified soils. This approach can aid in reducing computational costs while improving reliability in the early stages of foundation design.
Keywords: Two-layered clay; Open caisson; Tree-based algorithms; FELA; Machine learning
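A minimal sketch of the tree-ensemble comparison workflow on synthetic data, not the study's dataset or code: a 70/30 split, several tree-based regressors, and the R²/MAE/RMSE metrics named in the abstract. Scikit-learn's RF, GBM, and AdaBoost are used here; XGBoost and CatBoost would be added analogously from their own packages.

```python
# Hedged sketch of the tree-ensemble comparison workflow (synthetic data);
# XGBoost and CatBoost would be added analogously from their own packages.
import numpy as np
from sklearn.ensemble import (RandomForestRegressor, GradientBoostingRegressor,
                              AdaBoostRegressor)
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.uniform(size=(1188, 5))          # hypothetical geometry/soil-strength ratios
y = 2 + 3 * X[:, 0] - X[:, 1] * X[:, 2] + 0.05 * rng.normal(size=1188)  # stand-in for Nc

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
models = {
    "RF": RandomForestRegressor(n_estimators=300, random_state=1),
    "GBM": GradientBoostingRegressor(random_state=1),
    "AdaBoost": AdaBoostRegressor(n_estimators=300, random_state=1),
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    p = m.predict(X_te)
    rmse = mean_squared_error(y_te, p) ** 0.5
    print(f"{name}: R2={r2_score(y_te, p):.3f} MAE={mean_absolute_error(y_te, p):.3f} RMSE={rmse:.3f}")
```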
A Q-Learning-based SSD-SMR write cache policy for long-tail latency optimization
5
Authors: 刘健, 章步镐, 方匡弛, 刘宣锋, 孙国道, 梁荣华, 梁浩然 《计算机工程》 Peking University Core Journal, 2026, No. 3, pp. 287-298 (12 pages)
As global data volumes keep growing, improving data access performance at low cost is a major challenge for storage systems. Building cache systems from low-latency, high-bandwidth solid-state drives (SSDs) and low-cost, high-density shingled magnetic recording (SMR) disks has become an effective solution. However, the inherent mechanical motion and multi-track overlapping of SMR lead to poor write performance, and the large number of read-modify-write (RMW) operations caused by frequently writing dirty data from the SSD back to the SMR disk can cause severe long-tail latency. To address this, a cache replacement optimization strategy combining the reinforcement learning Q-Learning algorithm is proposed on top of an SSD-SMR hybrid storage architecture. Writes to the SMR disk are controlled by learning empirical knowledge about the relationship between the I/O load of the SMR device and latency; when the SMR load is high, the eviction of dirty data from the cache is throttled to reduce the RMW operations triggered by write-back, thereby optimizing the tail latency of the system under different workloads. The Q-Learning algorithm is combined with the popularity-based cache algorithm LRU and the SMR-aware cache algorithm SAC, and evaluated with real enterprise traces and simulated traces generated by YCSB. Experimental results show that the proposed method effectively improves the performance of existing cache algorithms, reducing average latency by 57.06% and tail latency by 87.49%.
Keywords: Q-learning algorithm; I/O load; Long-tail latency; Cache replacement algorithm; Hybrid storage
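A toy sketch of the load-aware control idea only, not the paper's policy: a tabular Q-learning agent decides whether to evict dirty data to the SMR tier, with a reward that penalizes eviction more heavily as the SMR load rises. States, rewards, and the load trace are simplified assumptions.

```python
# Hedged toy of the load-aware idea: a tabular Q-learning agent decides whether
# to evict dirty cache entries to the SMR disk, penalising evictions under high load.
import random

N_LOAD_LEVELS = 3          # SMR I/O load buckets: low / medium / high
ACTIONS = (0, 1)           # 0 = hold dirty data in SSD cache, 1 = evict (write back)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
Q = {(s, a): 0.0 for s in range(N_LOAD_LEVELS) for a in ACTIONS}

def reward(load, action):
    """Simplified latency-based reward: writing back under high load triggers
    costly read-modify-write work; holding too long has a smaller, fixed cost."""
    if action == 1:
        return -1.0 * load          # eviction hurts more as SMR load rises
    return -0.3                     # deferred write-back still costs a little

random.seed(0)
state = 0
for step in range(5000):
    action = random.choice(ACTIONS) if random.random() < EPS \
             else max(ACTIONS, key=lambda a: Q[(state, a)])
    r = reward(state, action)
    next_state = random.randrange(N_LOAD_LEVELS)      # stand-in for an observed load trace
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])
    state = next_state

for s in range(N_LOAD_LEVELS):
    print(f"load={s}: prefer", "evict" if Q[(s, 1)] > Q[(s, 0)] else "hold")
```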
A Literature Review on Model Conversion, Inference, and Learning Strategies in EdgeML with TinyML Deployment
6
Authors: Muhammad Arif, Muhammad Rashid 《Computers, Materials & Continua》 2025, No. 4, pp. 13-64 (52 pages)
Edge Machine Learning (EdgeML) and Tiny Machine Learning (TinyML) are fast-growing fields that bring machine learning to resource-constrained devices, allowing real-time data processing and decision-making at the network's edge. However, the complexity of model conversion techniques, diverse inference mechanisms, and varied learning strategies make designing and deploying these models challenging. Additionally, deploying TinyML models on resource-constrained hardware with specific software frameworks has broadened EdgeML's applications across various sectors. These factors underscore the necessity for a comprehensive literature review, as current reviews do not systematically encompass the most recent findings on these topics. Consequently, this review provides a comprehensive overview of state-of-the-art techniques in model conversion, inference mechanisms, and learning strategies within EdgeML, and in deploying these models on resource-constrained edge devices using TinyML. It identifies 90 research articles published between 2018 and 2025, categorizing them into two main areas: (1) model conversion, inference, and learning strategies in EdgeML and (2) deploying TinyML models on resource-constrained hardware using specific software frameworks. In the first category, the synthesis of selected research articles compares and critically reviews various model conversion techniques, inference mechanisms, and learning strategies. In the second category, the synthesis identifies and elaborates on the major development boards, software frameworks, sensors, and algorithms used in applications across six major sectors. As a result, this article provides valuable insights for researchers, practitioners, and developers, assisting them in choosing suitable model conversion techniques, inference mechanisms, learning strategies, hardware development boards, software frameworks, sensors, and algorithms tailored to their specific needs and applications across various sectors.
Keywords: Edge machine learning; Tiny machine learning; Model compression; Inference; Learning algorithms
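One concrete illustration of the "model conversion" category this review surveys, assuming a TensorFlow workflow: a small Keras model is converted to a TensorFlow Lite flatbuffer with default post-training optimizations, a common path for MCU-class deployment. The model itself is a throwaway example.

```python
# Hedged example of one common EdgeML conversion path: a small Keras model
# converted to TensorFlow Lite with default optimizations (requires TensorFlow).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(16,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # enables post-training quantization
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:                  # flatbuffer deployable on edge devices
    f.write(tflite_model)
print("TFLite model size (bytes):", len(tflite_model))
```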
Exploring the Effectiveness of Machine Learning and Deep Learning Algorithms for Sentiment Analysis: A Systematic Literature Review
7
Authors: Jungpil Shin, Wahidur Rahman, Tanvir Ahmed Bakhtiar Mazrur, Md. Mohsin Mia, Romana Idress Ekfa, Md. Sajib Rana, Pankoo Kim 《Computers, Materials & Continua》 2025, No. 9, pp. 4105-4153 (49 pages)
Sentiment Analysis, a significant domain within Natural Language Processing (NLP), focuses on extracting and interpreting subjective information, such as emotions, opinions, and attitudes, from textual data. With the increasing volume of user-generated content on social media and digital platforms, sentiment analysis has become essential for deriving actionable insights across various sectors. This study presents a systematic literature review of sentiment analysis methodologies, encompassing traditional machine learning algorithms, lexicon-based approaches, and recent advancements in deep learning techniques. The review follows a structured protocol comprising three phases: planning, execution, and analysis/reporting. During the execution phase, 67 peer-reviewed articles were initially retrieved, with 25 meeting the predefined inclusion and exclusion criteria. The analysis phase involved a detailed examination of each study's methodology, experimental setup, and key contributions. Among the deep learning models evaluated, Long Short-Term Memory (LSTM) networks were identified as the most frequently adopted architecture for sentiment classification tasks. This review highlights current trends, technical challenges, and emerging opportunities in the field, providing valuable guidance for future research and development in applications such as market analysis, public health monitoring, financial forecasting, and crisis management.
Keywords: Natural Language Processing (NLP); Machine learning (ML); Sentiment analysis; Deep learning; Textual data
Methodology for Detecting Non-Technical Energy Losses Using an Ensemble of Machine Learning Algorithms
8
Authors: Irbek Morgoev, Roman Klyuev, Angelika Morgoeva 《Computer Modeling in Engineering & Sciences》 2025, No. 5, pp. 1381-1399 (19 pages)
Non-technical losses (NTL) of electric power are a serious problem for electric distribution companies; addressing them determines the cost, stability, reliability, and quality of the supplied electricity. The widespread use of advanced metering infrastructure (AMI) and Smart Grid allows all participants in the distribution grid to store and track electricity consumption. In this research, a machine learning model is developed that analyzes and predicts the probability of NTL for each consumer of the distribution grid based on daily electricity consumption readings. This model is an ensemble meta-algorithm (stacking) that generalizes random forest, LightGBM, and a homogeneous ensemble of artificial neural networks. The superior accuracy of the proposed meta-algorithm compared with the base classifiers is experimentally confirmed on the test sample. Owing to its good accuracy (ROC-AUC = 0.88), the model can serve as the methodological basis for a decision support system whose purpose is to form a sample of suspected NTL sources. Using such a sample will allow the top management of electric distribution companies to increase the efficiency of field inspections, making them targeted and accurate, which should contribute to the fight against NTL and the sustainable development of the electric power industry.
Keywords: Non-technical losses; Smart grid; Machine learning; Electricity theft; Fraud; Ensemble algorithm; Hybrid method; Forecasting; Classification; Supervised learning
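A minimal stacking sketch on synthetic, imbalanced data, not the paper's pipeline: a random forest, a gradient-boosting stand-in for LightGBM, and a small neural network blended by a logistic-regression meta-learner, scored by ROC-AUC. The lightgbm package's LGBMClassifier would slot into the same estimator list.

```python
# Hedged sketch of the stacking idea on synthetic data: random forest, a gradient
# boosting stand-in for LightGBM, and a small neural network, blended by a meta-learner.
from sklearn.datasets import make_classification
from sklearn.ensemble import (RandomForestClassifier, GradientBoostingClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for daily consumption features labelled normal / suspected NTL.
X, y = make_classification(n_samples=2000, n_features=30, weights=[0.9, 0.1],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("gbm", GradientBoostingClassifier(random_state=0)),   # LightGBM stand-in
        ("ann", MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                              random_state=0)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
)
stack.fit(X_tr, y_tr)
proba = stack.predict_proba(X_te)[:, 1]
print("ROC-AUC:", round(roc_auc_score(y_te, proba), 3))
```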
A storage optimization decision method for multivariate power data based on fused random forest and Q-learning
9
Authors: 叶学顺, 贾东梨, 周俊, 唐英, 贾梓豪 《科学技术与工程》 Peking University Core Journal, 2026, No. 3, pp. 1065-1074 (10 pages)
The storage of large-scale and diverse power data faces bottlenecks of low efficiency and insufficient memory capacity. Traditional storage optimization methods such as data indexing and data compression each have advantages and disadvantages, and how to apply them effectively to power data storage remains a research difficulty. To address this, a storage optimization decision method for multivariate power data that fuses random forest and Q-learning is proposed. The key techniques are as follows. First, a storage optimization strategy decision model based on an improved random forest algorithm is proposed; an information gain method is introduced to comprehensively evaluate the influence of factors such as data access frequency, query time, storage speed, and data redundancy rate, and to decide among direct storage, indexed storage, and compressed storage. Second, a data storage algorithm decision model based on an improved Q-learning algorithm is proposed; multi-scale learning, prioritized experience replay, and positive/negative reward mechanisms are introduced to decide which index algorithm to use for indexed storage and which compression algorithm to use for compressed storage. The method effectively combines the technical advantages of data indexing and data compression, substantially improves data storage efficiency, saves storage space, and provides a new solution for the management of large-scale multivariate power data.
Keywords: Random forest algorithm; Q-learning algorithm; Data storage optimization method; Data indexing algorithm; Data compression algorithm
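A minimal sketch of the first decision stage only, not the authors' model: a random forest with the entropy criterion (mirroring the information-gain idea) chooses among direct, indexed, and compressed storage from per-block access statistics. The feature names, labels, and label rule are hypothetical; the second, Q-learning stage is not reproduced here.

```python
# Hedged sketch of the first decision stage: a random forest (entropy criterion,
# mirroring the information-gain idea) chooses a storage strategy per data block.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n = 1500
# Hypothetical per-block statistics: access frequency, query time, write speed, redundancy.
X = np.column_stack([rng.exponential(1.0, n), rng.uniform(0, 1, n),
                     rng.uniform(0, 1, n), rng.uniform(0, 1, n)])
# Stand-in labels: 0 = store directly, 1 = index, 2 = compress.
y = np.where(X[:, 0] > 1.2, 1, np.where(X[:, 3] > 0.6, 2, 0))

clf = RandomForestClassifier(n_estimators=200, criterion="entropy", random_state=2)
clf.fit(X, y)
for name, imp in zip(["access_freq", "query_time", "write_speed", "redundancy"],
                     clf.feature_importances_):
    print(f"{name}: {imp:.2f}")
print("decision for a hot, low-redundancy block:", clf.predict([[3.0, 0.2, 0.8, 0.1]])[0])
```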
Neuromorphic devices assisted by machine learning algorithms
10
Authors: Ziwei Huo, Qijun Sun, Jinran Yu, Yichen Wei, Yifei Wang, Jeong Ho Cho, Zhong Lin Wang 《International Journal of Extreme Manufacturing》 2025, No. 4, pp. 178-215 (38 pages)
Neuromorphic computing extends beyond sequential processing modalities and outperforms traditional von Neumann architectures in implementing more complicated tasks, e.g., pattern processing, image recognition, and decision making. It features parallel interconnected neural networks, high fault tolerance, robustness, autonomous learning capability, and ultralow energy dissipation. Artificial neural network (ANN) algorithms have also been widely used because of their facile self-organization and self-learning capabilities, which mimic those of the human brain. To some extent, ANNs reflect several basic functions of the human brain and can be efficiently integrated into neuromorphic devices to perform neuromorphic computations. This review highlights recent advances in neuromorphic devices assisted by machine learning algorithms. First, the basic structure of simple neuron models inspired by biological neurons and the information processing in simple neural networks are discussed in particular. Second, the fabrication of and research progress on neuromorphic devices are presented with regard to materials and structures. Furthermore, the fabrication of neuromorphic devices, including stand-alone neuromorphic devices, neuromorphic device arrays, and integrated neuromorphic systems, is discussed and demonstrated with reference to respective studies. The applications of neuromorphic devices assisted by machine learning algorithms in different fields are categorized and investigated. Finally, perspectives, suggestions, and potential solutions to the current challenges of neuromorphic devices are provided.
Keywords: Neuromorphic devices; Machine learning algorithms; Artificial synapses; Memristors; Field-effect transistors
A Comparison among Different Machine Learning Algorithms in Land Cover Classification Based on the Google Earth Engine Platform: The Case Study of Hung Yen Province, Vietnam
11
Authors: Le Thi Lan, Tran Quoc Vinh, Phạm Quy Giang 《Journal of Environmental & Earth Sciences》 2025, No. 1, pp. 132-139 (8 pages)
Based on the Google Earth Engine cloud computing data platform, this study employed three algorithms, namely Support Vector Machine, Random Forest, and Classification and Regression Tree, to classify the current status of land covers in Hung Yen province of Vietnam using Landsat 8 OLI satellite images, a free data source with reasonable spatial and temporal resolution. The results of the study show that all three algorithms produced good classifications for five basic types of land cover, namely Rice land, Water bodies, Perennial vegetation, Annual vegetation, and Built-up areas, as their overall accuracy and Kappa coefficient were greater than 80% and 0.8, respectively. Among the three algorithms, SVM achieved the highest accuracy, with an overall accuracy of 86% and a Kappa coefficient of 0.88. Land cover classification based on the SVM algorithm shows that Built-up areas cover the largest area with nearly 31,495 ha, accounting for more than 33.8% of the total natural area, followed by Rice land and Perennial vegetation, which cover over 30,767 ha (33%) and 15,637 ha (16.8%), respectively. Water bodies and Annual vegetation cover the smallest areas with 8,820 ha (9.5%) and 6,302 ha (6.8%), respectively. The results of this study can be used for land use management and planning as well as other natural resource and environmental management purposes in the province.
Keywords: Google Earth Engine; Land cover; Landsat; Machine learning algorithm
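The study itself runs the classifiers on Google Earth Engine; below is a hedged offline sketch of the same comparison logic with scikit-learn, where a decision tree stands in for CART and synthetic features stand in for per-pixel Landsat 8 band values. It reports the two scores used in the abstract, overall accuracy and Cohen's kappa.

```python
# Hedged offline sketch of the three-way comparison (the study itself runs on
# Google Earth Engine); a decision tree stands in for CART, data are synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for per-pixel Landsat 8 band values and 5 land cover classes.
X, y = make_classification(n_samples=3000, n_features=7, n_informative=5,
                           n_classes=5, random_state=3)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=3)

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("RF", RandomForestClassifier(n_estimators=200, random_state=3)),
                  ("CART", DecisionTreeClassifier(random_state=3))]:
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    print(f"{name}: overall accuracy={accuracy_score(y_te, pred):.3f} "
          f"kappa={cohen_kappa_score(y_te, pred):.3f}")
```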
Optimization Algorithms Based on Double-Integral Coevolutionary Neurodynamics in Deep Learning
12
Authors: Dan Su, Jie Han, Chunhua Yang, Weihua Gui 《IEEE/CAA Journal of Automatica Sinica》 2025, No. 6, pp. 1236-1245 (10 pages)
Deep neural networks are increasingly exposed to attack threats, and at the same time, the need for privacy protection is growing. As a result, the challenge of developing neural networks that are both robust and capable of strong generalization while maintaining privacy becomes pressing. Training neural networks under privacy constraints is one way to minimize privacy leakage, and one way to do this is to add noise to the data or model. However, noise may cause gradient directions to deviate from the optimal trajectory during training, leading to unstable parameter updates, slow convergence, and reduced model generalization capability. To overcome these challenges, we propose an optimization algorithm based on double-integral coevolutionary neurodynamics (DICND), designed to accelerate convergence and improve generalization in noisy conditions. Theoretical analysis proves the global convergence of the DICND algorithm and demonstrates its ability to converge to near-global minima efficiently under noisy conditions. Numerical simulations and image classification experiments further confirm the DICND algorithm's significant advantages in enhancing generalization performance.
Keywords: Coevolutionary neurodynamics (CND); Deep learning; Generalization; Noise resistance; Optimization algorithm
Reaction process optimization based on interpretable machine learning and metaheuristic optimization algorithms
13
Authors: Dian Zhang, Bo Ouyang, Zheng-Hong Luo 《Chinese Journal of Chemical Engineering》 2025, No. 8, pp. 77-85 (9 pages)
The optimization of reaction processes is crucial for the green, efficient, and sustainable development of the chemical industry. However, how to address the problems posed by multiple variables, nonlinearities, and uncertainties during optimization remains a formidable challenge. In this study, a strategy combining interpretable machine learning with metaheuristic optimization algorithms is employed to optimize the reaction process. First, experimental data from a biodiesel production process are collected to establish a database. These data are then used to construct a predictive model based on artificial neural network (ANN) models. Subsequently, interpretable machine learning techniques are applied for quantitative analysis and verification of the model. Finally, four metaheuristic optimization algorithms are coupled with the ANN model to achieve the desired optimization. The research results show that the methanol:palm fatty acid distillate (PFAD) molar ratio contributes the most to the reaction outcome, accounting for 41%. The ANN-simulated annealing (SA) hybrid method is more suitable for this optimization, and the optimal process parameters are a catalyst concentration of 3.00% (mass), a methanol:PFAD molar ratio of 8.67, and a reaction time of 30 min. This study provides deeper insights into reaction process optimization, which will facilitate future applications in various reaction optimization processes.
Keywords: Reaction process optimization; Interpretable machine learning; Metaheuristic optimization algorithm; Biodiesel
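A minimal sketch of the ANN-plus-simulated-annealing coupling on synthetic data, not the paper's model: an MLP surrogate maps (catalyst mass %, methanol:PFAD ratio, reaction time) to yield, and SciPy's dual annealing maximizes the surrogate within the experimental bounds. The yield function and bounds are invented stand-ins.

```python
# Hedged sketch of the ANN + simulated annealing coupling on synthetic data:
# an MLP surrogate of yield is maximized within assumed experimental bounds.
import numpy as np
from scipy.optimize import dual_annealing
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
# Hypothetical inputs: catalyst mass %, methanol:PFAD molar ratio, reaction time (min).
X = np.column_stack([rng.uniform(1, 4, 200), rng.uniform(4, 12, 200),
                     rng.uniform(10, 90, 200)])
# Synthetic stand-in for measured yield (peaks near ratio ~ 9, moderate time).
y = 90 - (X[:, 1] - 9) ** 2 - 0.002 * (X[:, 2] - 40) ** 2 + 2 * X[:, 0] \
    + rng.normal(0, 0.5, 200)

ann = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=3000, random_state=4)
ann.fit(X, y)

# Simulated annealing searches the surrogate for the best operating conditions.
bounds = [(1, 4), (4, 12), (10, 90)]
res = dual_annealing(lambda x: -ann.predict(x.reshape(1, -1))[0], bounds, maxiter=200)
cat, ratio, time_min = res.x
print(f"optimum: catalyst={cat:.2f}%  ratio={ratio:.2f}  time={time_min:.0f} min")
```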
A fully automated quantitative analysis method based on deep learning algorithms for immunohistochemical staining expression intensities
14
Authors: Yongjian Deng, Bojun Cai, Xiaomei Wang 《Intelligent Oncology》 2025, No. 3, pp. 256-264 (9 pages)
This paper focuses primarily on exploring the application of deep learning techniques and image processing algorithms in immunohistochemistry analysis, specifically targeting automated quantitative methods for nuclear, membrane, and cytoplasmic expressions of animal cells in whole-slide images. Cell nuclei, membranes, and cytoplasm were precisely identified and quantified by employing optical density separation techniques to differentiate between hematoxylin and 3,3'-diaminobenzidine staining components, in combination with the CellViT nuclear segmentation algorithm and the region growing algorithm. Experimental validation demonstrates that the proposed algorithm performs excellently in terms of accuracy and recall. Compared to traditional manual interpretation, the algorithm achieves greater accuracy in specific quantitative metrics.
Keywords: Deep learning; Immunohistochemistry analysis; Image processing algorithm; Optical density separation; Quantification of whole-slide images
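A minimal sketch of the optical-density separation step only, using scikit-image's bundled IHC sample and its rgb2hed color deconvolution to split hematoxylin from DAB. The paper's CellViT segmentation and region growing are not reproduced; a plain threshold stands in, and the threshold values are illustrative assumptions.

```python
# Hedged sketch of the optical-density (colour deconvolution) step only:
# scikit-image's rgb2hed separates hematoxylin (nuclei) from DAB staining.
from skimage import data
from skimage.color import rgb2hed

ihc_rgb = data.immunohistochemistry()      # bundled sample IHC image
ihc_hed = rgb2hed(ihc_rgb)                 # channels: hematoxylin, eosin, DAB

hema = ihc_hed[:, :, 0]
dab = ihc_hed[:, :, 2]

# Crude positivity call per pixel; real segmentation (CellViT, region growing)
# would replace these illustrative thresholds.
nuclei_mask = hema > hema.mean() + hema.std()
dab_positive = dab > dab.mean() + dab.std()

positive_fraction = (nuclei_mask & dab_positive).sum() / max(nuclei_mask.sum(), 1)
print(f"DAB-positive fraction within nuclear mask: {positive_fraction:.2%}")
print("mean DAB optical density in positive area:", float(dab[dab_positive].mean()))
```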
Study on Machine Learning-based Prediction of Compressive Strength of Concrete with Different Waste Glass Powder Contents
15
Authors: YU Daidong, MA Yuwei, LI Gang, WANG Aiqin, HUANG Wei, WANG Jingchao 《材料导报》 Peking University Core Journal, 2026, No. 6, pp. 111-125 (15 pages)
The application and promotion of waste glass powder concrete (WGPC) can significantly alleviate the pressure of concrete material scarcity and environmental pollution. Compressive strength (CS) is a critical parameter for evaluating the efficacy of WGPC. Unlike conventional testing methods, machine learning techniques offer precise and reliable predictions of concrete's compressive strength, especially its long-term mechanical properties. In this work, four models, namely Multiple Linear Regression (MLR), Back Propagation Neural Network (BPNN), Support Vector Regression (SVR), and Random Forest Regression (RFR), were employed. Furthermore, the particle swarm optimization (PSO) algorithm and cross-validation techniques were applied to fine-tune the model parameters, striving for peak prediction performance. The results indicated that the optimized models generally exhibit enhanced predictive accuracy compared to their basic counterparts. Notably, the PSO-RFR model excels among all evaluated models, showcasing superior performance on the testing dataset: it achieves a coefficient of determination (R²) of 0.9231, a mean absolute error (MAE) of 2.1073, and a root mean square error (RMSE) of 3.6903. When compared to experimental results, the PSO-RFR and PSO-BPNN models demonstrate exceptional predictive accuracy. Notably, the PSO-BPNN model exhibits the closest R² values between its training and test sets, and this close alignment reflects its superior generalization ability for unseen data. The findings present an efficient method for predicting concrete's compressive strength, contributing to the sustainable development of concrete materials and providing theoretical support for their research and application.
Keywords: Waste glass powder concrete; Compressive strength; Machine learning; Particle swarm optimization algorithm; Visualization
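A minimal hand-coded particle swarm sketch of the PSO-plus-cross-validation tuning idea, not the study's setup: two random-forest hyperparameters are searched against cross-validated R² on synthetic regression data standing in for the WGPC mix dataset.

```python
# Hedged sketch: a minimal particle swarm tunes two random-forest hyperparameters
# (number of trees, max depth) against cross-validated R^2 on synthetic mix data.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=300, n_features=8, noise=5.0, random_state=5)
rng = np.random.default_rng(5)

def fitness(p):
    n_trees, depth = int(round(p[0])), int(round(p[1]))
    model = RandomForestRegressor(n_estimators=n_trees, max_depth=depth, random_state=5)
    return cross_val_score(model, X, y, cv=3, scoring="r2").mean()

lo, hi = np.array([20, 2]), np.array([300, 20])
pos = rng.uniform(lo, hi, size=(8, 2))          # 8 particles, 2 hyperparameters
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(10):                              # a few PSO iterations
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([fitness(p) for p in pos])
    better = vals > pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmax()].copy()

print(f"best CV R^2={pbest_val.max():.3f} with n_estimators={int(gbest[0])}, "
      f"max_depth={int(gbest[1])}")
```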
Multi-mode adaptive optimal combination forecasting of photovoltaic power based on Q-Learning
16
Authors: 隗知初, 杨苹, 周钱雨凡, 陈文皓, 万思洋, 崔嘉雁 《电力工程技术》 Peking University Core Journal, 2026, No. 1, pp. 115-124, 163 (11 pages)
To address the strong volatility and high randomness of photovoltaic power series, this paper proposes a Q-Learning-based multi-mode adaptive optimal combination forecasting model for photovoltaic power. First, variational mode decomposition tuned by the whale optimization algorithm is used to decompose the original photovoltaic power series into different sub-modes, and an ensemble feature selection model identifies the meteorological factors to which each sub-mode series is most sensitive. Then, four base forecasting models are built: a back-propagation neural network, a bidirectional long short-term memory network, a gated recurrent unit network, and a temporal convolutional network. Since different models differ in their ability to predict sub-series with different frequency characteristics, the Q-Learning algorithm is used to adaptively select the optimal combination of base models for each mode. Finally, the predictions of the different sub-modes are superimposed and reconstructed to obtain the final forecast, which is validated on a high-resolution photovoltaic meteorology and power dataset. The results show that, compared with single models, the proposed model reduces the mean absolute error of prediction by 16.18% and the mean squared error by 17.00%.
Keywords: Whale optimization algorithm; Variational mode decomposition; Q-learning; Power forecasting; Combination model; Photovoltaic power generation
Empirical tropospheric zenith wet delay models with strong generalization capability based on a robust machine learning fusion algorithm
17
Authors: Jiahao Zhang, Qin Liang, Yunqing Huang 《Geodesy and Geodynamics》 2026, No. 2, pp. 211-224 (14 pages)
Tropospheric zenith wet delay (ZWD) plays a vital role in the analysis of space geodetic observations. In recent years, machine learning methods have been increasingly applied to improve the accuracy of ZWD calculations. However, a single machine learning model has limited generalization capability. To address these limitations, this study introduces a novel machine learning fusion (MLF) algorithm with stronger generalization capability to enhance ZWD modeling and prediction accuracy. The MLF algorithm uses a two-layer structure integrating extra trees (ET), backpropagation neural network (BPNN), and linear regression models. By comparing the root mean square error (RMSE) of these models, we found that both the ET-based and MLF-based models outperform the RF-based and BPNN-based models in terms of internal and external accuracy, for both surface meteorological data-based and blind models. The improvement in external accuracy is particularly significant for the blind models. Our results show that the MLF (with an RMSE of 3.93 cm) and ET (3.99 cm) models outperform the traditional GPT3 model (4.07 cm), while the RF (4.21 cm) and BPNN (4.14 cm) models have worse external accuracy than the GPT3 model. It is worth noting that the BPNN suffered from overfitting during the external accuracy tests, which was avoided by the MLF. In summary, regardless of the availability of surface meteorological data, the MLF-based empirical models demonstrate superior internal and external accuracy compared to the other models tested in this study.
Keywords: Tropospheric zenith wet delay; Machine learning; Extra trees; Machine learning fusion algorithm; Empirical models
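A minimal scikit-learn sketch of the two-layer fusion structure named in the abstract, not the authors' implementation: extra trees and a BPNN-style MLP form the base layer, and a linear-regression meta-learner combines them. The regression data are synthetic stand-ins for ZWD samples.

```python
# Hedged sketch of the two-layer fusion structure: extra trees and an MLP
# (BPNN-style) base layer combined by a linear-regression meta-learner.
from sklearn.datasets import make_regression
from sklearn.ensemble import ExtraTreesRegressor, StackingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for ZWD samples with meteorological / location predictors.
X, y = make_regression(n_samples=2000, n_features=6, noise=2.0, random_state=6)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=6)

mlf = StackingRegressor(
    estimators=[
        ("et", ExtraTreesRegressor(n_estimators=200, random_state=6)),
        ("bpnn", MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000,
                              random_state=6)),
    ],
    final_estimator=LinearRegression(),
)
mlf.fit(X_tr, y_tr)
rmse = mean_squared_error(y_te, mlf.predict(X_te)) ** 0.5
print(f"fusion model RMSE on held-out data: {rmse:.2f}")
```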
Development and validation of machine learning-based in-hospital mortality predictive models for acute aortic syndrome in emergency departments
18
Authors: Yuanwei Fu, Yilan Yang, Hua Zhang, Daidai Wang, Qiangrong Zhai, Lanfang Du, Nijiati Muyesai, Yanxia Gao, Qingbian Ma 《World Journal of Emergency Medicine》 2026, No. 1, pp. 43-49 (7 pages)
BACKGROUND: This study aims to develop and validate a machine learning-based in-hospital mortality predictive model for acute aortic syndrome (AAS) in the emergency department (ED) and to derive a simplified version suitable for rapid clinical application. METHODS: In this multi-center retrospective cohort study, AAS patient data from three hospitals were analyzed. The modeling cohort included data from the First Affiliated Hospital of Zhengzhou University and the People's Hospital of Xinjiang Uygur Autonomous Region, with Peking University Third Hospital data serving as the external test set. Four machine learning algorithms, namely logistic regression (LR), multilayer perceptron (MLP), Gaussian naive Bayes (GNB), and random forest (RF), were used to develop predictive models based on 34 early-accessible clinical variables. A simplified model was then derived from five key variables (Stanford type, pericardial effusion, asymmetric peripheral arterial pulsation, decreased bowel sounds, and dyspnea) selected via Least Absolute Shrinkage and Selection Operator (LASSO) regression to improve ED applicability. RESULTS: A total of 929 patients were included in the modeling cohort, and 210 were included in the external test set. The four machine learning models based on 34 clinical variables achieved internal and external validation AUCs of 0.85-0.90 and 0.73-0.85, respectively. The simplified model incorporating five key variables demonstrated internal and external validation AUCs of 0.71-0.86 and 0.75-0.78, respectively. Both models showed robust calibration and predictive stability across datasets. CONCLUSION: Both kinds of models, built with machine learning tools, showed sound predictive performance and extrapolation ability.
Keywords: Emergency department; Acute aortic syndrome; Mortality; Predictive model; Machine learning algorithms
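A minimal sketch of the simplification step described in the abstract, not the study's code or variables: an L1-penalized (LASSO-style) logistic regression selects a small subset of predictors, then a plain logistic model is refit on that subset and scored by AUC. The 34 synthetic features are stand-ins, and the regularization strength is an assumption.

```python
# Hedged sketch of the simplification step: L1 (LASSO-style) logistic regression
# selects a small variable subset, then a plain logistic model is refit on it.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for 34 early-accessible clinical variables and in-hospital death.
X, y = make_classification(n_samples=900, n_features=34, n_informative=6,
                           weights=[0.85, 0.15], random_state=7)
X = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=7)

lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.2)
lasso.fit(X_tr, y_tr)
selected = np.flatnonzero(lasso.coef_[0])          # variables kept by the L1 penalty
print("variables retained by L1 selection:", selected)

simple = LogisticRegression(max_iter=1000).fit(X_tr[:, selected], y_tr)
auc = roc_auc_score(y_te, simple.predict_proba(X_te[:, selected])[:, 1])
print(f"simplified-model AUC: {auc:.2f}")
```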
Synergistic machine learning and DFT screening strategy: Accelerating discovery of efficient perovskite passivators
19
Authors: Jianghao Liu, Hongyan Lv, Pengyang Wang, Guofu Hou, Ying Zhao, Xiaodan Zhang, Qian Huang 《Journal of Energy Chemistry》 2026, No. 1, pp. 56-63, I0003 (9 pages)
Efficient surface passivation is critical for achieving high-performance perovskite solar cells (PSCs), yet the discovery of optimal passivators remains a time-consuming, trial-and-error process. Here, we report a synergistic machine learning (ML) and density functional theory (DFT) approach that enables predictive and rapid identification of effective passivation materials. By training an XGBoost model (91.3% accuracy) with DFT-derived molecular descriptors and activity calculations, we identify 2-(4-aminophenyl)-3H-benzimidazol-5-amine (APBIA) as a promising passivator. Experimental validation demonstrates that APBIA effectively removes surface impurities and passivates defects within perovskite films, leading to a significant increase in power conversion efficiency (PCE) from 22.48% to 25.55% (certified as 25.02%). This ML-DFT framework provides a generalizable pathway for accelerating the development of advanced functional materials for photovoltaic applications.
Keywords: Perovskite solar cells; Machine learning (ML); Density functional theory (DFT); Passivators; Organic molecule
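A minimal sketch of the screening classifier only, not the paper's descriptors or labels: an XGBoost classifier is trained on invented DFT-style molecular descriptors, and unseen candidates are ranked by predicted probability of being an effective passivator. Requires the xgboost package; every descriptor name and the label rule are assumptions.

```python
# Hedged sketch of the screening step: an XGBoost classifier trained on
# DFT-derived molecular descriptors flags candidate passivators (synthetic data).
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(8)
# Hypothetical descriptors: adsorption energy, dipole moment, HOMO-LUMO gap, N-donor count.
X = np.column_stack([rng.normal(-1.0, 0.5, 500), rng.uniform(0, 8, 500),
                     rng.uniform(1, 5, 500), rng.integers(0, 5, 500)])
y = ((X[:, 0] < -1.0) & (X[:, 1] > 3)).astype(int)   # stand-in "effective passivator" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=8)
clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1,
                    eval_metric="logloss")
clf.fit(X_tr, y_tr)
print("screening accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 3))

# Rank unseen candidates by predicted probability of being an effective passivator.
candidates = rng.normal(size=(5, 4)) * [0.5, 2, 1, 1] + [-1.0, 4, 3, 2]
print("candidate scores:", clf.predict_proba(candidates)[:, 1].round(2))
```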
A dual attention-based deep learning model for lithology identification while drilling
20
Authors: Jie Chen, Zhen Gui, Yichao Rui, Xusheng Zhao, Xiaokang Pan, Qingfeng Wang, Yuanyuan Pu, Zheng Li, Maoyi Liu 《Journal of Rock Mechanics and Geotechnical Engineering》 2026, No. 2, pp. 1177-1192 (16 pages)
Lithology identification while drilling technology can obtain rock information in real time. However, traditional lithology identification models often face limitations in feature extraction and adaptability to complex geological conditions, limiting their accuracy in challenging environments. To address these challenges, a deep learning model for lithology identification while drilling is proposed. The proposed model introduces a dual attention mechanism into the long short-term memory (LSTM) network, effectively enhancing the ability to capture spatial and channel dimension information. Subsequently, the crayfish optimization algorithm (COA) is applied to optimize the model's network structure, thereby enhancing its lithology identification capability. Laboratory test results demonstrate that the proposed model achieves 97.15% accuracy on the testing set, significantly outperforming the traditional support vector machine (SVM) method (81.77%). Field tests under actual drilling conditions demonstrate an average accuracy of 91.96% for the proposed model, representing a 14.31% improvement over the LSTM model alone. The proposed model demonstrates robust adaptability and generalization ability across diverse operational scenarios. This research offers reliable technical support for lithology identification while drilling.
Keywords: Lithology identification while drilling; Deep learning; Dual attention mechanism; Metaheuristic algorithm; Field applications
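A minimal PyTorch sketch of a dual-attention LSTM of the general kind described above, not the authors' architecture: a channel-attention gate weights the input features and a temporal-attention layer pools the LSTM states before classification. The COA hyperparameter search is not reproduced, and the drilling-signal inputs are hypothetical.

```python
# Hedged sketch of a dual-attention LSTM (PyTorch): channel attention gates the
# drilling features, temporal attention pools the LSTM states; COA is not reproduced.
import torch
import torch.nn as nn

class DualAttentionLSTM(nn.Module):
    def __init__(self, n_features=6, hidden=64, n_classes=5):
        super().__init__()
        self.channel_att = nn.Sequential(nn.Linear(n_features, n_features), nn.Sigmoid())
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.temporal_att = nn.Linear(hidden, 1)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                        # x: (batch, time, features)
        x = x * self.channel_att(x)              # channel (feature) attention
        h, _ = self.lstm(x)                      # h: (batch, time, hidden)
        w = torch.softmax(self.temporal_att(h), dim=1)   # temporal attention weights
        context = (w * h).sum(dim=1)             # attention-pooled representation
        return self.head(context)

# Hypothetical while-drilling signals (e.g. thrust, torque, vibration channels).
x = torch.randn(32, 50, 6)
y = torch.randint(0, 5, (32,))                   # lithology class labels
model = DualAttentionLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(3):                               # a few illustrative training steps
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    opt.step()
print("training loss:", float(loss))
```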