Journal Articles
230 articles found
1. Recent innovation in benchmark rates (BMR): evidence from influential factors on Turkish Lira Overnight Reference Interest Rate with machine learning algorithms (Cited: 2)
Authors: Özer Depren, Mustafa Tevfik Kartal, Serpil Kılıç Depren. Financial Innovation, 2021, No. 1, pp. 942-961 (20 pages)
Some countries have announced national benchmark rates, while others have been working on the recent trend in which the London Interbank Offered Rate will be retired at the end of 2021. Considering that Turkey announced the Turkish Lira Overnight Reference Interest Rate (TLREF), this study examines the determinants of TLREF. In this context, three global determinants, five country-level macroeconomic determinants, and the COVID-19 pandemic are considered, using daily data between December 28, 2018, and December 31, 2020, with machine learning algorithms and Ordinary Least Squares. The empirical results show that (1) the most significant determinant is the amount of securities bought by central banks; (2) country-level macroeconomic factors have a higher impact, whereas global factors are less important and the pandemic does not have a significant effect; (3) Random Forest is the most accurate prediction model. Taking action in light of the study's findings can help support economic growth by achieving low benchmark rates.
Keywords: benchmark rate; determinants; machine learning algorithms; Turkey
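As a hedged illustration of the modeling step this abstract describes — ranking determinants of a benchmark rate with a Random Forest — the sketch below fits sklearn's RandomForestRegressor on synthetic daily data. All column names and coefficients are placeholder assumptions, not the paper's variables.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Hypothetical daily determinants; names are placeholders, not the paper's list
cols = ["cb_security_purchases", "policy_rate", "fx_rate", "vix", "cds_spread"]
X = pd.DataFrame(rng.normal(size=(500, 5)), columns=cols)
y = (0.8 * X["cb_security_purchases"] + 0.2 * X["policy_rate"]
     + rng.normal(scale=0.1, size=500))        # synthetic TLREF stand-in

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
for name, imp in sorted(zip(cols, rf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")                # ranks the dominant determinant first
```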
2. Adaptive Learning Rate Optimization BP Algorithm with Logarithmic Objective Function
Authors: 李春雨, 盛昭瀚. Journal of Southeast University (English Edition), EI CAS, 1997, No. 1, pp. 47-51 (5 pages)
This paper presents an improved BP algorithm. The approach reduces the amount of computation by using a logarithmic objective function. The learning rate μ(k) per iteration is determined by a dynamic optimization method to accelerate the convergence rate. Since the determination of the learning rate in the proposed BP algorithm uses only the first-order derivatives already obtained in the standard BP algorithm (SBP), the computational and storage burden is comparable to that of SBP, while the convergence rate is remarkably accelerated. Computer simulations demonstrate the effectiveness of the proposed algorithm.
Keywords: BP algorithm; adaptive learning rate optimization; fault diagnosis; logarithmic objective function
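The paper's dynamic-optimization rule for μ(k) is not given in the abstract; the sketch below shows a generic adaptive-learning-rate loop of the same flavor (grow the rate while the loss falls, shrink it on overshoot) on a toy quadratic objective standing in for the BP loss.

```python
import numpy as np

def f(w):                 # toy quadratic objective standing in for the BP loss
    return 0.5 * w @ w

def grad(w):
    return w

w = np.array([3.0, -2.0])
mu = 0.1
prev_loss = f(w)
for k in range(100):
    w_new = w - mu * grad(w)
    loss = f(w_new)
    # crude adaptive rule: grow mu while the loss falls, shrink on overshoot
    if loss < prev_loss:
        mu *= 1.05
        w, prev_loss = w_new, loss
    else:
        mu *= 0.5         # reject the step and retry with a smaller rate
print(w, mu)
```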
3. An A* Pre-Guided Ant Colony Path Planning Algorithm Fused with Q-learning
Authors: 殷笑天, 杨丽英, 刘干, 何玉庆. 《传感器与微系统》 (PKU Core), 2025, No. 8, pp. 143-147, 153 (6 pages)
To address the tendency of the traditional ant colony optimization (ACO) algorithm to fall into local optima, converge slowly, and handle obstacles poorly in complex-environment path planning, an A*-pre-guided ant colony path planning algorithm fusing Q-learning with a hierarchical pheromone mechanism (QHACO) is proposed. First, the A* algorithm pre-allocates global pheromone, guiding initial paths to quickly approach the optimum. Second, a global-local two-layer pheromone cooperation model is built, in which the global layer preserves historical elite-path experience and the local layer responds to environmental changes in real time. Finally, a Q-learning directional reward function is introduced to optimize the decision process, applying reinforced guidance signals at path turning points and obstacle edges. Experiments show that on a 25×24 medium-complexity map, QHACO shortens the optimal path by 22.7% and improves convergence speed by 98.7% compared with traditional ACO; in a 50×50 high-density obstacle environment, the optimal path length improves by 16.9% and the number of iterations falls by 95.1%. Compared with traditional ACO, QHACO delivers significant gains in optimality, convergence speed, and obstacle avoidance, showing strong environmental adaptability.
Keywords: ant colony optimization algorithm; path planning; local optimum; convergence speed; Q-learning; hierarchical pheromone; A* algorithm
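A minimal sketch of the A* pre-guidance idea described above: deposit extra pheromone along a precomputed A* path before the colony starts, so early ants favor a near-optimal corridor. Grid size, constants, and the stand-in path are assumptions, not QHACO's settings.

```python
import numpy as np

H, W = 25, 24
tau0, boost = 0.1, 5.0                  # base pheromone and A* bonus (assumed)
pheromone = np.full((H, W), tau0)

astar_path = [(0, 0), (1, 1), (2, 2), (3, 2)]   # stand-in for a real A* result
for (r, c) in astar_path:
    pheromone[r, c] += boost            # pre-allocate pheromone on the path

# Ants then sample moves proportionally to pheromone, as in standard ACO:
def move_probs(cands):
    tau = np.array([pheromone[r, c] for (r, c) in cands])
    return tau / tau.sum()

print(move_probs([(1, 1), (0, 1), (1, 0)]))   # the A*-seeded cell dominates
```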
4. Fast Learning in Spiking Neural Networks by Learning Rate Adaptation (Cited: 2)
Authors: 方慧娟, 罗继亮, 王飞. Chinese Journal of Chemical Engineering, SCIE EI CAS CSCD, 2012, No. 6, pp. 1219-1224 (6 pages)
For accelerating supervised learning by the SpikeProp algorithm with the temporal coding paradigm in spiking neural networks (SNNs), three learning rate adaptation methods (heuristic rule, delta-delta rule, and delta-bar-delta rule), which are used to speed up training in artificial neural networks, are used to develop training algorithms for feedforward SNNs. The performance of these algorithms is investigated in four experiments: the classical XOR (exclusive or) problem, the Iris dataset, fault diagnosis in the Tennessee Eastman process, and Poisson trains of discrete spikes. The results demonstrate that all three learning rate adaptation methods speed up convergence of SNNs compared with the original SpikeProp algorithm. Furthermore, if the adaptive learning rate is used in combination with the momentum term, the two modifications balance each other in a beneficial way to achieve rapid and steady convergence. Of the three learning rate adaptation methods, the delta-bar-delta rule performs best: the delta-bar-delta method with momentum has the fastest convergence rate, the most stable training process, and the highest network learning accuracy. The proposed algorithms are simple and efficient, and consequently valuable for practical applications of SNNs.
Keywords: spiking neural networks; learning algorithm; learning rate adaptation; Tennessee Eastman process
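For reference, a sketch of the delta-bar-delta rule the abstract reports as the best performer: each weight keeps its own learning rate, increased additively when the current gradient agrees in sign with a decayed gradient average and cut multiplicatively when it disagrees. Constants are typical textbook values, not the paper's.

```python
import numpy as np

def delta_bar_delta_step(w, g, lr, g_bar, kappa=0.01, phi=0.5, theta=0.7):
    lr = np.where(g * g_bar > 0, lr + kappa, lr)   # same sign: grow additively
    lr = np.where(g * g_bar < 0, lr * phi, lr)     # sign flip: shrink fast
    w = w - lr * g                                 # per-weight gradient step
    g_bar = (1 - theta) * g + theta * g_bar        # decayed gradient average
    return w, lr, g_bar

w = np.array([2.0, -3.0]); lr = np.full(2, 0.05); g_bar = np.zeros(2)
for _ in range(50):
    g = w                                          # gradient of 0.5*||w||^2
    w, lr, g_bar = delta_bar_delta_step(w, g, lr, g_bar)
print(w, lr)
```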
5. Improved IChOA-Based Reinforcement Learning for Secrecy Rate Optimization in Smart Grid Communications
Authors: Mehrdad Shoeibi, Mohammad Mehdi Sharifi Nevisi, Sarvenaz Sadat Khatami, Diego Martín, Sepehr Soltani, Sina Aghakhani. Computers, Materials & Continua, SCIE EI, 2024, No. 11, pp. 2819-2843 (25 pages)
In the evolving landscape of the smart grid (SG), the integration of non-orthogonal multiple access (NOMA) technology has emerged as a pivotal strategy for enhancing spectral efficiency and energy management. However, the open nature of wireless channels in SG raises significant concerns regarding the confidentiality of critical control messages, especially when broadcast from a neighborhood gateway (NG) to smart meters (SMs). This paper introduces a novel approach based on reinforcement learning (RL) to fortify secrecy performance. Motivated by the need for efficient and effective training of the fully connected layers in the RL network, we employ an improved chimp optimization algorithm (IChOA) to update the parameters of the RL. By integrating the IChOA into the training process, the RL agent is expected to learn more robust policies faster and with better convergence properties than with standard optimization algorithms. This can lead to improved performance in complex SG environments, where the agent must make decisions that enhance the security and efficiency of the network. We compared the performance of our proposed method (IChOA-RL) with several state-of-the-art machine learning (ML) algorithms, including recurrent neural network (RNN), long short-term memory (LSTM), K-nearest neighbors (KNN), support vector machine (SVM), improved crow search algorithm (I-CSA), and grey wolf optimizer (GWO). Extensive simulations demonstrate the efficacy of our approach compared to related works, showing significant improvements in secrecy capacity rates under various network conditions. The proposed IChOA-RL exhibits superior performance in various aspects, including the scalability of the NOMA communication system, accuracy, coefficient of determination (R2), root mean square error (RMSE), and convergence trend. For our dataset, the IChOA-RL architecture achieved a coefficient of determination of 95.77% and an accuracy of 97.41% on the validation dataset, accompanied by the lowest RMSE (0.95), indicating very precise predictions with minimal error.
Keywords: smart grid communication; secrecy rate optimization; reinforcement learning; improved chimp optimization algorithm
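The abstract's key move is training the RL network's fully connected layers with a metaheuristic (IChOA) rather than gradient descent. As a generic stand-in — not IChOA itself — the sketch below optimizes a small weight vector with a population-based random search against a toy reward; sizes and the reward are assumptions, not the paper's SG/NOMA environment.

```python
import numpy as np

rng = np.random.default_rng(0)

def reward(w):                      # placeholder for the secrecy-rate reward
    return -np.sum((w - 0.5) ** 2)  # toy objective peaking at w = 0.5

pop = rng.normal(size=(20, 8))      # 20 candidate weight vectors, 8 params
for gen in range(100):
    scores = np.array([reward(w) for w in pop])
    elite = pop[np.argsort(scores)[-5:]]          # keep the best 5
    # resample the population around the elites (mutation step)
    pop = np.concatenate([elite,
                          elite[rng.integers(0, 5, 15)]
                          + rng.normal(scale=0.1, size=(15, 8))])
best = pop[np.argmax([reward(w) for w in pop])]
print(best.round(2))
```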
6. Accurate Machine Learning Predictions of Sci-Fi Film Performance
Authors: Amjed Al Fahoum, Tahani A. Ghobon. Journal of New Media, 2023, No. 1, pp. 1-22 (22 pages)
A groundbreaking method is introduced to leverage machine learning algorithms for predicting the success rates of science fiction films. In the film industry, extensive research and accurate forecasting are vital to anticipating a movie's success prior to its debut. Our study aims to harness available data to estimate a film's early success rate. The internet offers a wealth of movie-related information, including actors, directors, critic reviews, user reviews, ratings, writers, budgets, genres, Facebook likes, YouTube views for movie trailers, and Twitter followers. The first few weeks of a film's release are crucial in determining its fate, and online reviews and film evaluations profoundly impact its opening-week earnings. Hence, our research employs advanced supervised machine learning techniques to predict a film's success. The Internet Movie Database (IMDb) serves as a comprehensive data repository for nearly all movies. A robust predictive classification approach is developed by employing various machine learning algorithms, such as fine, medium, coarse, cosine, cubic, and weighted KNN. To determine the best model, the performance of each feature was evaluated based on composite metrics. Moreover, the significant influence of social media platforms, including Twitter, Instagram, and Facebook, on shaping individuals' opinions was recognized. A hybrid success-rating prediction model is obtained by integrating the proposed prediction models with sentiment analysis from available platforms. The findings demonstrate that the chosen algorithms offer more precise estimations, faster execution times, and higher accuracy rates than previous research. By integrating the features of existing prediction models with social media sentiment analysis, the proposed approach provides a remarkably accurate prediction of a movie's success, helping producers and marketers anticipate a film's performance before release and tailor their promotional activities accordingly. Furthermore, this work lays the foundation for developing even more accurate prediction models, given the ever-increasing significance of social media platforms in shaping opinions. In conclusion, this study showcases the potential of machine learning algorithms in predicting the success rate of science fiction films, opening new avenues for the film industry.
Keywords: film success rate prediction; optimized feature selection; robust machine learning; nearest neighbors algorithms
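The fine/medium/coarse/cosine/cubic/weighted KNN variants named above are MATLAB-style presets; a rough sklearn equivalent varies k, the distance metric, and the weighting, as sketched below on synthetic data. The preset-to-parameter mapping is an assumption, not the authors' exact configuration.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=400, n_features=12, random_state=0)

variants = {
    "fine":     KNeighborsClassifier(n_neighbors=1),
    "medium":   KNeighborsClassifier(n_neighbors=10),
    "coarse":   KNeighborsClassifier(n_neighbors=100),
    "cosine":   KNeighborsClassifier(n_neighbors=10, metric="cosine"),
    "cubic":    KNeighborsClassifier(n_neighbors=10, p=3),  # Minkowski p=3
    "weighted": KNeighborsClassifier(n_neighbors=10, weights="distance"),
}
for name, clf in variants.items():
    print(name, cross_val_score(clf, X, y, cv=5).mean().round(3))
```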
7. Application of Random Search Methods in the Determination of Learning Rate for Training Container Dwell Time Data Using Artificial Neural Networks
Authors: Justice Awosonviri Akodia, Clement K. Dzidonu, David King Boison, Philip Kisembe. Intelligent Control and Automation, 2024, No. 4, pp. 109-124 (16 pages)
Purpose: This study aimed to enhance the prediction of container dwell time, a crucial factor for optimizing port operations, resource allocation, and supply chain efficiency. Determining an optimal learning rate for training Artificial Neural Networks (ANNs) has remained a challenging task due to the diverse sizes, complexity, and types of data involved. Design/Method/Approach: This research used the RandomizedSearchCV algorithm, a random search approach, to bridge this knowledge gap. The algorithm was applied to container dwell time data from the TOS system of the Port of Tema, which included 307,594 container records from 2014 to 2022. Findings: The RandomizedSearchCV method outperformed standard training methods both in reducing training time and in improving prediction accuracy, highlighting the significant role of the constant learning rate as a hyperparameter. Research Limitations and Implications: Although the study provides promising outcomes, the results are limited to data extracted from the Port of Tema and may differ in other contexts; further research is needed to generalize these findings across various port systems. Originality/Value: This research underscores the potential of RandomizedSearchCV as a valuable tool for optimizing ANN training in container dwell time prediction. It also accentuates the significance of automated learning rate selection, offering novel insights into the optimization of container dwell time prediction, with implications for improving port efficiency and supply chain operations.
Keywords: container dwell time prediction; artificial neural networks (ANNs); learning rate optimization; RandomizedSearchCV algorithm; port operations efficiency
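A minimal sketch of the study's approach: letting RandomizedSearchCV sample learning rates (plus a few other hyperparameters) for a neural-network regressor. The synthetic data and search ranges are assumptions; the Port of Tema TOS features are not reproduced.

```python
from scipy.stats import loguniform
from sklearn.datasets import make_regression
from sklearn.model_selection import RandomizedSearchCV
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=2000, n_features=10, noise=5.0,
                       random_state=0)  # stand-in for dwell-time records

search = RandomizedSearchCV(
    MLPRegressor(max_iter=500, random_state=0),
    param_distributions={
        "learning_rate_init": loguniform(1e-4, 1e-1),   # the key hyperparameter
        "hidden_layer_sizes": [(32,), (64,), (64, 32)],
        "alpha": loguniform(1e-6, 1e-2),
    },
    n_iter=20, cv=3, scoring="neg_mean_absolute_error", random_state=0)
search.fit(X, y)
print(search.best_params_)
```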
8. Ensemble prediction modeling of flotation recovery based on machine learning (Cited: 1)
Authors: Guichun He, Mengfei Liu, Hongyu Zhao, Kaiqi Huang. International Journal of Mining Science and Technology, SCIE EI CAS CSCD, 2024, No. 12, pp. 1727-1740 (14 pages)
With the rise of artificial intelligence (AI) in mineral processing, predicting flotation indexes has attracted significant research attention. Nevertheless, current prediction models suffer from low accuracy and high prediction errors. Therefore, this paper uses a two-step procedure. First, outliers are processed using the box-plot method and a filtering algorithm. Then, decision tree (DT), support vector regression (SVR), random forest (RF), and the bagging, boosting, and stacking integration algorithms are employed to construct a flotation recovery prediction model. Extensive experiments compared the prediction accuracy of six modeling methods on flotation recovery and examined the impact of different base-model combinations on the stacking model's accuracy. In addition, field data verified the model's effectiveness. This study demonstrates that the stacking ensemble approach, which uses ten variables to predict flotation recovery, yields better predictions than the bagging ensemble approach and single models, achieving MAE, RMSE, R2, and MRE scores of 0.929, 1.370, 0.843, and 1.229%, respectively. The hit rates within error ranges of ±2% and ±4% are 82.4% and 94.6%. Consequently, the predictions are relatively precise and offer significant value in actual production.
Keywords: machine learning; stacking; bagging; flotation recovery rate; filtering algorithm
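A minimal sketch of the stacking ensemble the abstract finds best — DT, SVR, and RF base learners combined by a meta-learner — using sklearn's StackingRegressor. The synthetic data and the ridge meta-model are assumptions; the paper's ten flotation variables and outlier preprocessing are omitted.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=500, n_features=10, noise=10.0,
                       random_state=0)   # stand-in for ten process variables

stack = StackingRegressor(
    estimators=[("dt", DecisionTreeRegressor(max_depth=6)),
                ("svr", SVR(C=10.0)),
                ("rf", RandomForestRegressor(n_estimators=200, random_state=0))],
    final_estimator=Ridge())             # meta-learner over base predictions
print(cross_val_score(stack, X, y, cv=5, scoring="r2").mean().round(3))
```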
9. Online Regularized Generalized Gradient Classification Algorithms
Authors: Leilei Zhang, Baohui Sheng, Jianli Wang. Analysis in Theory and Applications, 2010, No. 3, pp. 278-300 (23 pages)
This paper considers online classification learning algorithms for regularized classification schemes with generalized gradient. A novel capacity-independent approach is presented. It verifies the strong convergence of the algorithm and yields satisfactory convergence rates for polynomially decaying step sizes. Compared with gradient schemes, this algorithm needs fewer additional assumptions on the loss function and derives a stronger result with respect to the choice of step sizes and regularization parameters.
Keywords: online learning algorithm; reproducing kernel Hilbert space; generalized gradient; Clarke's directional derivative; learning rate
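For orientation, a generic online regularized update in an RKHS of the kind such analyses study; the symbols are generic (K the kernel, ℓ the loss, λ the regularization parameter, η_t a polynomially decaying step size), not the paper's exact scheme.

```latex
f_{t+1} = f_t - \eta_t \left( \partial \ell\big(y_t, f_t(x_t)\big)\, K_{x_t} + \lambda f_t \right),
\qquad \eta_t = \eta_1 \, t^{-\theta}, \quad \theta \in (0, 1].
```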
10. Research on three-step accelerated gradient algorithm in deep learning
Authors: Yongqiang Lian, Yincai Tang, Shirong Zhou. Statistical Theory and Related Fields, 2022, No. 1, pp. 40-57 (18 pages)
The gradient descent (GD) algorithm is the most widely used optimisation method for training machine learning and deep learning models. In this paper, based on GD, Polyak's momentum (PM), and Nesterov accelerated gradient (NAG), we give the convergence of the algorithms from an initial value to the optimal value of an objective function in simple quadratic form. Based on the convergence property of the quadratic function, the two sister sequences of NAG's iteration, and parallel tangent methods in neural networks, the three-step accelerated gradient (TAG) algorithm is proposed, which has three sequences rather than two sister sequences. To illustrate the performance of this algorithm, we compare it with the three other algorithms on a quadratic function, high-dimensional quadratic functions, and a nonquadratic function. We then combine the TAG algorithm with the backpropagation algorithm and the stochastic gradient descent algorithm in deep learning. To facilitate use of the proposed algorithms, we rewrite the R package 'neuralnet' and extend it to 'supneuralnet'; all the deep learning algorithms in this paper are included in the 'supneuralnet' package. Finally, we show that our algorithms are superior to other algorithms in four case studies.
Keywords: accelerated algorithm; backpropagation; deep learning; learning rate; momentum; stochastic gradient descent
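For background, here is the two-sequence Nesterov accelerated gradient (NAG) iteration that TAG extends to three sequences, on a toy quadratic; TAG itself is not reproduced here.

```python
import numpy as np

A = np.diag([1.0, 10.0])          # simple quadratic f(w) = 0.5 * w^T A w

def grad(w):
    return A @ w

w = np.array([5.0, 5.0])
v = np.zeros_like(w)              # velocity (the second "sister" sequence)
eta, beta = 0.05, 0.9
for _ in range(100):
    lookahead = w + beta * v      # evaluate the gradient at the lookahead point
    v = beta * v - eta * grad(lookahead)
    w = w + v
print(w)                          # converges toward the minimizer at the origin
```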
11. Application of Evolutionary Algorithm for Optimal Directional Overcurrent Relay Coordination
Authors: N. M. Stenane, K. A. Folly. Journal of Computer and Communications, 2014, No. 9, pp. 103-111 (9 pages)
In this paper, two evolutionary algorithms (EAs), an improved genetic algorithm (GA) and the population-based incremental learning (PBIL) algorithm, are applied to the optimal coordination of directional overcurrent relays in an interconnected power system network. The problem of coordinating directional overcurrent relays is formulated as an optimization problem that is solved via the improved GA and PBIL. The simulation results obtained using the improved GA are compared with those obtained using PBIL, and show that the improved GA proposed in this paper performs better than PBIL.
Keywords: evolutionary algorithms; GA; learning rate; optimal relay coordination; PBIL
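PBIL's defining step — and the reason "learning rate" appears among the keywords — is a probability-vector update that drifts toward the best sampled solution each generation. The toy below maximizes the number of ones (OneMax); the relay-coordination objective is not modeled.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bits, pop_size, lr = 20, 30, 0.1
p = np.full(n_bits, 0.5)                      # probability vector

for gen in range(200):
    pop = (rng.random((pop_size, n_bits)) < p).astype(int)
    best = pop[np.argmax(pop.sum(axis=1))]    # fittest sample (OneMax fitness)
    p = (1 - lr) * p + lr * best              # learning-rate-weighted update
print(p.round(2))                             # drifts toward all ones
```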
12. Improving the accuracy of heart disease diagnosis with an augmented back propagation algorithm
Author: 颜红梅. Journal of Chongqing University, CAS, 2003, No. 1, pp. 31-34 (4 pages)
A multilayer perceptron neural network system is established to support the diagnosis of the five most common heart diseases (coronary heart disease, rheumatic valvular heart disease, hypertension, chronic cor pulmonale, and congenital heart disease). A momentum term, an adaptive learning rate, a forgetting mechanism, and the conjugate gradient method are introduced to improve the basic BP algorithm, aiming to speed up its convergence and enhance diagnostic accuracy. A heart disease database consisting of 352 samples is used for training and testing, and the system's performance is assessed by cross-validation. As the basic BP algorithm is improved step by step, the convergence speed and classification accuracy of the network are enhanced, and the system shows great promise for supporting heart disease diagnosis.
Keywords: multilayer perceptron; back propagation algorithm; heart disease; momentum term; adaptive learning rate; forgetting mechanism; conjugate gradient method
13. Cloud Intrusion Behavior Detection for Sensor Networks Based on an Improved BP Neural Network
Authors: 原锦明, 耿小芬, 那崇正. 《控制工程》 (PKU Core), 2025, No. 11, pp. 2105-2112 (8 pages)
Cloud intrusion detection in sensor networks is easily affected by unbalanced energy consumption across information-exchanging nodes, which degrades the performance of some nodes and in turn reduces detection accuracy. To address this, a cloud intrusion behavior detection method for sensor networks based on an improved BP neural network is proposed. First, sparse projection data from the sensor-network cloud are collected with a sparse-projection data algorithm. Then, a sparse-representation basis learning method encodes the collected data, yielding spatio-temporally correlated features of the sensor-network cloud data. Finally, the neural network is improved through adaptive learning-rate adjustment and summation-accumulation, and the extracted features are fed to the network to perform cloud intrusion detection. Experiments show that the proposed method achieves a recognition rate above 96.7%, a detection time of only 34 ms, a mean fluctuation coefficient below 0.20, and peak CPU usage of only 14%, demonstrating good intrusion detection performance.
Keywords: sparse projection data; sensor network; cloud intrusion; detection algorithm; neural network; adaptive learning rate
14. Application of Multi-Objective Optimization Algorithms to the Coupled Optimization of Mechanical Specific Energy and Rate of Penetration
Authors: 刘伟吉, 张家辉, 祝效华. 《石油钻采工艺》 (PKU Core), 2025, No. 3, pp. 265-276 (12 pages)
Improving drilling efficiency is crucial for reducing cost and accelerating energy extraction. A multi-objective coupled optimization model taking mechanical specific energy (MSE) and rate of penetration (ROP) as objective functions is established, aiming to improve drilling efficiency through deep-learning prediction and intelligent-algorithm optimization. First, several deep learning architectures were evaluated, and a CNN-BiGRU-Attention model fusing a convolutional neural network (CNN), a bidirectional gated recurrent unit (BiGRU), and an attention mechanism was selected to predict MSE and ROP. The coupled multi-objective optimization model was then solved with three algorithms: the non-dominated sorting genetic algorithm II (NSGA-II), the strength Pareto evolutionary algorithm 2 (SPEA2), and the reference-vector-guided evolutionary algorithm (RVEA). With the minimum ROP constrained to 50%, 70%, and 90% of the original ROP at the current depth, as well as to the original ROP itself, the optimization performance of the three algorithms was compared, and RVEA performed best. To better match engineering practice, a torque constraint was further introduced with a corresponding deep learning model, examining how adjustments of rotary speed and weight on bit affect torque. Experimental results show that even with the torque constraint, RVEA still effectively optimizes MSE and ROP. The proposed method not only identifies optimal strategies for reducing MSE and increasing ROP under different ROP constraints, but also provides practical theoretical grounds and decision support for optimizing drilling parameters.
Keywords: mechanical specific energy; rate of penetration; multi-objective optimization; reference-vector-guided evolutionary algorithm; drilling optimization; deep learning; torque constraint
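A hedged sketch of the multi-objective step using NSGA-II in pymoo (the paper found RVEA best and used a CNN-BiGRU-Attention predictor as the objective); the analytic MSE/ROP surrogates and variable bounds below are toy stand-ins for that predictor.

```python
import numpy as np
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import ElementwiseProblem
from pymoo.optimize import minimize

class DrillingProblem(ElementwiseProblem):
    def __init__(self):
        # decision variables: weight on bit (kN), rotary speed (rpm) - assumed
        super().__init__(n_var=2, n_obj=2,
                         xl=np.array([50.0, 40.0]), xu=np.array([250.0, 180.0]))

    def _evaluate(self, x, out, *args, **kwargs):
        wob, rpm = x
        mse = 0.05 * wob + 400.0 / rpm          # toy MSE surrogate (minimize)
        rop = 0.002 * wob * rpm                 # toy ROP surrogate (maximize)
        out["F"] = [mse, -rop]                  # pymoo minimizes both entries

res = minimize(DrillingProblem(), NSGA2(pop_size=40), ("n_gen", 60), seed=1)
print(res.F[:5])                                # samples of the Pareto front
```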
15. A Dynamic Self-Organizing Book Classification Algorithm Based on Multi-Source Information Fusion Analysis
Authors: 窦淑庆, 刘思豆. 《现代电子技术》 (PKU Core), 2025, No. 11, pp. 169-173 (5 pages)
To improve the intelligence of library resource management and the precision of personalized services, this paper proposes a dynamic self-organizing classification algorithm for libraries based on deep learning and multi-source information fusion analysis. On top of a basic data perception and processing architecture, a deep learning algorithm rapidly analyzes and perceives the massive information contained in various data sources, and the perceived data are dynamically classified, enabling intelligent processing of large-scale data. Building on the deep learning algorithm, multi-source information fusion is introduced to effectively recognize and fuse multiple kinds of information, precisely capturing reader behavior and preferences and providing a technical solution for optimized management of library resources. Numerical experiments designed to verify the correctness and effectiveness of the method show a data classification accuracy of 99.10%, meeting the intelligent data management and classification needs of large libraries.
Keywords: library resource management; intelligence level; personalized services; deep learning; multi-source information fusion analysis; dynamic self-organizing classification algorithm; data classification accuracy
16. TBM Penetration Rate Prediction Based on an Interpretable NRBO-XGBoost and ABKDE Fusion Model
Authors: 杨腾杰, 高新强, 杨志国, 孔超, 董北毅, 李铁峰, 朱正国. 《河南科技大学学报(自然科学版)》 (PKU Core), 2025, No. 4, pp. 73-87, M0006-M0007 (17 pages)
Accurate and reliable prediction of tunnel boring machine (TBM) penetration rate is of significant engineering value for improving construction efficiency and safety. To overcome the limited accuracy of existing TBM penetration rate models and their insufficient treatment of construction uncertainty, an interpretable, machine-learning-based interval prediction method for TBM penetration rate is proposed. First, data from several domestic TBM tunnel projects were collected, and uniaxial compressive strength (UCS), rock mass integrity coefficient (Kv), thrust (TF), and cutterhead rotation speed (RPM) were selected as input features to build an extreme gradient boosting (XGBoost) point-prediction model jointly tuned by the Newton-Raphson-based optimizer (NRBO) and a cross-validation strategy; a SHAP interpretability framework was introduced to quantify each feature's contribution to the predictions. Adaptive-bandwidth kernel density estimation (ABKDE) then quantifies the uncertainty of the point predictions to produce interval predictions of penetration rate. Finally, the model was validated on the Kerman water conveyance tunnel project in Iran. Results show that, compared with XGBoost without NRBO, the NRBO-XGBoost model reduces mean squared error (MSE), mean absolute error (MAE), and mean absolute percentage error (MAPE) by 13.9%, 19.1%, and 0.7% respectively, while R2 increases by 0.0151. Feature importance ranks UCS (0.4156) > TF (0.1554) > RPM (0.1045) > Kv (0.0047), revealing rock strength as the dominant factor influencing penetration rate. The proposed model outperforms adaptive boosting (AdaBoost) and random forest (RF) in interval prediction, with prediction interval coverage probabilities (PICP) of 92.1%, 88.4%, and 90.2% respectively, indicating better uncertainty quantification. In the engineering case, the point-prediction R2 reaches 0.9676 and the prediction interval fully covers the measured values, confirming good engineering applicability.
Keywords: tunnel boring machine; penetration rate; interval prediction; fusion model; machine learning; NRBO algorithm; interpretability
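A simplified sketch of the interval-prediction idea: fit a boosted-tree point model, then convert held-out residuals into a predictive band via kernel density estimation. Fixed-bandwidth gaussian_kde stands in for the paper's adaptive-bandwidth ABKDE, NRBO tuning is omitted, and the synthetic features merely stand in for UCS, Kv, TF, and RPM.

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

X, y = make_regression(n_samples=1000, n_features=4, noise=8.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = XGBRegressor(n_estimators=300, learning_rate=0.05).fit(X_tr, y_tr)
resid = y_te - model.predict(X_te)            # held-out residuals

kde = gaussian_kde(resid)                     # fixed-bandwidth stand-in for ABKDE
samples = kde.resample(10000, seed=0).ravel()
lo, hi = np.percentile(samples, [5, 95])      # 90% residual band

for p in model.predict(X_te[:3]):
    print(f"point {p:8.2f}  interval [{p + lo:8.2f}, {p + hi:8.2f}]")
```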
17. Predicting Bend Erosion in Shale Gas Gathering Pipelines Using Latin Hypercube Sampling and Machine Learning
Authors: 刘立杰, 徐涛龙, 王荡, Abouba Ibrahim Mahamadou, 李又绿, 蒋宏业. 《管道保护》, 2025, No. 2, pp. 34-42, 61 (10 pages)
To study bend erosion caused by entrained sand in shale gas gathering systems, five parameters drawn from particle and pipeline characteristics were selected as input features, with the maximum erosion rate as the prediction output. A dataset was generated through Latin hypercube sampling (LHS) and Fluent simulation, and the prediction accuracy of different machine learning models was compared. Results show that a support vector machine (SVM) tuned by particle swarm optimization (PSO), the PSO-SVM model, is optimal. On the test set, its mean absolute error and root mean square error are 4.85994×10^(-5) and 5.0603×10^(-5) respectively, with a coefficient of determination of 0.98; compared with experimental results, its relative prediction error is only 14.84%. SHAP (Shapley additive explanations) analysis shows that the factors influencing the maximum erosion rate, ranked by contribution from high to low, are particle mass flow rate, particle velocity, particle diameter, pipe diameter, and bend-radius ratio.
Keywords: erosion rate prediction; Latin hypercube sampling; Fluent simulation; machine learning; optimization algorithm; SHAP
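A compact sketch of PSO-SVM tuning as described: a particle swarm searches (log10 C, log10 gamma) for an SVR, scored by cross-validation. The swarm constants and synthetic data are assumptions; the Fluent-generated dataset is not reproduced.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X, y = make_regression(n_samples=300, n_features=5, noise=5.0, random_state=0)

def score(p):                       # p = (log10 C, log10 gamma)
    svr = SVR(C=10 ** p[0], gamma=10 ** p[1])
    return cross_val_score(svr, X, y, cv=3, scoring="r2").mean()

n, w, c1, c2 = 10, 0.7, 1.5, 1.5    # swarm size and standard PSO constants
pos = rng.uniform([-1, -4], [3, 0], size=(n, 2))
vel = np.zeros((n, 2))
pbest, pbest_val = pos.copy(), np.array([score(p) for p in pos])
gbest = pbest[pbest_val.argmax()]

for it in range(15):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, [-1, -4], [3, 0])
    vals = np.array([score(p) for p in pos])
    better = vals > pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmax()]
print("best C, gamma:", 10 ** gbest)
```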
18. Research on a Q-Learning Routing Mechanism with Multi-State Fuzzy Reasoning for UAVs (Cited: 1)
Authors: 刘星宇, 强楠楠, 付银娟. 《信息技术》, 2025, No. 6, pp. 17-22, 29 (7 pages)
To address the highly dynamic topology and routing voids of UAV networks, a Q-learning routing mechanism based on multi-state fuzzy reasoning (State Fuzzy Reasoning Q-learning Routing Algorithm, SFR-QR) is designed. The mechanism first performs fuzzy reasoning over each UAV's relative distance, relative direction, and relative angle to form a motion-feedback reward for Q-learning; it then refines the optimal routing policy by incorporating link quality and forwarding energy consumption, and is implemented in simulation. Simulation results show that, compared with DQR and LQR, which each consider only one state constraint, SFR-QR improves average network delay by 0.03 s, packet delivery success rate by 1%, and link stability by 0.005, and is better suited to the communication needs of 3D UAV networks.
Keywords: UAV ad hoc network; reinforcement learning; reward function; learning rate; routing algorithm
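A hedged sketch of turning fuzzy memberships over a neighbor's relative geometry into a Q-learning reward, in the spirit of SFR-QR; the membership shapes, weights, and toy Q-table are illustrative assumptions, not the paper's rule base.

```python
import numpy as np

def ramp_down(x, x_max):
    """Shoulder membership: 1 at x = 0, falling linearly to 0 at x_max."""
    return float(np.clip(1.0 - x / x_max, 0.0, 1.0))

def fuzzy_reward(dist, angle):
    near = ramp_down(dist, 150.0)      # neighbor is "near" (metres, assumed)
    aligned = ramp_down(angle, 60.0)   # heading is "aligned" (degrees, assumed)
    return 0.6 * near + 0.4 * aligned  # weighted aggregation (assumed weights)

def q_update(Q, s, a, s_next, r, alpha=0.1, gamma=0.9):
    """Standard Q-learning update; the fuzzy score enters as the reward."""
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

Q = np.zeros((5, 5))                   # toy 5-node state/action table
q_update(Q, s=0, a=2, s_next=2, r=fuzzy_reward(dist=80.0, angle=20.0))
print(Q[0])
```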
19. Transfer Learning Modeling of the Rate-Dependent Hysteresis of IPMC Actuators
Authors: 白亭亭, 孟江, 王格, 曹凤蓉, 安坤. 《机械设计与制造工程》, 2025, No. 3, pp. 116-121 (6 pages)
Ionic polymer-metal composite (IPMC) actuators exhibit pronounced rate-dependent hysteresis nonlinearity under low-frequency AC driving voltages. To model it, a rate-dependent hysteresis modeling method combining a convolutional neural network (CNN) with model-based transfer learning is proposed. An experimental platform was first built, and voltage-displacement data were measured under linearly increasing sinusoidal driving voltages of 0.1-3.0 Hz and 2.0-5.5 V. A CNN model was trained to predict the abundant source-domain data, with its key parameters tuned by a particle swarm optimization algorithm, yielding a pre-trained model. Model-based transfer learning then fine-tunes part of the pre-trained network's parameters with a small amount of target-domain data, producing a new model that predicts well on target-domain data of different frequencies and thereby achieving rate-dependent hysteresis modeling when target-domain data are scarce. Compared against LSTM and GRU models, the CNN transfer learning model achieves transfer accuracy above 98.23% at all frequencies while shortening average training time by 46.1%.
Keywords: rate-dependent hysteresis; convolutional neural network; particle swarm optimization algorithm; transfer learning; fine-tuning
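A minimal sketch of the model-based transfer step: freeze a pre-trained convolutional front end and fine-tune only the head on a small target-domain set, here in Keras. The architecture, window size, and random data are assumptions, not the paper's IPMC model.

```python
import numpy as np
import tensorflow as tf

# Pre-trained source-domain model (stand-in): 1D conv over voltage windows
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 1)),          # 64-sample voltage window
    tf.keras.layers.Conv1D(16, 5, activation="relu", name="conv1"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(32, activation="relu", name="head"),
    tf.keras.layers.Dense(1, name="out"),          # predicted displacement
])
model.compile(optimizer="adam", loss="mse")
Xs, ys = np.random.rand(500, 64, 1), np.random.rand(500, 1)  # fake source data
model.fit(Xs, ys, epochs=2, verbose=0)             # stand-in for pre-training

# Transfer: freeze the convolutional feature extractor, retrain the head only
model.get_layer("conv1").trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mse")
Xt, yt = np.random.rand(40, 64, 1), np.random.rand(40, 1)    # scarce target data
model.fit(Xt, yt, epochs=20, verbose=0)            # fine-tune on target domain
```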
20. An Emergency Supplies Allocation Strategy Based on the DBSDER-QL Algorithm
Authors: 杨皓, 张池军, 张辛未. 《吉林大学学报(理学版)》 (PKU Core), 2025, No. 4, pp. 1105-1116 (12 pages)
For the problem of emergency supplies allocation after natural disasters, a Q-learning algorithm based on dynamic Boltzmann Softmax (DBS) and a dynamic exploration rate (DER) is proposed (dynamic Boltzmann Softmax and dynamic exploration rate based Q-learning, DBSDER-QL). First, the dynamic Boltzmann Softmax strategy dynamically adjusts the weights of action values to promote stable convergence, resolving the over-greediness of the max operator. Second, a dynamic exploration rate improves convergence and stability, fixing the failure of fixed-exploration-rate Q-learning to fully converge to the optimal policy late in training. Finally, ablation experiments verify the effectiveness of the DBS and DER strategies. Comparisons against dynamic programming, greedy, and conventional Q-learning algorithms show that DBSDER-QL is clearly superior to traditional methods in total cost and computational efficiency, demonstrating higher applicability and effectiveness.
Keywords: supplies allocation; reinforcement learning; Q-learning algorithm; dynamic exploration rate; dynamic Boltzmann Softmax
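A hedged sketch of the two named ingredients — a Boltzmann-softmax-weighted bootstrap target in place of the hard max, and an exploration rate that decays over episodes — on a toy chain environment; all schedules and the environment are assumptions, not the paper's allocation model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 6, 3
Q = np.zeros((n_states, n_actions))

def boltzmann_value(q, beta):
    """Softmax-weighted action value; beta -> inf recovers max(q)."""
    w = np.exp(beta * (q - q.max()))
    return (w / w.sum()) @ q

alpha, gamma = 0.1, 0.95
for ep in range(500):
    eps = max(0.05, 1.0 - ep / 300)        # dynamic exploration rate (assumed)
    beta = 1.0 + ep * 0.05                 # dynamic softmax sharpness (assumed)
    s = 0
    for _ in range(20):
        a = rng.integers(n_actions) if rng.random() < eps else Q[s].argmax()
        s2 = min(s + a, n_states - 1)      # toy transition: advance up to 2 states
        r = 1.0 if s2 == n_states - 1 else -0.1
        target = r + gamma * boltzmann_value(Q[s2], beta)
        Q[s, a] += alpha * (target - Q[s, a])
        s = s2
print(Q.round(2))
```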