Journal articles: 283,010 results found
1. Research on provincial carbon peak prediction for China's transport sector based on CNN-LSTM-Attention
Authors: 杨青, 江宇航, 吴婵媛, 段召琳, 陈梦柯, 刘星星. 《安全与环境学报》 (PKU Core), 2025, Issue 10, pp. 4064-4075 (12 pages)
Abstract: Emission reduction in the transport sector is key to meeting the overall emission-reduction target. Based on an improved Stochastic Impacts by Regression on Population, Affluence, and Technology (STIRPAT) model, the study analyzes the main factors influencing transport-sector carbon emissions, sets up three scenarios (low-carbon, baseline, and high-carbon), and uses a Convolutional Neural Network-Long Short-Term Memory-Attention Mechanism (CNN-LSTM-Attention) prediction model to forecast the transport-sector carbon emissions of 30 Chinese provinces, autonomous regions, and municipalities for 2022-2035. The results show that influencing factors in the population, economic, and transport dimensions drive transport-sector carbon emissions upward, while factors in the energy-technology dimension drive them downward; the CNN-LSTM-Attention model improves predictive ability on small-sample datasets and performs well; under all three scenarios, the carbon emission peak of China's transport sector will arrive later than the 2030 target for the overall emission peak; and the provinces differ in peak level and peak timing, so differentiated, precisely targeted policies should be adopted, with regions peaking in stages locally so that the carbon peak target is achieved overall.
Keywords: environmental engineering; transport sector; carbon peak; STIRPAT model; CNN-LSTM-Attention model; scenario analysis
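Several entries in this list build on the same CNN-LSTM-Attention backbone. As a rough illustration of how the three components fit together (a minimal sketch, not any author's exact architecture; the layer sizes, kernel width, and single-head additive attention are assumptions), a PyTorch version might look like this:

```python
# Minimal CNN-LSTM-Attention sketch for multivariate time-series regression.
# Layer sizes and the additive attention pooling are illustrative assumptions.
import torch
import torch.nn as nn

class CNNLSTMAttention(nn.Module):
    def __init__(self, n_features: int, conv_channels: int = 32, lstm_hidden: int = 64):
        super().__init__()
        # 1-D convolution over the time axis extracts local patterns per window
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, conv_channels, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # LSTM models longer-range temporal dependence of the convolved features
        self.lstm = nn.LSTM(conv_channels, lstm_hidden, batch_first=True)
        # Additive attention scores each time step, then pools a weighted summary
        self.att_score = nn.Linear(lstm_hidden, 1)
        self.head = nn.Linear(lstm_hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, n_features)
        c = self.conv(x.permute(0, 2, 1)).permute(0, 2, 1)  # (batch, seq_len, conv_channels)
        h, _ = self.lstm(c)                                  # (batch, seq_len, lstm_hidden)
        w = torch.softmax(self.att_score(h), dim=1)          # attention weights over time
        context = (w * h).sum(dim=1)                         # attention-weighted pooling
        return self.head(context).squeeze(-1)                # (batch,)

if __name__ == "__main__":
    model = CNNLSTMAttention(n_features=6)
    dummy = torch.randn(8, 12, 6)     # 8 samples, 12 time steps, 6 driver variables
    print(model(dummy).shape)         # torch.Size([8])
```

The convolution captures local patterns within each input window, the LSTM carries them across time, and the attention weights decide which time steps dominate the final regression output.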
2. Research on a CNN-LSTM-Attention method optimized by PID search for predicting electrolysis temperature in aluminum electrolysis cells (Cited: 6)
Authors: 尹刚, 朱淼, 全鹏程, 颜玥涵, 刘期烈. 《仪器仪表学报》 (PKU Core), 2025, Issue 1, pp. 324-337 (14 pages)
Abstract: The aluminum electrolysis production environment is harsh; coupling of electric, magnetic, flow, and temperature fields causes frequent faults in the production process. Electrolysis temperature is a key parameter affecting cell life and operating state, but because the in-cell temperature is very high and strongly corrosive, no effective method for online detection and prediction of electrolysis temperature has been available. To solve this problem, theoretical analysis combined with field experiments revealed a close correlation between cell electrolysis temperature and its process parameters, on which basis a deep-learning-based prediction model for cell electrolysis temperature is proposed. Considering the complexity, nonlinearity, high dimensionality, and temporal nature of cell process parameters, a convolutional neural network (CNN) extracts high-dimensional features from the data, a long short-term memory network (LSTM) models the time-series data of the production process, and an attention mechanism learns the relationships among different parts of the input and weights them by importance; a PID search optimization algorithm (PSA) is then used to tune the parameters of the CNN-LSTM-Attention model, reducing training time and improving performance. Field validation on actual production data shows that the proposed temperature prediction model achieves a coefficient of determination (R²) of 0.9637, with a root mean square error (RMSE) of 5.4176 and a mean absolute error (MAE) of 3.3825. Comparisons with single models, other prediction algorithms, and different optimization algorithms show that the model performs better, accurately predicts cell electrolysis temperature, and enables online monitoring of electrolysis temperature.
Keywords: aluminum electrolysis; algorithm; electrolysis temperature; deep learning; process control
3. Analysis of factors influencing grain yield and combined forecasting based on SVR and CNN-LSTM-Attention models
Authors: 赵垭越, 樊琳琳, 秦政, 苗敬利, 吕彬. 《中国粮油学报》 (PKU Core), 2025, Issue 9, pp. 190-198 (9 pages)
Abstract: To improve the accuracy of grain yield prediction, this study takes Hebei Province as a case, using its grain yield data for 1990-2021. Correlation analysis, collinearity analysis, grey relational analysis, and outlier removal were applied to screen 11 key variables. A combined forecasting model based on support vector regression (SVR) and a convolutional neural network-long short-term memory-attention (CNN-LSTM-Attention) network was built to improve prediction accuracy. Empirical analysis shows that the combined model effectively integrates SVR's ability to handle nonlinear relationships with CNN-LSTM-Attention's strength in capturing temporal features. The mean absolute percentage error (MAPE) is only 1.498%, an improvement of 17% and 42% over the standalone SVR and CNN-LSTM-Attention models, respectively, showing better generalization and robustness.
Keywords: grain yield; SVR; CNN-LSTM-Attention; grey relational analysis
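As a sketch of the combination idea (the blending weight, synthetic data, and the MLP standing in for the CNN-LSTM-Attention branch are assumptions, not the paper's setup), one way to blend two regressors is to pick the convex combination weight that minimizes validation error:

```python
# Combined forecast: blend an SVR and a neural-style regressor with a weight
# chosen on a validation split. Synthetic data and the MLP stand-in are assumptions.
import numpy as np
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 11))                  # 11 screened predictors
y = 2 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=200)

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

svr = SVR(C=10.0, gamma="scale").fit(X_tr, y_tr)
mlp = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0).fit(X_tr, y_tr)

p_svr, p_mlp = svr.predict(X_val), mlp.predict(X_val)
# Grid-search the blending weight w on the validation set
weights = np.linspace(0, 1, 101)
errors = [mean_absolute_percentage_error(y_val, w * p_svr + (1 - w) * p_mlp) for w in weights]
best_w = weights[int(np.argmin(errors))]
print(f"best weight for SVR branch: {best_w:.2f}, validation MAPE: {min(errors):.4f}")
```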
4. Research on a CNN-LSTM-Attention-based method for predicting drilling friction
Authors: 史钰博, 李琳. 《电脑与信息技术》, 2025, Issue 2, pp. 31-36 (6 pages)
Abstract: Analysis of large volumes of drilling data shows that accurately predicting drilling friction has become one of the key tasks in current drilling work. Traditional friction prediction methods are mainly empirical and suffer from high cost, low efficiency, and large errors between predicted and actual values. Accordingly, accumulated drilling data are used to design and implement a drilling friction prediction model built around the CNN-LSTM-Attention algorithm, which is compared in detail with other deep learning algorithms such as the convolutional neural network (CNN), recurrent neural network (RNN), and gated recurrent unit (GRU). Experimental results show that CNN-LSTM-Attention predicts the drilling friction of the test well with lower error: a mean absolute error (MAE) of 0.76 and a root mean squared error (RMSE) of 0.77.
Keywords: drilling friction; CNN; CNN-LSTM-Attention; deep learning
5. PID Steering Control Method of Agricultural Robot Based on Fusion of Particle Swarm Optimization and Genetic Algorithm
Authors: ZHAO Longlian, ZHANG Jiachuang, LI Mei, DONG Zhicheng, LI Junhui. 《农业机械学报》 (PKU Core), 2026, Issue 1, pp. 358-367 (10 pages)
Abstract: Aiming to solve the steering instability and hysteresis of agricultural robots in the process of movement, a fusion PID control method of particle swarm optimization (PSO) and genetic algorithm (GA) was proposed. The fusion algorithm took advantage of the fast optimization ability of PSO to optimize the population screening step of GA. The Simulink simulation results showed that the convergence of the fitness function of the fusion algorithm was accelerated, the system response adjustment time was reduced, and the overshoot was almost zero. The algorithm was then applied to steering tests of the agricultural robot in various scenarios. After modeling the steering system of the agricultural robot, the steering test results in the unloaded suspended state showed that PID control based on the fusion algorithm reduced the rise time, response adjustment time, and overshoot of the system and improved the response speed and stability of the system, compared with manual trial-and-error PID control and GA-based PID control. The actual road steering test results showed that PID control based on the fusion algorithm had the shortest response rise time, about 4.43 s. When the target pulse number was set to 100, the actual mean value in the steady-state regulation stage was about 102.9, the closest to the target value among the three control methods, and the overshoot was reduced at the same time. The steering test results under the various scenarios showed that PID control based on the proposed fusion algorithm had good anti-interference ability; it can adapt to changes of environment and load and improve the performance of the control system. It was effective in the steering control of the agricultural robot and can provide a reference for the precise steering control of other robots.
Keywords: agricultural robot; steering; PID control; particle swarm optimization algorithm; genetic algorithm
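The pattern above pairs a swarm optimizer with PID gain tuning. A small sketch of that pattern under heavy assumptions (a toy second-order plant, an ITAE cost, and plain PSO rather than the paper's PSO-GA fusion):

```python
# PSO tuning of PID gains on a toy second-order plant (illustrative assumptions:
# plant dynamics, ITAE cost, and plain PSO instead of the paper's PSO-GA fusion).
import numpy as np

def step_cost(gains, dt=0.01, t_end=5.0):
    kp, ki, kd = gains
    x1 = x2 = integ = e_prev = 0.0
    cost, r = 0.0, 1.0                       # unit step reference
    for k in range(int(t_end / dt)):
        e = r - x1
        integ += e * dt
        deriv = (e - e_prev) / dt
        u = kp * e + ki * integ + kd * deriv
        # plant: y'' + 3 y' + 2 y = u  (explicit Euler integration)
        x1 += x2 * dt
        x2 += (-2.0 * x1 - 3.0 * x2 + u) * dt
        cost += (k * dt) * abs(e) * dt       # ITAE criterion
        e_prev = e
    return cost

rng = np.random.default_rng(1)
n_particles, dim = 20, 3
lb, ub = np.zeros(dim), np.array([50.0, 50.0, 10.0])
pos = rng.uniform(lb, ub, size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([step_cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)].copy()

for _ in range(30):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lb, ub)
    cost = np.array([step_cost(p) for p in pos])
    improved = cost < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print("tuned gains (Kp, Ki, Kd):", np.round(gbest, 2))
```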
6. New energy power generation forecasting with a CNN-LSTM-Attention model and risk detection analysis with the isolation forest algorithm
Authors: 胡殿刚, 马寅, 庞晓东, 吴锋, 牛甄, 李灏, 姬艳秋, 冯文韬. 《图像与信号处理》, 2025, Issue 1, pp. 45-61 (17 pages)
Abstract: This paper discusses the application and effectiveness of a method combining convolutional neural networks, long short-term memory networks, and attention mechanisms in new energy power generation prediction. Because new energy generation is affected by the external environment and shows great volatility and complexity, traditional forecasting models struggle to fully capture its complex patterns and long-term dependencies. The paper therefore proposes a combined model integrating multiple deep learning methods (the CNN-LSTM-Attention model), which first extracts local features from the data through convolutional layers, then models the long-term dependencies of the time series with long short-term memory networks, and finally enhances the focus on important information through attention mechanisms. The experiment used wind power and hydroelectric power generation data of Gansu Province from January 1 to June 30, 2024, for feature extraction and model training. The results show that, compared with convolutional neural networks or long short-term memory networks alone, the combined model with the attention mechanism achieves higher prediction accuracy and a better fit across multiple evaluation metrics. In addition, the paper introduces the isolation forest algorithm for outlier detection on the prediction errors and carries out a detailed classification analysis combined with risk levels, further verifying the effectiveness of the model in practical application. The research provides a new idea and method for the prediction and management of new energy power generation systems, helping to improve the reliability and safety of system operation.
Keywords: new energy generation forecasting; convolutional neural network-long short-term memory-attention mechanism (CNN-LSTM-Attention); outlier detection; isolation forest algorithm; risk level analysis; operation and maintenance management
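A small sketch of the residual-screening step described above (the synthetic residuals, contamination rate, and three-level risk binning are assumptions):

```python
# Flag anomalous forecast errors with Isolation Forest and bin them into risk levels.
# Synthetic residuals, the contamination rate, and the quantile cut points are assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
residuals = rng.normal(0.0, 1.0, size=500)
residuals[::50] += rng.choice([-6.0, 6.0], size=10)   # inject a few gross errors

X = residuals.reshape(-1, 1)
iso = IsolationForest(contamination=0.05, random_state=0).fit(X)
labels = iso.predict(X)                 # -1 = anomaly, 1 = normal
scores = -iso.score_samples(X)          # higher = more anomalous

# Simple three-level risk grading of the flagged points by anomaly score
flagged = scores[labels == -1]
lo, hi = np.quantile(flagged, [0.33, 0.66])
risk = np.where(flagged >= hi, "high", np.where(flagged >= lo, "medium", "low"))
print(f"{(labels == -1).sum()} anomalies flagged;", dict(zip(*np.unique(risk, return_counts=True))))
```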
7. Flood predictions from metrics to classes by multiple machine learning algorithms coupling with clustering-deduced membership degree
Authors: ZHAI Xiaoyan, ZHANG Yongyong, XIA Jun, ZHANG Yongqiang, TANG Qiuhong, SHAO Quanxi, CHEN Junxu, ZHANG Fan. 《Journal of Geographical Sciences》, 2026, Issue 1, pp. 149-176 (28 pages)
Abstract: Accurate prediction of flood events is important for flood control and risk management. Machine learning techniques have contributed greatly to advances in flood prediction, and existing studies mainly focused on predicting flood resource variables using single or hybrid machine learning techniques. However, class-based flood predictions, which can aid in quickly diagnosing comprehensive flood characteristics and proposing targeted management strategies, have rarely been investigated. This study proposed a prediction approach for flood regime metrics and event classes that couples machine learning algorithms with clustering-deduced membership degrees. Five algorithms were adopted for this exploration. Results showed that the class membership degrees accurately determined event classes, with class hit rates up to 100%, compared with the four classes clustered from nine regime metrics. The nonlinear algorithms (Multiple Linear Regression, Random Forest, and least squares-Support Vector Machine) outperformed the linear techniques (Multiple Linear Regression and Stepwise Regression) in predicting flood regime metrics. The proposed approach predicted flood event classes well, with average class hit rates of 66.0%-85.4% and 47.2%-76.0% in the calibration and validation periods, respectively, particularly for the slow and late flood events. The predictive capability of the proposed approach for flood regime metrics and classes was considerably stronger than that of the hydrological modeling approach.
Keywords: flood regime metrics; class prediction; machine learning algorithms; hydrological model
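As an illustration of how class membership degrees can be deduced from a clustering of regime metrics (the fuzzy inverse-distance formula, synthetic metrics, and four classes are assumptions, not the paper's exact procedure):

```python
# Cluster flood regime metrics with K-means, then turn distances to the cluster
# centres into membership degrees (fuzzy inverse-distance weighting, m = 2).
# Synthetic metrics and the membership formula are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
metrics = rng.normal(size=(120, 9))          # 120 flood events x 9 regime metrics
X = StandardScaler().fit_transform(metrics)

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
d = km.transform(X) + 1e-9                   # distances to the 4 cluster centres

m = 2.0                                      # fuzzifier
u = 1.0 / (d ** (2.0 / (m - 1.0)))
membership = u / u.sum(axis=1, keepdims=True)   # each row sums to 1

pred_class = membership.argmax(axis=1)
hit_rate = (pred_class == km.labels_).mean()
print("class hit rate vs. hard K-means labels:", round(hit_rate, 3))
```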
8. Equivalent Modeling with Passive Filter Parameter Clustering for Photovoltaic Power Stations Based on a Particle Swarm Optimization K-Means Algorithm
Authors: Binjiang Hu, Yihua Zhu, Liang Tu, Zun Ma, Xian Meng, Kewei Xu. 《Energy Engineering》, 2026, Issue 1, pp. 431-459 (29 pages)
Abstract: This paper proposes an equivalent modeling method for photovoltaic (PV) power stations via a particle swarm optimization (PSO) K-means clustering (KMC) algorithm with passive filter parameter clustering to address the complexity, simulation time cost, and convergence problems of detailed PV power station models. First, the amplitude-frequency curves of different filter parameters are analyzed. Based on the results, a grouping parameter set for characterizing the external filter characteristics is established, and these parameters are further defined as clustering parameters. A single PV inverter model is then established as a prerequisite foundation. The proposed equivalent method combines the global search capability of PSO with the rapid convergence of KMC, effectively overcoming the tendency of KMC to become trapped in local optima. This approach enhances both clustering accuracy and numerical stability when determining equivalence for PV inverter units. Using the proposed clustering method, both a detailed PV power station model and an equivalent model are developed and compared. Simulation and hardware-in-the-loop (HIL) results based on the equivalent model verify that the equivalent method accurately represents the dynamic characteristics of PV power stations and adapts well to different operating conditions. The proposed equivalent modeling method provides an effective analysis tool for future renewable energy integration research.
Keywords: photovoltaic power station; multi-machine equivalent modeling; particle swarm optimization; K-means clustering algorithm
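A compact sketch of the PSO + K-means idea (particles encode candidate centroid sets, fitness is the K-means inertia, and the winner seeds scikit-learn's KMeans); the data, swarm size, and PSO constants are assumptions:

```python
# PSO searches for good initial centroids (fitness = within-cluster sum of squares),
# then K-means refines them. Data, swarm size, and PSO constants are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

def inertia(centers, X):
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return (d.min(axis=1) ** 2).sum()

X, _ = make_blobs(n_samples=400, centers=4, n_features=3, random_state=3)
k, dim = 4, X.shape[1]
rng = np.random.default_rng(0)

n_particles = 15
pos = rng.uniform(X.min(0), X.max(0), size=(n_particles, k, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_fit = np.array([inertia(p, X) for p in pos])
gbest = pbest[np.argmin(pbest_fit)].copy()

for _ in range(40):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.72 * vel + 1.49 * r1 * (pbest - pos) + 1.49 * r2 * (gbest - pos)
    pos = pos + vel
    fit = np.array([inertia(p, X) for p in pos])
    better = fit < pbest_fit
    pbest[better], pbest_fit[better] = pos[better], fit[better]
    gbest = pbest[np.argmin(pbest_fit)].copy()

# Refine the PSO-found centroids with standard K-means
km = KMeans(n_clusters=k, init=gbest, n_init=1, random_state=0).fit(X)
print("PSO seed inertia:", round(pbest_fit.min(), 1), "| final K-means inertia:", round(km.inertia_, 1))
```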
9. GSLDWOA: A Feature Selection Algorithm for Intrusion Detection Systems in IIoT
Authors: Wanwei Huang, Huicong Yu, Jiawei Ren, Kun Wang, Yanbu Guo, Lifeng Jin. 《Computers, Materials & Continua》, 2026, Issue 1, pp. 2006-2029 (24 pages)
Abstract: Existing feature selection methods for intrusion detection systems in the Industrial Internet of Things often suffer from local optimality and high computational complexity. These challenges hinder traditional IDS from effectively extracting features while maintaining detection accuracy. This paper proposes an Industrial Internet of Things intrusion detection feature selection algorithm based on an improved whale optimization algorithm (GSLDWOA). The aim is to address the problems that feature selection algorithms are prone to under high-dimensional data, such as local optimality, long detection time, and reduced accuracy. First, the initial population's diversity is increased using a Gaussian mutation mechanism. Then, a non-linear shrinking factor balances global exploration and local exploitation, avoiding premature convergence. Lastly, a variable-step Levy flight operator and a dynamic differential evolution strategy are introduced to improve the algorithm's search efficiency and convergence accuracy in high-dimensional feature space. Experiments on the NSL-KDD and WUSTL-IIoT-2021 datasets demonstrate that the feature subset selected by GSLDWOA significantly improves detection performance. Compared to the traditional WOA algorithm, the detection rate and F1-score increased by 3.68% and 4.12%. On the WUSTL-IIoT-2021 dataset, accuracy, recall, and F1-score all exceed 99.9%.
Keywords: Industrial Internet of Things; intrusion detection system; feature selection; whale optimization algorithm; Gaussian mutation
10. Identification of small impact craters in Chang’e-4 landing areas using a new multi-scale fusion crater detection algorithm
Authors: FangChao Liu, HuiWen Liu, Li Zhang, Jian Chen, DiJun Guo, Bo Li, ChangQing Liu, ZongCheng Ling, Ying-Bo Lu, JunSheng Yao. 《Earth and Planetary Physics》, 2026, Issue 1, pp. 92-104 (13 pages)
Abstract: Impact craters are important for understanding the evolution of lunar geology and surface erosion rates, among other functions. However, the morphological characteristics of micro impact craters are not obvious and they are numerous, resulting in low detection accuracy by deep learning models. Therefore, we proposed a new multi-scale fusion crater detection algorithm (MSF-CDA) based on YOLO11 to improve the accuracy of lunar impact crater detection, especially for small craters with a diameter of <1 km. Using images taken by the LROC (Lunar Reconnaissance Orbiter Camera) at the Chang'e-4 (CE-4) landing area, we constructed three separate datasets for craters with diameters of 0-70 m, 70-140 m, and >140 m, and trained three submodels separately with these datasets. Additionally, we designed a slicing-amplifying-slicing strategy to enhance the ability to extract features from small craters. To handle redundant predictions, we proposed a new Non-Maximum Suppression with Area Filtering method to fuse the results of overlapping targets across the multi-scale submodels. Our MSF-CDA method achieved high detection performance, with Precision, Recall, and F1 score values of 0.991, 0.987, and 0.989, respectively, addressing the problems caused by the weak features and sample imbalance of small craters. MSF-CDA can provide strong data support for more in-depth study of the geological evolution of the lunar surface and finer geological age estimation, and the strategy can also be used to detect other small objects with weak features and sample imbalance problems. We detected approximately 500,000 impact craters in an area of approximately 214 km² around the CE-4 landing area. By statistically analyzing the new data, we updated the distribution function of the number and diameter of impact craters. Finally, we identified the most suitable lighting conditions for detecting impact crater targets by analyzing the effect of different lighting conditions on detection accuracy.
Keywords: impact craters; Chang'e-4 landing area; multi-scale automatic detection; YOLO11; fusion algorithm
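A rough sketch of the fusion step ("Non-Maximum Suppression with Area Filtering") described above; the box format, thresholds, and the specific area gate are assumptions about how such a filter could work, not the authors' implementation:

```python
# Non-Maximum Suppression with an area gate: drop boxes outside the size range a
# submodel is responsible for, then suppress overlapping detections by IoU.
# Box format [x1, y1, x2, y2, score] and all thresholds are illustrative assumptions.
import numpy as np

def nms_with_area_filter(boxes, iou_thr=0.5, min_area=0.0, max_area=np.inf):
    boxes = np.asarray(boxes, dtype=float)
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    boxes = boxes[(areas >= min_area) & (areas <= max_area)]      # area filtering
    order = boxes[:, 4].argsort()[::-1]                           # highest score first
    keep = []
    while order.size:
        i = order[0]
        keep.append(i)
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_j = (boxes[order[1:], 2] - boxes[order[1:], 0]) * (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_j - inter + 1e-9)
        order = order[1:][iou <= iou_thr]                         # drop heavy overlaps
    return boxes[keep]

dets = [[10, 10, 50, 50, 0.9], [12, 12, 48, 52, 0.8], [200, 200, 204, 204, 0.95]]
print(nms_with_area_filter(dets, iou_thr=0.5, min_area=100))      # tiny 4x4 box is filtered out
```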
11. Research on flight trajectory prediction based on a CNN-LSTM-Attention model (Cited: 11)
Authors: 孔建国, 李亚彬, 张时雨, 陈超, 梁海军. 《航空计算技术》, 2023, Issue 1, pp. 1-5 (5 pages)
Abstract: Taking trajectory prediction as the entry point and ADS-B data recorded by aircraft on the Chongqing-Guangzhou route as the study object, a long-sequence trajectory prediction method incorporating an attention mechanism (CNN-LSTM-Attention) is proposed. A one-dimensional convolutional neural network extracts multi-dimensional features from the trajectory data; multi-dimensional feature vectors of longitude, latitude, altitude, speed, heading, and other variables are arranged as time series and fed into the LSTM network; and the prediction model is optimized by weighting the LSTM hidden states so that the contribution of information at different time steps to the future trajectory is distinguished. The CNN-LSTM-Attention model is trained with the Adam optimizer, LSTM and CNN-LSTM serve as comparison models, and the coefficient of determination R² is used as the evaluation criterion for prediction accuracy. Experimental results show that the attention-augmented CNN-LSTM-Attention model (convolutional neural network-long short-term memory-attention mechanism) predicts more accurately than the other two methods.
Keywords: trajectory prediction; CNN-LSTM-Attention model; attention mechanism; ADS-B trajectory data; neural network
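A small sketch of the data-shaping step the abstract describes, turning multi-feature track points (longitude, latitude, altitude, speed, heading) into supervised sequences for an LSTM-style model; the window length and the synthetic track are assumptions:

```python
# Build sliding-window training samples from a multivariate ADS-B-style track:
# X holds `window` past points of all features, y holds the next point's position.
# The synthetic track and window length are illustrative assumptions.
import numpy as np

def make_sequences(track: np.ndarray, window: int = 10):
    X, y = [], []
    for t in range(len(track) - window):
        X.append(track[t : t + window])        # (window, n_features)
        y.append(track[t + window, :3])        # next longitude, latitude, altitude
    return np.stack(X), np.stack(y)

# columns: lon, lat, alt, speed, heading
n = 200
track = np.column_stack([
    np.linspace(106.5, 113.3, n),              # roughly Chongqing -> Guangzhou longitudes
    np.linspace(29.7, 23.4, n),
    np.full(n, 9800.0) + np.random.default_rng(0).normal(0, 30, n),
    np.full(n, 230.0),
    np.full(n, 135.0),
])

X, y = make_sequences(track, window=10)
print(X.shape, y.shape)    # (190, 10, 5) (190, 3)
```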
12. Runoff simulation and multi-scenario future prediction for the Qinhe River Basin based on a CNN-LSTM-Attention model (Cited: 5)
Authors: 张书齐, 左其亭, 臧超, 张乐开, 巴音吉. 《水资源与水工程学报》 (CSCD, PKU Core), 2024, Issue 5, pp. 73-81 (9 pages)
Abstract: To improve the runoff simulation accuracy of deep learning models for basins under changing environments, a coupled CNN-LSTM-Attention model based on a convolutional neural network (CNN), long short-term memory network (LSTM), and attention mechanism was constructed for the Qinhe River Basin, with several optimization algorithms added. Combined with the BCC-CSM2-MR climate model from the Coupled Model Intercomparison Project Phase 6 (CMIP6) under multiple scenarios, it was applied to basin runoff simulation and prediction, and the simulation accuracy of several deep learning models was compared. The results show that the CNN-LSTM-Attention model simulates runoff well in the Qinhe River Basin and outperforms the other deep learning models, with a Nash-Sutcliffe efficiency (NSE) of 0.883, a root mean square error (RMSE) of 2.317, and a mean absolute error (MAE) of 1.098. Under the different climate-change scenarios, annual runoff in the basin shows a slow declining trend with large fluctuations over 2025-2050, and the decline and fluctuations are most pronounced under the SSP1-2.6 scenario. The study offers a new approach to applying deep learning models to intelligent simulation of human-water relationships and provides a scientific reference for subsequent water resources development, utilization, and management in the basin.
Keywords: runoff simulation and prediction; deep learning model; CNN-LSTM-Attention; climate change; Qinhe River Basin
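The evaluation metrics quoted above are easy to reproduce; a short sketch (the observed and simulated series are placeholders):

```python
# Nash-Sutcliffe efficiency, RMSE, and MAE for a runoff simulation.
# The observed/simulated series are placeholder values.
import numpy as np

def nse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(5)
observed = 20 + 5 * np.sin(np.linspace(0, 6 * np.pi, 120)) + rng.normal(0, 1, 120)
simulated = observed + rng.normal(0, 1.2, 120)            # imperfect model output

rmse = float(np.sqrt(np.mean((observed - simulated) ** 2)))
mae = float(np.mean(np.abs(observed - simulated)))
print(f"NSE={nse(observed, simulated):.3f}  RMSE={rmse:.3f}  MAE={mae:.3f}")
```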
13. Enhancing photovoltaic power prediction using a CNN-LSTM-attention hybrid model with Bayesian hyperparameter optimization (Cited: 3)
Authors: Ning Zhou, Bowen Shang, Mingming Xu, Lei Peng, Yafei Zhang. 《Global Energy Interconnection》 (EI, CSCD), 2024, Issue 5, pp. 667-681 (15 pages)
Abstract: Improving the accuracy of solar power forecasting is crucial to ensure grid stability, optimize solar power plant operations, and enhance grid dispatch efficiency. Although hybrid neural network models can effectively address the complexities of environmental data and power prediction uncertainties, challenges such as labor-intensive parameter adjustment and complex optimization processes persist. Thus, this study proposed a novel approach for solar power prediction using a hybrid model (CNN-LSTM-attention) that combines a convolutional neural network (CNN), long short-term memory (LSTM), and attention mechanisms. The model incorporates Bayesian optimization to refine the parameters and enhance prediction accuracy. To prepare high-quality training data, the solar power data were first preprocessed, including feature selection, data cleaning, imputation, and smoothing. The processed data were then used to train a hybrid model based on the CNN-LSTM-attention architecture, followed by hyperparameter optimization employing Bayesian methods. The experimental results indicated that, within acceptable model training times, the CNN-LSTM-attention model outperformed the LSTM, GRU, CNN-LSTM, CNN-LSTM with autoencoders, and parallel CNN-LSTM attention models. Furthermore, following Bayesian optimization, the optimized model demonstrated significantly reduced prediction errors during periods of data volatility compared with the original model, as evidenced by MRE evaluations. This highlights the clear advantage of the optimized model in forecasting fluctuating data.
Keywords: photovoltaic power prediction; CNN-LSTM-attention; Bayesian optimization
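As a sketch of Bayesian-style hyperparameter search (Optuna's TPE sampler standing in for the paper's Bayesian optimizer, and a small scikit-learn MLP standing in for the CNN-LSTM-attention model; the data and search ranges are assumptions):

```python
# Hyperparameter search with Optuna's default TPE sampler (a Bayesian-style optimizer).
# A scikit-learn MLP stands in for the CNN-LSTM-attention model; the data and
# search ranges are illustrative assumptions.
import numpy as np
import optuna
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))                            # e.g. irradiance, temperature, ...
y = 3 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(0, 0.2, 300)

def objective(trial):
    hidden = trial.suggest_int("hidden_units", 16, 128)
    lr = trial.suggest_float("learning_rate", 1e-4, 1e-1, log=True)
    alpha = trial.suggest_float("alpha", 1e-6, 1e-2, log=True)
    model = MLPRegressor(hidden_layer_sizes=(hidden,), learning_rate_init=lr,
                         alpha=alpha, max_iter=1000, random_state=0)
    # minimize cross-validated RMSE
    score = cross_val_score(model, X, y, cv=3, scoring="neg_root_mean_squared_error")
    return -score.mean()

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=25)
print("best params:", study.best_params, "best RMSE:", round(study.best_value, 4))
```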
14. Research on a CNN-LSTM-Attention neural network method for partial discharge prediction in high-voltage cables (Cited: 2)
Authors: 李彬, 邓力凡, 彭丽. 《湖南城市学院学报(自然科学版)》 (CAS), 2023, Issue 4, pp. 67-72 (6 pages)
Abstract: To identify partial discharge faults in high-voltage cables more quickly and accurately, this paper proposes a partial discharge prediction method based on a convolutional long short-term attention (CNN-LSTM-Attention) neural network. First, the partial discharge signals of the high-voltage cable are monitored in real time, decomposed with wavelet analysis, and split into multiple segments, and statistical features are extracted from each segment. Second, a neural network classification model is built from these features, consisting of convolutional layers that extract contour features, long short-term memory layers that extract the temporal features of the signal, and an attention layer that captures the important parts of the sequence. Finally, simulations are carried out on real data. The results show that the proposed method accurately identifies abnormal discharge signals at relatively high sampling rates, and compared with conventional neural networks, the CNN-LSTM-Attention network markedly improves fault identification accuracy, with a Matthews correlation coefficient (MCC) of 0.87103.
Keywords: high-voltage cable; partial discharge; CNN-LSTM-Attention; fault identification
15. High-voltage cable fault diagnosis method based on UHF measurement and the CNN-LSTM-Attention algorithm (Cited: 5)
Authors: 胡裕峰, 张自远, 金涛, 盛敏超, 李中龙. 《计算技术与自动化》, 2023, Issue 2, pp. 31-38 (8 pages)
Abstract: With the rapid expansion and aging of high-voltage cables, faults caused by partial discharge (PD) urgently need to be addressed. An online intelligent fault diagnosis method for high-voltage cables based on ultra-high-frequency (UHF) PD measurement and the CNN-LSTM-Attention algorithm is therefore proposed. First, the PD generation mechanism in high-voltage cables and the implementation of UHF PD measurement are described. Second, the PD signal is high-pass filtered with a Butterworth filter, denoised with the wavelet transform, and reduced in dimensionality with the IPLR algorithm, enabling accurate feature extraction. Finally, an intelligent diagnosis model built on the CNN-LSTM-Attention algorithm is established, in which the convolutional layers (CNN) extract contour features, the long short-term memory layers (LSTM) extract temporal features, and the attention layer learns the important temporal parts of the signal. Simulations on real data show that, compared with conventional neural network methods, the CNN-LSTM-Attention detection method accurately identifies the features of abnormal discharge signals at high sampling rates and markedly improves fault identification accuracy.
Keywords: UHF; PD; CNN-LSTM-Attention; high-voltage cable; fault diagnosis
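The preprocessing chain described above (Butterworth high-pass filtering followed by wavelet denoising) can be sketched with SciPy and PyWavelets; the sampling rate, cutoff, wavelet, and threshold rule are assumptions, and the IPLR dimension-reduction step is omitted:

```python
# Butterworth high-pass filtering + wavelet soft-threshold denoising of a PD-like signal.
# Sampling rate, cutoff, wavelet choice, and the universal threshold are assumptions;
# the paper's IPLR dimensionality reduction is not reproduced here.
import numpy as np
import pywt
from scipy.signal import butter, sosfiltfilt

fs = 100e6                                   # assumed 100 MS/s acquisition
t = np.arange(0, 2e-4, 1 / fs)
pd_pulse = np.exp(-((t - 1e-4) ** 2) / (2 * (2e-7) ** 2)) * np.sin(2 * np.pi * 5e6 * t)
noise = 0.5 * np.sin(2 * np.pi * 50e3 * t) + 0.2 * np.random.default_rng(0).normal(size=t.size)
signal = pd_pulse + noise

# 4th-order Butterworth high-pass removes low-frequency interference
sos = butter(4, 500e3, btype="highpass", fs=fs, output="sos")
filtered = sosfiltfilt(sos, signal)

# Wavelet soft-threshold denoising (db4, universal threshold from the finest level)
coeffs = pywt.wavedec(filtered, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745
thr = sigma * np.sqrt(2 * np.log(filtered.size))
denoised = pywt.waverec([coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]], "db4")

print("raw std:", round(float(np.std(signal)), 3), "| denoised std:", round(float(np.std(denoised)), 3))
```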
16. Method for Estimating the State of Health of Lithium-ion Batteries Based on Differential Thermal Voltammetry and Sparrow Search Algorithm-Elman Neural Network (Cited: 1)
Authors: Yu Zhang, Daoyu Zhang, Tiezhou Wu. 《Energy Engineering》 (EI), 2025, Issue 1, pp. 203-220 (18 pages)
Abstract: Precisely estimating the state of health (SOH) of lithium-ion batteries is essential for battery management systems (BMS), as it plays a key role in ensuring the safe and reliable operation of battery systems. However, current SOH estimation methods often overlook the valuable temperature information that can effectively characterize battery aging during capacity degradation. Additionally, the Elman neural network, which is commonly employed for SOH estimation, exhibits several drawbacks, including slow training speed, a tendency to become trapped in local minima, and initialization of weights and thresholds with pseudo-random numbers, leading to unstable model performance. To address these issues, this study tackles the challenge of precise and effective SOH detection by proposing a method for estimating the SOH of lithium-ion batteries based on differential thermal voltammetry (DTV) and an SSA-Elman neural network. First, two health features (HFs) considering temperature factors and battery voltage are extracted from the differential thermal voltammetry curves and incremental capacity curves. Next, the Sparrow Search Algorithm (SSA) is employed to optimize the initial weights and thresholds of the Elman neural network, forming the SSA-Elman neural network model. To validate the performance, various neural networks, including the proposed SSA-Elman network, are tested on the Oxford battery aging dataset. The experimental results demonstrate that the method developed in this study achieves superior accuracy and robustness, with a mean absolute error (MAE) of less than 0.9% and a root mean square error (RMSE) below 1.4%.
Keywords: lithium-ion battery; state of health; differential thermal voltammetry; Sparrow Search Algorithm
17. Robustness Optimization Algorithm with Multi-Granularity Integration for Scale-Free Networks Against Malicious Attacks (Cited: 1)
Authors: ZHANG Yiheng, LI Jinhai. 《昆明理工大学学报(自然科学版)》 (PKU Core), 2025, Issue 1, pp. 54-71 (18 pages)
Abstract: Complex network models are frequently employed for simulating and studying diverse real-world complex systems. Among these models, scale-free networks typically exhibit greater fragility to malicious attacks. Consequently, enhancing the robustness of scale-free networks has become a pressing issue. To address this problem, this paper proposes a Multi-Granularity Integration Algorithm (MGIA), which aims to improve the robustness of scale-free networks while keeping the initial degree of each node unchanged, ensuring network connectivity, and avoiding the generation of multiple edges. The algorithm generates a multi-granularity structure from the initial network to be optimized, then uses different optimization strategies to optimize the networks at the various granular layers in this structure, and finally realizes information exchange between different granular layers, thereby further enhancing the optimization effect. We propose new network refresh, crossover, and mutation operators to ensure that the optimized network satisfies the given constraints. Meanwhile, we propose new network similarity and network dissimilarity evaluation metrics to improve the effectiveness of the optimization operators in the algorithm. In the experiments, the MGIA enhances the robustness of the scale-free network by 67.6%. This improvement is approximately 17.2% higher than the optimization effects achieved by eight existing complex network robustness optimization algorithms.
Keywords: complex network model; multi-granularity; scale-free networks; robustness; algorithm integration
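The robustness being optimized here is commonly quantified by how the largest connected component shrinks as the highest-degree node is removed repeatedly (the Schneider R measure). A short sketch with NetworkX on a synthetic scale-free graph (the graph generator and its size are assumptions):

```python
# Robustness R of a scale-free network under targeted (highest-degree-first) attack:
# R = (1/N) * sum over removals of the relative size of the largest remaining component.
# The synthetic Barabasi-Albert graph and its size are illustrative assumptions.
import networkx as nx

def robustness_R(G: nx.Graph) -> float:
    H = G.copy()
    n = G.number_of_nodes()
    total = 0.0
    for _ in range(n - 1):
        target = max(H.degree, key=lambda kv: kv[1])[0]   # current highest-degree node
        H.remove_node(target)
        largest = max((len(c) for c in nx.connected_components(H)), default=0)
        total += largest / n
    return total / n

G = nx.barabasi_albert_graph(300, 2, seed=1)
print("robustness R of the BA network:", round(robustness_R(G), 4))
```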
18. Short-Term Wind Power Forecast Based on STL-IAOA-iTransformer Algorithm: A Case Study in Northwest China (Cited: 2)
Authors: Zhaowei Yang, Bo Yang, Wenqi Liu, Miwei Li, Jiarong Wang, Lin Jiang, Yiyan Sang, Zhenning Pan. 《Energy Engineering》, 2025, Issue 2, pp. 405-430 (26 pages)
Abstract: Accurate short-term wind power forecasting plays a crucial role in maintaining the safety and economic efficiency of smart grids. Although numerous studies have employed various methods to forecast wind power, there remains a research gap in leveraging swarm intelligence algorithms to optimize the hyperparameters of the Transformer model for wind power prediction. To improve the accuracy of short-term wind power forecasts, this paper proposes a hybrid short-term wind power forecast approach named STL-IAOA-iTransformer, which is based on seasonal and trend decomposition using LOESS (STL) and an iTransformer model optimized by an improved arithmetic optimization algorithm (IAOA). First, to fully extract the power data features, STL is used to decompose the original data into components with less redundant information. The extracted components, together with the weather data, are then input into iTransformer for short-term wind power forecasting. The final predicted short-term wind power curve is obtained by combining the predicted components. To improve model accuracy, IAOA is employed to optimize the hyperparameters of iTransformer. The proposed approach is validated using real generation data from different seasons and different power stations in Northwest China, and ablation experiments have been conducted. Furthermore, to validate the superiority of the proposed approach under different wind characteristics, real power generation data from Southwest China are also used. Comparative results against six other state-of-the-art prediction models show that the proposed model fits the true generation series well and achieves high prediction accuracy.
Keywords: short-term wind power forecast; improved arithmetic optimization algorithm; iTransformer algorithm; SimuNPS
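The STL step is straightforward to reproduce with statsmodels; a sketch on a synthetic hourly wind-power-like series (the series, the period of 24, and the idea of forecasting each component separately and summing the predictions are assumptions):

```python
# Seasonal-trend decomposition with LOESS (STL) of a wind-power-like series.
# The synthetic hourly series and period=24 are illustrative assumptions; in the
# paper's approach each component would be forecast separately and then recombined.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

rng = np.random.default_rng(2)
n = 21 * 24                                              # three weeks of hourly samples
power = (
    50
    + 0.05 * np.arange(n)                                # slow trend
    + 15 * np.sin(2 * np.pi * np.arange(n) / 24)         # diurnal pattern
    + rng.normal(0, 5, n)                                # noise / residual
)
series = pd.Series(power)

result = STL(series, period=24, robust=True).fit()
components = pd.DataFrame({
    "trend": result.trend,
    "seasonal": result.seasonal,
    "resid": result.resid,
})
# STL is additive: the three components reconstruct the original series
print(components.head(3))
print("max reconstruction error:", float((components.sum(axis=1) - series).abs().max()))
```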
19. A LODBO algorithm for multi-UAV search and rescue path planning in disaster areas (Cited: 1)
Authors: Liman Yang, Xiangyu Zhang, Zhiping Li, Lei Li, Yan Shi. 《Chinese Journal of Aeronautics》, 2025, Issue 2, pp. 200-213 (14 pages)
Abstract: In disaster relief operations, multiple UAVs can be used to search for trapped people. In recent years, many researchers have proposed machine learning-based algorithms, sampling-based algorithms, and heuristic algorithms to solve the problem of multi-UAV path planning. Among these, the Dung Beetle Optimization (DBO) algorithm has been widely applied due to its diverse search patterns. However, the update strategies for the rolling and thieving dung beetles of the DBO algorithm are overly simplistic, potentially leading to an inability to fully explore the search space and a tendency to converge to local optima, so discovery of the optimal path is not guaranteed. To address these issues, we propose an improved DBO algorithm guided by the Landmark Operator (LODBO). Specifically, we first use tent mapping in the population update strategy, which enables the algorithm to generate initial solutions with enhanced diversity within the search space. Second, we expand the search range of the ball-rolling dung beetle by using the landmark factor. Finally, by using an adaptive factor that changes with the number of iterations, we improve the global search ability of the thieving dung beetle, making it more likely to escape from local optima. To verify the effectiveness of the proposed method, extensive simulation experiments are conducted; the results show that the LODBO algorithm obtains the optimal path in the shortest time compared with the Genetic Algorithm (GA), the Grey Wolf Optimizer (GWO), the Whale Optimization Algorithm (WOA), and the original DBO algorithm on the disaster search and rescue task set.
Keywords: unmanned aerial vehicle; path planning; metaheuristic algorithm; DBO algorithm; NP-hard problems
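Of the three modifications, the tent-map initialization is the easiest to illustrate in isolation; a minimal sketch (the tent parameter, bounds, and population size are assumptions):

```python
# Tent chaotic map initialization for a metaheuristic population: iterate the tent
# map per dimension and scale the chaotic values into the search bounds.
# The tent parameter (0.7), bounds, and population size are illustrative assumptions.
import numpy as np

def tent_map_population(n_agents, dim, lower, upper, r=0.7, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.random((n_agents, dim))                 # random seeds in (0, 1)
    for _ in range(20):                             # iterate the map to spread the points
        x = np.where(x < r, x / r, (1.0 - x) / (1.0 - r))
    return lower + x * (upper - lower)

lower = np.array([0.0, 0.0, -5.0])
upper = np.array([10.0, 10.0, 5.0])
pop = tent_map_population(n_agents=30, dim=3, lower=lower, upper=upper)
print(pop.shape, pop.min(axis=0).round(2), pop.max(axis=0).round(2))
```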
20. Bearing capacity prediction of open caissons in two-layered clays using five tree-based machine learning algorithms (Cited: 2)
Authors: Rungroad Suppakul, Kongtawan Sangjinda, Wittaya Jitchaijaroen, Natakorn Phuksuksakul, Suraparb Keawsawasvong, Peem Nuaklong. 《Intelligent Geoengineering》, 2025, Issue 2, pp. 55-65 (11 pages)
Abstract: Open caissons are widely used in foundation engineering because of their load-bearing efficiency and adaptability to diverse soil conditions. However, accurately predicting their undrained bearing capacity in layered soils remains a complex challenge. This study presents a novel application of five tree-based ensemble machine learning (ML) algorithms, namely random forest (RF), gradient boosting machine (GBM), extreme gradient boosting (XGBoost), adaptive boosting (AdaBoost), and categorical boosting (CatBoost), to predict the undrained bearing capacity factor (Nc) of circular open caissons embedded in two-layered clay on the basis of results from finite element limit analysis (FELA). The input dataset consists of 1188 numerical simulations using the Tresca failure criterion, varying in geometrical and soil parameters. The FELA was performed via OptumG2 software with adaptive meshing techniques and verified against existing benchmark studies. The ML models were trained on 70% of the dataset and tested on the remaining 30%. Their performance was evaluated using six statistical metrics: coefficient of determination (R²), mean absolute error (MAE), root mean squared error (RMSE), index of scatter (IOS), RMSE-to-standard deviation ratio (RSR), and variance explained factor (VAF). The results indicate that all the models achieved high accuracy, with R² values exceeding 97.6% and RMSE values below 0.02. Among them, AdaBoost and CatBoost consistently outperformed the other methods across both the training and testing datasets, demonstrating superior generalizability and robustness. The proposed ML framework offers an efficient, accurate, and data-driven alternative to traditional methods for estimating caisson capacity in stratified soils. This approach can help reduce computational costs while improving reliability in the early stages of foundation design.
Keywords: two-layered clay; open caisson; tree-based algorithms; FELA; machine learning
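A sketch of the evaluation pattern used above, comparing a few tree-based ensembles on the same 70/30 split with R², MAE, and RMSE (synthetic features stand in for the FELA-derived dataset; XGBoost and CatBoost are omitted so the sketch only needs scikit-learn):

```python
# Compare tree-based ensemble regressors with R2, MAE, and RMSE on one 70/30 split.
# Synthetic features stand in for the FELA-derived dataset; XGBoost/CatBoost are
# omitted here so the example only requires scikit-learn.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor, AdaBoostRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)
# assumed inputs: embedment ratio, top-layer thickness ratio, strength ratio, adhesion factor
X = rng.uniform(0.0, 1.0, size=(1188, 4))
Nc = 6.0 + 3.0 * X[:, 0] + 2.0 * X[:, 1] * X[:, 2] + rng.normal(0, 0.1, 1188)

X_tr, X_te, y_tr, y_te = train_test_split(X, Nc, test_size=0.3, random_state=0)

models = {
    "RandomForest": RandomForestRegressor(n_estimators=300, random_state=0),
    "GradientBoosting": GradientBoostingRegressor(random_state=0),
    "AdaBoost": AdaBoostRegressor(n_estimators=300, random_state=0),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    rmse = float(np.sqrt(mean_squared_error(y_te, pred)))
    print(f"{name:>16}: R2={r2_score(y_te, pred):.4f}  "
          f"MAE={mean_absolute_error(y_te, pred):.4f}  RMSE={rmse:.4f}")
```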