Journal Articles
78 articles found
1. Priority-Based Scheduling and Orchestration in Edge-Cloud Computing: A Deep Reinforcement Learning-Enhanced Concurrency Control Approach
Authors: Mohammad A Al Khaldy, Ahmad Nabot, Ahmad Al-Qerem, Mohammad Alauthman, Amina Salhi, Suhaila Abuowaida, Naceur Chihaoui 《Computer Modeling in Engineering & Sciences》 2025, No. 10, pp. 673-697 (25 pages)
The exponential growth of Internet of Things (IoT) devices has created unprecedented challenges in data processing and resource management for time-critical applications. Traditional cloud computing paradigms cannot meet the stringent latency requirements of modern IoT systems, while pure edge computing faces resource constraints that limit processing capabilities. This paper addresses these challenges by proposing a novel Deep Reinforcement Learning (DRL)-enhanced priority-based scheduling framework for hybrid edge-cloud computing environments. Our approach integrates adaptive priority assignment with a two-level concurrency control protocol that ensures both optimal performance and data consistency. The framework introduces three key innovations: (1) a DRL-based dynamic priority assignment mechanism that learns from system behavior, (2) a hybrid concurrency control protocol combining local edge validation with global cloud coordination, and (3) an integrated mathematical model that formalizes sensor-driven transactions across edge-cloud architectures. Extensive simulations across diverse workload scenarios demonstrate significant quantitative improvements over state-of-the-art heuristic and meta-heuristic approaches: 40% latency reduction, 25% throughput increase, 85% resource utilization (compared to 60% for heuristic methods), 40% reduction in energy consumption (300 vs. 500 J per task), and a 50% improvement in scalability factor (1.8 vs. 1.2 for EDF). These results establish the framework as a robust solution for large-scale IoT and autonomous applications requiring real-time processing with consistency guarantees.
Keywords: edge computing; cloud computing; scheduling algorithms; orchestration strategies; deep reinforcement learning; concurrency control; real-time systems; IoT
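The abstract's core idea, learning where to place tasks from observed latency, can be illustrated with a tabular reinforcement learning toy. This is a minimal sketch under invented assumptions (two urgency classes, a fixed latency table, bandit-style one-step updates), not the paper's DRL framework:

```python
import random

# Toy setup: states are task urgency classes, actions are 0 = run on edge,
# 1 = offload to cloud. The latency table and class names are invented for
# illustration; rewards are negative latencies.
LATENCY = {
    ("urgent", 0): 1.0, ("urgent", 1): 5.0,          # urgent tasks want the edge
    ("background", 0): 3.0, ("background", 1): 1.5,  # bulk work suits the cloud
}

def train(episodes=2000, alpha=0.1, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {key: 0.0 for key in LATENCY}
    for _ in range(episodes):
        s = rng.choice(("urgent", "background"))
        if rng.random() < eps:                       # epsilon-greedy exploration
            a = rng.choice((0, 1))
        else:
            a = max((0, 1), key=lambda act: q[(s, act)])
        r = -LATENCY[(s, a)]                         # reward = negative latency
        q[(s, a)] += alpha * (r - q[(s, a)])         # one-step value update
    return q

q = train()
policy = {s: max((0, 1), key=lambda a: q[(s, a)]) for s in ("urgent", "background")}
```

With enough episodes the greedy policy sends urgent tasks to the edge and background tasks to the cloud, which is the priority-assignment behavior the framework learns at much larger scale.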
2. Optimized Phishing Detection with Recurrent Neural Network and Whale Optimizer Algorithm
Authors: Brij Bhooshan Gupta, Akshat Gaurav, Razaz Waheeb Attar, Varsha Arya, Ahmed Alhomoud, Kwok Tai Chui 《Computers, Materials & Continua》 (SCIE, EI) 2024, No. 9, pp. 4895-4916 (22 pages)
Phishing attacks present a persistent and evolving threat in the cybersecurity landscape, necessitating the development of more sophisticated detection methods. Traditional machine learning approaches to phishing detection have relied heavily on feature engineering and have often fallen short in adapting to the dynamically changing patterns of phishing Uniform Resource Locators (URLs). Addressing these challenges, we introduce a framework that integrates the sequential data processing strengths of a Recurrent Neural Network (RNN) with the hyperparameter optimization prowess of the Whale Optimization Algorithm (WOA). Our model capitalizes on an extensive Kaggle dataset featuring over 11,000 URLs, each delineated by 30 attributes. The WOA's hyperparameter optimization enhances the RNN's performance, evidenced by a meticulous validation process. The results, encapsulated in precision, recall, and F1-score metrics, surpass baseline models, achieving an overall accuracy of 92%. This study not only demonstrates the RNN's proficiency in learning complex patterns but also underscores the WOA's effectiveness in refining machine learning models for the critical task of phishing detection.
Keywords: phishing detection; Recurrent Neural Network (RNN); Whale Optimization Algorithm (WOA); cybersecurity; machine learning; optimization
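As a rough illustration of how WOA-style hyperparameter search works, the sketch below runs the standard WOA moves (encircling the best whale, random search, spiral bubble-net) on a stand-in quadratic objective. In the paper's setting the objective would be the RNN's validation loss; all constants and bounds here are illustrative assumptions:

```python
import numpy as np

def f(x):
    # Stand-in objective: distance to an arbitrary optimum at (3, 3).
    return float(np.sum((x - 3.0) ** 2))

def woa(dim=2, n_whales=20, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-10, 10, (n_whales, dim))
    best = min(X, key=f).copy()
    for t in range(iters):
        a = 2 * (1 - t / iters)                  # coefficient decays 2 -> 0
        for i in range(n_whales):
            p, l = rng.random(), rng.uniform(-1, 1)
            A = 2 * a * rng.random(dim) - a
            C = 2 * rng.random(dim)
            if p < 0.5:
                if np.all(np.abs(A) < 1):        # exploit: encircle the best
                    X[i] = best - A * np.abs(C * best - X[i])
                else:                            # explore: follow a random whale
                    rand = X[rng.integers(n_whales)]
                    X[i] = rand - A * np.abs(C * rand - X[i])
            else:                                # spiral bubble-net update
                D = np.abs(best - X[i])
                X[i] = D * np.exp(l) * np.cos(2 * np.pi * l) + best
            if f(X[i]) < f(best):
                best = X[i].copy()
    return best

best = woa()
```

For hyperparameter tuning, each position vector would be decoded into, e.g., a learning rate and hidden size before evaluating the model.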
3. Evolutionary-assisted reinforcement learning for reservoir real-time production optimization under uncertainty (cited 2 times)
Authors: Zhong-Zheng Wang, Kai Zhang, Guo-Dong Chen, Jin-Ding Zhang, Wen-Dong Wang, Hao-Chen Wang, Li-Ming Zhang, Xia Yan, Jun Yao 《Petroleum Science》 (SCIE, EI, CAS, CSCD) 2023, No. 1, pp. 261-276 (16 pages)
Production optimization has gained increasing attention from the smart oilfield community because it can increase economic benefits and oil recovery substantially. While existing methods can produce high-optimality results, they cannot be applied to real-time optimization of large-scale reservoirs due to high computational demands. In addition, most methods generally assume that the reservoir model is deterministic and ignore the uncertainty of the subsurface environment, making the obtained scheme unreliable for practical deployment. In this work, an efficient and robust method, namely evolutionary-assisted reinforcement learning (EARL), is proposed to achieve real-time production optimization under uncertainty. Specifically, the production optimization problem is modeled as a Markov decision process in which a reinforcement learning agent interacts with the reservoir simulator to train a control policy that maximizes the specified goals. To deal with the brittle convergence properties and lack of efficient exploration strategies of reinforcement learning approaches, a population-based evolutionary algorithm is introduced to assist the training of agents, which provides diverse exploration experiences and promotes stability and robustness due to its inherent redundancy. Compared with prior methods that only optimize a solution for a particular scenario, the proposed approach trains a policy that can adapt to uncertain environments and make real-time decisions to cope with unknown changes. The trained policy, represented by a deep convolutional neural network, can adaptively adjust the well controls based on different reservoir states. Simulation results on two reservoir models show that the proposed approach not only outperforms the RL and EA methods in terms of optimization efficiency but also has strong robustness and real-time decision capacity.
Keywords: production optimization; deep reinforcement learning; evolutionary algorithm; real-time optimization; optimization under uncertainty
4. A dynamic fusion path planning algorithm for mobile robots incorporating improved IB-RRT∗ and deep reinforcement learning (cited 1 time)
Authors: 刘安东, ZHANG Baixin, CUI Qi, ZHANG Dan, NI Hongjie 《High Technology Letters》 (EI, CAS) 2023, No. 4, pp. 365-376 (12 pages)
Dynamic path planning is crucial for mobile robots to navigate successfully in unstructured environments. To achieve a globally optimal path and real-time dynamic obstacle avoidance during movement, a dynamic path planning algorithm incorporating improved IB-RRT∗ and deep reinforcement learning (DRL) is proposed. First, an improved IB-RRT∗ algorithm is proposed for global path planning by combining double elliptic subset sampling and probabilistic central circle target bias. Then, to tackle the slow response to dynamic obstacles and inadequate obstacle avoidance of traditional local path planning algorithms, deep reinforcement learning is utilized to predict the movement trend of dynamic obstacles, leading to a dynamic fusion path planning. Finally, simulation and experiment results demonstrate that the proposed improved IB-RRT∗ algorithm has higher convergence speed and search efficiency compared with traditional Bi-RRT∗, Informed-RRT∗, and IB-RRT∗ algorithms. Furthermore, the proposed fusion algorithm can effectively perform real-time obstacle avoidance and navigation tasks for mobile robots in unstructured environments.
Keywords: mobile robot; improved IB-RRT∗ algorithm; deep reinforcement learning (DRL); real-time dynamic obstacle avoidance
5. Hybrid Deep Learning-Improved BAT Optimization Algorithm for Soil Classification Using Hyperspectral Features
Authors: S. Prasanna Bharathi, S. Srinivasan, G. Chamundeeswari, B. Ramesh 《Computer Systems Science & Engineering》 (SCIE, EI) 2023, No. 4, pp. 579-594 (16 pages)
Nowadays, remote sensing (RS) techniques are used for earth observation and for detecting soil types with high accuracy and reliability. The technique provides a perspective view of spatial resolution and aids in instantaneous measurement of soil minerals and their characteristics. Soil classification using image enhancement faces a few challenges, such as locating and plotting soil boundaries, slopes, hazardous areas, drainage conditions, land use, and vegetation. Traditional approaches have drawbacks such as manual involvement, which causes inaccuracy due to human interference, time consumption, and inconsistent prediction. To overcome these drawbacks and improve the predictive analysis of soil characteristics, we propose a hybrid deep learning improved BAT optimization algorithm (HDIB) for soil classification using remote sensing hyperspectral features. In HDIB, we propose a spontaneous BAT optimization algorithm for extracting both spectral and spatial features by choosing pure pixels from the hyperspectral (HS) image. Spectral-spatial vectors used as training illustrations are obtained by merging spatial and spectral vectors by means of a priority stacking methodology. Then, a recurrent deep learning (DL) neural network (NN) is used for classifying the HS images, considering the Pavia University, Salinas, and Tamil Nadu Hill Scene datasets, which in turn improves the reliability of classification. Finally, the performance of the proposed HDIB soil classifier is compared and analyzed with existing methodologies such as the single layer perceptron (SLP), convolutional neural network (CNN), and deep metric learning (DML), showing improved classification accuracies of 99.87%, 98.34%, and 99.9% for the Tamil Nadu Hills, Pavia University, and Salinas scene datasets, respectively.
Keywords: HDIB; bat optimization algorithm; recurrent deep learning neural network; convolutional neural network; single layer perceptron; hyperspectral images; deep metric learning
6. Ship motion extreme short time prediction of ship pitch based on diagonal recurrent neural network (cited 3 times)
Authors: SHEN Yan, XIE Mei-ping 《Journal of Marine Science and Application》 2005, No. 2, pp. 56-60 (5 pages)
A DRNN (diagonal recurrent neural network) and its RPE (recurrent prediction error) learning algorithm are proposed in this paper. The simple structure of the DRNN reduces the computational load. The principle of the RPE learning algorithm is to adjust weights along the Gauss-Newton direction; it is unnecessary to calculate second-order derivatives or inverse matrices, and its unbiasedness is proved. Applied to the extremely short-time prediction of large-ship pitch, satisfactory results are obtained. The prediction performance of this algorithm is compared with that of auto-regression and the periodogram method, and the comparison shows that the proposed algorithm is feasible.
Keywords: extreme short-time prediction; diagonal recurrent neural network; recurrent prediction error learning algorithm; unbiasedness
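The diagonal structure the abstract credits with reducing computation can be shown directly: because the recurrent weight matrix is diagonal, the feedback term is an elementwise product instead of a full matrix multiply. A minimal forward-pass sketch with invented dimensions and weights (not the authors' network or the RPE training rule):

```python
import numpy as np

# Diagonal recurrent neural network (DRNN) forward pass. Each hidden unit
# feeds back only onto itself, so the recurrent parameters are a vector of
# diagonal entries rather than an n_hidden x n_hidden matrix.
rng = np.random.default_rng(0)
n_in, n_hidden = 1, 8
W_in = rng.normal(size=(n_hidden, n_in)) * 0.5   # input weights
w_rec = rng.normal(size=n_hidden) * 0.5          # diagonal recurrent weights
w_out = rng.normal(size=n_hidden) * 0.5          # output weights

def forward(x_seq):
    h = np.zeros(n_hidden)
    outputs = []
    for x in x_seq:
        # elementwise self-feedback (w_rec * h) replaces W_rec @ h
        h = np.tanh(W_in @ np.atleast_1d(x) + w_rec * h)
        outputs.append(w_out @ h)
    return np.array(outputs)

pitch = np.sin(np.linspace(0, 4 * np.pi, 50))    # toy "ship pitch" series
pred = forward(pitch)
```

An untrained network just produces a bounded response; in the paper the weights would be fitted with the RPE algorithm before prediction.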
7. Enhanced Marathi Speech Recognition Facilitated by Grasshopper Optimisation-Based Recurrent Neural Network
Authors: Ravindra Parshuram Bachate, Ashok Sharma, Amar Singh, Ayman A. Aly, Abdulaziz H. Alghtani, Dac-Nhuong Le 《Computer Systems Science & Engineering》 (SCIE, EI) 2022, No. 11, pp. 439-454 (16 pages)
Communication is a significant part of being human and living in the world. Diverse languages and their variations exist; thus, a person may speak a language yet be unable to communicate effectively with someone who speaks it in a different accent. Numerous application fields such as education, mobility, smart systems, security, and health care systems make extensive use of speech or voice recognition models. However, most studies focus on Arabic, Asian, and English languages while ignoring other significant languages like Marathi, which motivates broader research on regional languages. It is necessary to understand the speech recognition field, in which the major stages are feature extraction and classification. This paper focuses on developing a speech recognition model for the Marathi language by optimizing a Recurrent Neural Network (RNN). Here, the input signal is preprocessed by smoothing and median filtering. After preprocessing, feature extraction is carried out using MFCC and spectral features to get precise features from the input Marathi speech corpus. The optimized RNN classifier is then used for speech recognition, where the number of hidden neurons in the RNN is optimized by the Grasshopper Optimization Algorithm (GOA). Finally, comparison with conventional techniques shows that the proposed model outperforms most competing models on a benchmark dataset.
Keywords: deep learning; grasshopper optimization algorithm; recurrent neural network; speech recognition; word error rate
8. A novel compensation-based recurrent fuzzy neural network and its learning algorithm (cited 6 times)
Authors: WU Bo, WU Ke, LU JianHong 《Science in China (Series F)》 2009, No. 1, pp. 41-51 (11 pages)
Based on a detailed study of several kinds of fuzzy neural networks, we propose a novel compensation-based recurrent fuzzy neural network (CRFNN) by adding a recurrent element and a compensatory element to the conventional fuzzy neural network. We then propose a sequential learning method for the structure identification of the CRFNN in order to determine the fuzzy rules and their correlative parameters effectively. Furthermore, we improve the BP algorithm based on the characteristics of the proposed CRFNN to train the network. By modeling typical nonlinear systems, we conclude that the proposed CRFNN has excellent dynamic response and strong learning ability.
Keywords: compensation-based recurrent fuzzy neural network; sequential learning method; improved BP algorithm; nonlinear system
9. A Data Anomaly Detection and Cleaning Method Based on Flight Trajectory Prediction
Authors: 赵元棣, 胡译心, 汤盛家, 李桃 《科学技术与工程》 (PKU Core) 2025, No. 26, pp. 11398-11406 (9 pages)
To effectively detect and clean anomalous flight trajectory data, this paper studies the duplication, missing-value, and error problems that readily arise during data collection and proposes a machine-learning-based anomaly detection and cleaning method with high accuracy and robustness. First, the data are deduplicated, interpolated, and normalized by combining linear interpolation with min-max normalization. Second, a flight trajectory prediction model is built on a gated recurrent unit (GRU). Finally, an SVDD model performs anomaly detection on the trajectory data; when the actual data deviate substantially from the predicted data, the predictions are substituted to achieve cleaning. Results show that, compared with other LSTM-based models, the cleaned flight trajectory data obtained by this method are more accurate, with an average F1 score of 0.932, closely approximating the original trajectories. Tests against three attack types, position offset (jamming), altitude deviation (tampering), and route crossing (substitution), demonstrate high robustness. The method accurately predicts flight trajectories, effectively detects and cleans anomalous data, improves data quality, and supports accurate analysis of flight operations.
Keywords: data cleaning; flight trajectory prediction; machine learning; gated recurrent unit (GRU)
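The prediction-then-substitution cleaning step described above can be sketched as follows. Here a moving average stands in for the paper's GRU predictor and a fixed deviation threshold stands in for the SVDD decision boundary; both substitutions, and all numbers, are simplifying assumptions:

```python
import numpy as np

def clean(observed, window=5, threshold=1.0):
    # "Predict" each point from the recent past (moving average as a
    # stand-in for the GRU model), flag points whose deviation exceeds
    # the threshold (stand-in for SVDD), and substitute the prediction.
    pad = np.concatenate([observed[:1].repeat(window - 1), observed])
    predicted = np.convolve(pad, np.ones(window) / window, mode="valid")
    outlier = np.abs(observed - predicted) > threshold
    cleaned = np.where(outlier, predicted, observed)
    return cleaned, outlier

track = np.linspace(0.0, 10.0, 50)   # smooth toy trajectory coordinate
track[20] += 8.0                     # injected spike (a tampered point)
cleaned, outlier = clean(track)
```

The injected spike is flagged and replaced by a value near the local trend, which is the cleaning effect the paper evaluates with its F1 score.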
10. Research on an HBA-GRU Deformation Monitoring Model for Hydropower Station Dams
Authors: 黄勇, 刘昱玚, 宋璇, 宋锦焘, 朱海晨, 张盛飞 《电网与清洁能源》 (PKU Core) 2025, No. 9, pp. 95-100 (6 pages)
The dam is the core water-retaining structure of a hydropower station, and accurate monitoring of dam deformation behavior is an important means of ensuring the station's safety. To address the strong nonlinearity of dam deformation and the influence of monitoring model parameters, this work fuses advanced deep learning with a bio-inspired optimization algorithm: the honey badger optimization algorithm (HBA) is used to optimize the hyperparameters of a deep learning gated recurrent unit (GRU) model, and the resulting HBA-GRU hybrid model is applied to dam deformation monitoring and prediction. Empirical results on deformation monitoring data from a concrete-faced rockfill dam at a hydropower station show that the proposed hybrid model maintains high prediction accuracy while exhibiting good generalization, providing effective technical support for building safety monitoring models for similar hydropower projects.
Keywords: hydropower station dam; safety monitoring; deformation prediction; deep learning; gated recurrent unit; honey badger optimization algorithm
11. A Dam Deformation Prediction Model Based on IKOA-Optimized SAGRU
Authors: 胡伟泊, 赵二峰, 胡灵芝, 黎祎 《人民长江》 (PKU Core) 2025, No. 6, pp. 222-228 (7 pages)
To fully exploit the useful information in dam deformation monitoring data and improve the prediction accuracy of monitoring models, a dam deformation prediction model based on IKOA-optimized SAGRU is proposed. First, a self-attention mechanism is introduced into the gated recurrent unit (GRU): by computing the contribution of features along the time dimension, it effectively captures the key temporal features in the measured data and improves the model's sensitivity to key information. Then the Kepler optimization algorithm (KOA) is improved through three strategies, chaotic-map initialization, Runge-Kutta position updates, and ESQ enhancement, and the improved algorithm automatically tunes the hyperparameters of the self-attention GRU (SAGRU). A case study shows that the improved Kepler optimization algorithm (IKOA) outperforms the sparrow search algorithm, grey wolf optimizer, northern goshawk optimizer, and standard KOA in both convergence speed and accuracy, and that the model's RMSE is 48.45%, 54.56%, and 58.14% lower than that of the GRU, LSTM, and XGBoost models, respectively. The optimized model fits especially well at the key inflection points and peaks of the measured displacement, indicating that it can comprehensively mine the temporal features in dam deformation series, overcome the limited memory capacity of GRU and the slow convergence and local optima of traditional optimization algorithms, and significantly improve the accuracy of dam deformation prediction.
Keywords: dam deformation monitoring; gated recurrent unit (GRU); improved Kepler optimization algorithm (IKOA); self-attention mechanism; deep learning; Xiaowan double-curvature arch dam
12. A Short-Term Power Load Forecasting Algorithm Based on LGWO-XGBoost-LightGBM-GRU (cited 2 times)
Authors: 王海文, 谭爱国, 彭赛, 黄佳欣怡, 田相鹏, 廖红华, 柳俊 《湖北民族大学学报(自然科学版)》 2025, No. 1, pp. 73-79 (7 pages)
To address the low accuracy of short-term power load forecasting caused by the difficulty of extracting features from historical load data, a stacked-generalization ensemble algorithm, the logistic grey wolf optimizer-extreme gradient boosting-light gradient boosting machine-gated recurrent unit (LGWO-XGBoost-LightGBM-GRU) algorithm, is proposed. The algorithm first improves the grey wolf optimizer (GWO) with a logistic map to obtain LGWO, then uses LGWO to tune the parameters of XGBoost, LightGBM, and GRU. XGBoost and LightGBM perform preliminary extraction of different data features, the extracted features are merged into the historical load dataset as input, and GRU produces the final load forecast. Validation on load forecasting for an industrial park shows that, compared with the least squares support vector machine (LS-SVM) algorithm, the root mean square error is reduced by 68.85%, the mean absolute error by 69.57%, and the mean absolute percentage error by 69.97%, while the coefficient of determination improves by 8.42%. The algorithm improves the accuracy of short-term power load forecasting.
Keywords: short-term load forecasting; ensemble learning; grey wolf optimizer; extreme gradient boosting; light gradient boosting machine; gated recurrent unit
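The logistic-map improvement to GWO described above amounts to seeding the wolf pack with a chaotic sequence before running the usual alpha/beta/delta position updates. A sketch on a toy objective (standing in for forecasting error; all bounds and settings are illustrative assumptions):

```python
import numpy as np

def f(x):
    # Stand-in objective with optimum at the origin.
    return float(np.sum(x ** 2))

def lgwo(dim=2, n=20, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    # Logistic map x_{k+1} = 4 x_k (1 - x_k): a chaotic sequence used to
    # spread the initial pack more evenly than plain uniform sampling.
    z = np.empty((n, dim))
    z[0] = rng.uniform(0.1, 0.9, dim)
    for i in range(1, n):
        z[i] = 4 * z[i - 1] * (1 - z[i - 1])
    X = z * 20 - 10                        # map chaotic values into [-10, 10]
    for t in range(iters):
        order = np.argsort([f(x) for x in X])
        leaders = X[order[:3]].copy()      # alpha, beta, delta wolves
        a = 2 * (1 - t / iters)            # exploration factor decays 2 -> 0
        for i in range(n):
            pulls = []
            for L in leaders:
                A = 2 * a * rng.random(dim) - a
                C = 2 * rng.random(dim)
                pulls.append(L - A * np.abs(C * L - X[i]))
            X[i] = np.mean(pulls, axis=0)  # average pull toward the leaders
    return min(X, key=f)

best = lgwo()
```

In the paper's pipeline, each wolf position would encode hyperparameters of XGBoost, LightGBM, or GRU rather than a point on a test function.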
13. Chinese Language Model Adaptive Method Based on Recurrent Neural Network
Authors: Jiangjiang Li, Jiaxiang Wang, Lijuan Feng, Yachao Zhang 《IJLAI Transactions on Science and Engineering》 2025, No. 1, pp. 29-34 (6 pages)
Deep learning is increasingly widely used in natural language processing. Compared with the traditional n-gram statistical language model, recurrent neural network (RNN) modeling has shown great advantages in language modeling and has been gradually applied in speech recognition, machine translation, and other fields. However, RNN language models are currently mostly trained offline; for different speech recognition tasks, language differences between the training corpus and the recognition task degrade the recognition rate of speech recognition systems. While training a Chinese language model with RNN modeling, an online RNN model self-adaptation algorithm is proposed, which takes the preliminary recognition results of speech signals as a corpus to continue training the model, so that the adapted RNN model matches the recognition task as closely as possible. Experimental results show that the adaptive model effectively reduces the language difference between the language model and the recognition task, and the system's recognition rate improves further after re-scoring the Chinese word confusion network; this has been verified in a practical Chinese speech recognition system.
Keywords: deep learning; recurrent neural network; Chinese language model; self-adaptation algorithm
14. Research on Secure Communication and Intrusion Detection for Smart Energy Networks Based on Blockchain Technology
Authors: 张宗包, 郝蛟, 谢天 《中国测试》 (PKU Core) 2025, Supplement 1, pp. 289-295 (7 pages)
The smart grid is an important component of modern power systems, and its complexity and interconnectivity pose security challenges for data transmission and storage. Traditional security measures struggle to cope with growing network threats, so this paper proposes a secure communication and intrusion detection method for smart energy networks based on blockchain and deep learning. Blockchain technology is applied to improve the transparency and tamper-resistance of data transactions, and a digital-twin-driven network model is constructed. Applying deep learning to intrusion detection, a bidirectional gated recurrent unit detection method based on a self-attention mechanism is proposed, and its performance is evaluated with a confusion matrix and receiver operating characteristic curves. Experimental results show that the bidirectional gated recurrent unit model achieves 99.73% accuracy, 97.3% precision, and a 97.95% detection rate on the intrusion detection task. Compared with traditional long short-term memory networks and gated recurrent units, the bidirectional gated recurrent unit shows significant advantages on all performance metrics.
Keywords: smart energy network; secure network communication; blockchain; deep learning algorithm; bidirectional gated recurrent unit
15. Value of an Ultrasound-Image-Based Deep Learning Algorithm in Predicting the Risk of Unexplained Recurrent Pregnancy Loss
Author: 王小亚 《中国社区医师》 2025, No. 4, pp. 78-80 (3 pages)
Objective: To analyze the value of an ultrasound-image-based deep learning algorithm in predicting the risk of unexplained recurrent pregnancy loss (URPL). Methods: 192 patients treated at the First People's Hospital of Li County, Longnan, from January 2021 to December 2023 were selected as the study group, and 215 pregnant women with no history of early miscarriage over the same period as the control group. Endometrial ultrasound images were collected in the mid-luteal phase, clinical data were gathered, and a ResNet-50 model was built. The model's training results were recorded and its predictive performance analyzed. Results: There were no statistically significant differences between the two groups in age, spiral artery pulsatility index, spiral artery resistance index, uterine artery pulsatility index, uterine artery resistance index, endometrial thickness, follicle-stimulating hormone, luteinizing hormone, estradiol, or anti-Mullerian hormone (P>0.05). Training and validation accuracy were low initially, rose rapidly after a few epochs, and stabilized close to 1.0; training loss was high initially, declined steadily, and approached 0 after the fourth epoch, with validation loss also falling over training to near the training-set level. The area under the receiver operating characteristic curve was 0.889, the calibration curve showed close agreement between predicted and observed probabilities, the decision curve showed significant net benefit, accuracy and precision were both high, and the Brier score was close to 0. Conclusion: The ultrasound-image-based deep learning algorithm has high value in predicting URPL risk, showing strong discrimination, accuracy, and precision, and can provide an objective basis for clinical decision-making.
Keywords: unexplained recurrent pregnancy loss; ultrasound image; deep learning algorithm
16. A Power Quality Disturbance Identification Method Based on RQA and DAGSVM
Authors: 陈武, 钟建伟, 杨永超, 梁会军 《计算机仿真》 2025, No. 1, pp. 52-56 (5 pages)
To address the feature overlap and insufficient classification performance caused by the random, variable nature of power quality disturbances (PQDs), a new PQD classification method combining recurrence quantification analysis (RQA) and a directed acyclic graph support vector machine (DAGSVM) is proposed. First, RQA, grounded in complex network theory, quantitatively characterizes the recurrence plots of power quality disturbances and constructs the feature matrix. Second, a PQD classification model is built with DAGSVM. Finally, a teaching-learning-based optimization algorithm with discovery learning tunes the penalty coefficient and kernel function parameters of the PQD classifier to improve its performance. Results show that the method achieves high recognition accuracy for PQDs and good noise immunity.
Keywords: power quality disturbance signal; classification; teaching-learning-based optimization algorithm; recurrence quantification analysis
17. Design of a Deep-Learning-Based Intelligent Analysis Algorithm for Hospital Financial Data
Authors: 李蒙, 王贇 《电子设计工程》 2025, No. 19, pp. 178-182 (5 pages)
To improve the accuracy and efficiency of hospital financial data analysis, an intelligent analysis algorithm based on deep learning is proposed. A bidirectional gated recurrent unit network extracts the temporal features of financial data, and an attention mechanism is introduced to strengthen the model's focus on key information. To improve performance, an improved slime mould algorithm optimizes the model hyperparameters, yielding a fused deep prediction algorithm. Experiments show that the proposed algorithm performs well on hospital financial forecasting, with an MAE of 0.17, MSE of 0.04, RMSE of 0.20, MAPE of 2.89%, and R2 of 0.95, significantly outperforming the comparison models in prediction accuracy. The algorithm also converges at the 70th iteration with an RMSE of 0.22, converging faster and training more efficiently than the comparison models. Combining accuracy and efficiency, the algorithm provides decision support for hospital financial management and has good practical value.
Keywords: deep learning; hospital financial data; bidirectional gated recurrent unit network; attention mechanism; improved slime mould algorithm
18. Electromagnetic Modeling and Plasma Parameter Inversion with a JEC-FDTD-Equivalent Recurrent Neural Network (cited 2 times)
Authors: 覃一澜, 马嘉禹, 付海洋, 徐丰 《电波科学学报》 (CSCD, PKU Core) 2024, No. 3, pp. 552-560 (9 pages)
Electromagnetic wave propagation in magnetized plasma is an important research topic. For electromagnetic-plasma coupling problems in specific scenarios, effective and accurate equation modeling and parameter solving are highly valuable and challenging, and are key to exploring the complex nonlinear interaction mechanisms between electromagnetic waves and plasma. This paper designs a recurrent neural network (RNN) for forward and inverse modeling of electromagnetic plasma. The network's forward propagation is equivalent to the current density convolution finite-difference time-domain (JEC-FDTD) method at arbitrary magnetic inclination angles, so it can solve a given electromagnetic modeling problem and lends itself to large-scale parallel computation. By constructing a differentiable forward simulation, the JEC-FDTD method can compute gradients accurately and efficiently via automatic differentiation, and the inverse problem is then solved by training the network. The method can therefore use observed time-domain scattered-field signals to invert important plasma parameters. The combination of JEC-FDTD and the RNN yields strong synergy: the model is interpretable and computationally efficient, benefits from the optimization strategies and dedicated hardware support of deep learning, and is applicable to electromagnetic modeling and plasma parameter inversion in various simulation scenarios.
Keywords: current density convolution finite-difference time-domain (JEC-FDTD) method; magnetized plasma; recurrent neural network (RNN); physics-inspired machine learning algorithm; parameter inversion
19. An Intelligent Dam Deformation Prediction Method Based on a Chaos Cloud Quantum Bat CNN-GRU (cited 7 times)
Authors: 陈以浩, 李明伟, 安小刚, 王宇田, 徐瑞喆 《哈尔滨工程大学学报》 (EI, CAS, CSCD, PKU Core) 2024, No. 1, pp. 110-118 (9 pages)
Dam deformation is influenced by complex factors and is difficult to forecast accurately. To improve the prediction accuracy of dam deformation in dam safety management, this paper starts from the strong nonlinearity of the dam deformation time series, viewed as a nonlinear dynamical system. A deep convolutional neural network is introduced to mine dam deformation and its spatial influence characteristics, and a gated recurrent unit to mine its temporal characteristics, yielding a convolutional neural network-gated recurrent unit (CNN-GRU) hybrid deep learning network for dam deformation prediction. To obtain the best hyperparameters of the CNN-GRU network, a chaos cloud quantum bat algorithm is introduced and a hyperparameter selection method based on it is established. Finally, a CNN-GRU-chaos cloud quantum bat hybrid deep learning method for intelligent dam deformation prediction is proposed. Prediction studies on measured data show that, compared with the baseline models, the proposed method achieves more accurate predictions, and the chaos cloud quantum bat algorithm finds better hyperparameter combinations.
Keywords: dam deformation prediction; convolutional neural network; gated recurrent unit; bat algorithm; quantum mechanics; chaos theory; nonlinear dynamical system simulation and prediction; deep learning
20. An Intelligent Audio Classification Method Based on RNN and K-means
Authors: 胡彦红, 范凯燕 《电声技术》 2024, No. 11, pp. 24-26 (3 pages)
For the audio signal classification problem, an intelligent audio classification method combining recurrent neural networks (RNNs) and the K-means clustering algorithm is proposed. The method extracts time-series features of audio signals with an RNN model and clusters the audio features with K-means, enhancing the robustness and comprehensiveness of audio classification. The method is evaluated on the UrbanSound8K dataset. Results show that it outperforms a standard RNN model on accuracy, recall, F1 score, and other metrics.
Keywords: recurrent neural network (RNN); K-means clustering algorithm; audio classification; machine learning
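The clustering half of the method can be sketched with a bare K-means loop over fixed-length feature vectors. Here the features are synthetic Gaussian blobs standing in for RNN-extracted audio features, and the deterministic strided initialization is a simplification of the usual random seeding:

```python
import numpy as np

def kmeans(X, k=2, iters=20):
    # Evenly strided initial centers: a deterministic simplification of
    # random initialization, adequate for this illustrative sketch.
    centers = X[:: max(len(X) // k, 1)][:k].astype(float).copy()
    for _ in range(iters):
        # Assign each feature vector to its nearest center...
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        # ...then move each center to the mean of its members.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(0, 0.3, (20, 4)),   # e.g. one audio class
                   rng.normal(3, 0.3, (20, 4))])  # e.g. a second class
labels, centers = kmeans(feats)
```

In the full pipeline the cluster assignments would be combined with the RNN's predictions rather than used as final labels.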