Journal Articles
111 articles found
1. A New Reinforcement Learning Method for Fuzzy Logic Controllers
Authors: 王直杰, 方建安, 邵世煌. 《Journal of China Textile University (English Edition)》, EI CAS, 1998(2): 42-45.
Abstract: A new reinforcement method for learning fuzzy logic controllers is proposed. The reinforcement learning scheme is composed of two fuzzy logic rule bases: one acts as an adaptive critic, the other serves as a controller. The proposed method is tested on the cart-pole system. Simulation results show that the method has better learning performance than Anderson's neural network-based method.
Keywords: reinforcement learning; fuzzy logic control
2. Inverse Reinforcement Learning Optimal Control for Takagi-Sugeno Fuzzy Systems
Authors: Wenting SONG, Shaocheng TONG. 《Artificial Intelligence Science and Engineering》, 2025(2): 134-146.
Abstract: Inverse reinforcement learning optimal control operates under a learner-expert framework: the learner system imitates the expert system's demonstrated behaviors and does not require a predefined cost function, so it can handle optimal control problems effectively. This paper proposes an inverse reinforcement learning optimal control method for Takagi-Sugeno (T-S) fuzzy systems. Based on the learner system, an expert system is constructed, where the learner knows only the expert's optimal control policy. To reconstruct the unknown cost function, a model-based inverse reinforcement learning algorithm is first developed for the case where the system dynamics are known. The developed algorithm consists of two learning stages: an inner reinforcement learning loop, which obtains the optimal control policy from the learner's cost function, and an outer inverse optimal control loop, which updates the learner's state-penalty matrices using only the expert's optimal control policy. Then, to remove the requirement that the system dynamics be known, a data-driven integral learning algorithm is presented. Both algorithms are proved to be convergent, and the developed scheme ensures that the controlled fuzzy learner system is asymptotically stable. Finally, the proposed fuzzy optimal control is applied to a truck-trailer system, and computer simulation results verify the effectiveness of the approach.
Keywords: Takagi-Sugeno fuzzy systems; learner-expert framework; inverse reinforcement learning; optimal control
3. Reinforcement Learning Fuzzy Adaptive Control of an Inverted Pendulum (Cited by 1)
Authors: 廉自生, 孟巧荣. 《太原理工大学学报》 (Journal of Taiyuan University of Technology), CAS, PKU Core, 2005(4): 405-408.
Abstract: A mathematical model of a single inverted pendulum system is established from the Lagrange equations, a controller for the pendulum is designed with a fuzzy adaptive control algorithm, and the model and controller are combined in Matlab's simulation modules to study the closed-loop pendulum control system. The results show that for nonlinear, unstable systems with strict real-time requirements, the fuzzy adaptive control algorithm can adjust the control parameters online according to the control requirements and achieve good control performance within the shortest settling time.
Keywords: single inverted pendulum; reinforcement learning; fuzzy adaptive control
4. Self-learning Fuzzy Controllers Based on a Real-time Reinforcement Genetic Algorithm
Authors: 方建安, 苗清影, 郭钊侠, 邵世煌. 《Journal of Donghua University (English Edition)》, EI CAS, 2002(2): 19-22.
Abstract: This paper presents a novel method for constructing fuzzy controllers based on a real-time reinforcement genetic algorithm. The methodology introduces the real-time learning capability of neural networks into the global search process of the genetic algorithm, aiming to enhance its convergence rate and real-time learning ability; the algorithm is then used to construct fuzzy controllers for complex dynamic systems without any knowledge of the system dynamics or prior control experience. The cart-pole system is employed as a test bed to demonstrate the effectiveness of the proposed control scheme and the robustness of the acquired fuzzy controller.
Keywords: fuzzy controller; self-learning; real-time reinforcement; genetic algorithm
5. A Combined Reinforcement Learning and Sliding Mode Control Scheme for Grid Integration of a PV System (Cited by 8)
Authors: Aurobinda Bag, Bidyadhar Subudhi, Pravat Kumar Ray. 《CSEE Journal of Power and Energy Systems》, SCIE CSCD, 2019(4): 498-506.
Abstract: The paper presents a reinforcement learning (RL) and sliding mode control (SMC) algorithm for a three-phase PV system integrated into a grid. The PV system is connected to the grid through a voltage source inverter (VSI); the PV-VSI combination supplies active power and compensates the reactive power of the local nonlinear load connected at the point of common coupling (PCC). For extraction of maximum power from the PV panel, an RL-based maximum power point tracking (MPPT) algorithm is developed. Instantaneous power theory (IPT) is adopted for generating the reference inverter current (RIC), and an SMC algorithm is developed to inject current into the local nonlinear load at the reference value. The RL-SMC scheme is implemented both in simulation using MATLAB/SIMULINK and on a prototype PV experimental setup. Its performance is compared with that of fuzzy logic-sliding mode control (FL-SMC) and incremental conductance-sliding mode control (IC-SMC) algorithms; the results show that the proposed RL-SMC scheme provides better maximum power extraction and active power control than both.
Keywords: fuzzy logic; incremental conductance; instantaneous power theory; reinforcement learning; sliding mode controller
6. Reactive Fuzzy Controller Design by Q-learning for Mobile Robot Navigation (Cited by 5)
Authors: 张文志, 吕恬生. 《Journal of Harbin Institute of Technology (New Series)》, EI CAS, 2005(3): 319-324.
Abstract: This paper proposes a learning mechanism for designing the reactive fuzzy controller of a mobile robot navigating in unknown environments. The fuzzy logic controller is constructed based on the kinematics model of a real robot. The approach of learning the fuzzy rule base with relatively simple and computationally light Q-learning is described in detail. After analyzing the credit assignment problem caused by rule collisions, a remedy is presented, and time-varying parameters are used to increase the learning speed. Simulation results prove that the mechanism can learn fuzzy navigation rules successfully using only a scalar reinforcement signal, and the learned rule base is shown to be correct and feasible on real robot platforms.
Keywords: fuzzy logic; reinforcement learning; Q-learning; mobile robot; navigation
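The abstract above learns the consequents of a fuzzy rule base with tabular Q-learning. A minimal sketch of that idea, with the state space collapsed to fired rules and all constants (learning rate, discount, reward signal) chosen for illustration rather than taken from the paper:

```python
import random

# One Q-value row per fuzzy rule: Q[rule][action].
# Actions are candidate rule consequents (e.g., steering commands).
N_RULES, ACTIONS = 4, [-1.0, 0.0, 1.0]
Q = [[0.0] * len(ACTIONS) for _ in range(N_RULES)]
alpha, gamma, eps = 0.5, 0.9, 0.1

def select(rule):
    """Epsilon-greedy choice of a consequent for one fired rule."""
    if random.random() < eps:
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: Q[rule][a])

def update(rule, a, reward, next_rule):
    """Standard tabular TD(0) update on the rule's Q-values."""
    td_target = reward + gamma * max(Q[next_rule])
    Q[rule][a] += alpha * (td_target - Q[rule][a])

# Toy task: in rule 0, only the consequent at index 2 is rewarded.
random.seed(0)
for a in range(len(ACTIONS)):          # try each consequent once
    update(0, a, 1.0 if a == 2 else 0.0, next_rule=0)
for _ in range(200):
    a = select(0)
    update(0, a, 1.0 if a == 2 else 0.0, next_rule=0)
best = max(range(len(ACTIONS)), key=lambda a: Q[0][a])
print(ACTIONS[best])  # 1.0, the learned consequent
```

In the paper a remedy for rule collisions and time-varying parameters are layered on top of this basic update; neither is shown here.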
7. Safe and Efficient DRL Driving Policies Using Fuzzy Logic for Urban Lane Changing Scenarios
Authors: Ling Han, Xiangyu Ma, Yiren Wang, Lei He, Yipeng Li, Lele Zhang, Qiang Yi. 《Journal of Intelligent and Connected Vehicles》, 2025(1): 59-71.
Abstract: Lane changing is common in driving, and given the complexity of the process, the possibility of a traffic accident during a lane change is high. One of the primary objectives of intelligent driving is to make a vehicle's behavior more similar to that of a real driver. This study proposes a decision-making framework based on deep reinforcement learning (DRL) for lane-changing scenarios, which seeks a driving strategy that simultaneously considers the expected lane-changing risks and gains. First, a fuzzy logic lane-changing controller is designed: from the relevant driving parameters it outputs the corresponding safety and lane-change gain weights. Second, the obtained weights are incorporated into the constructed DRL reward function, and the model parameters are designed and trained on the basis of lane-changing behavior. Finally, experiments in a simulator evaluate the developed algorithm in urban scenarios; to visualize and validate the estimated driving intentions, lane-changing strategies are tested under four scenarios. The results show an average improvement in travel efficiency of 19% across the four scenarios, while the average accident rate increased by only 4%. Combining fuzzy logic with the DRL reward function personifies the lane-changing behavior of intelligent driving: compared with conservative strategies that prioritize only safety, this method considerably improves the number of lane changes and the travel efficiency of autonomous vehicles (AVs) while still ensuring safety, providing an effective and explainable method for intelligent-driving lane-changing behavior.
Keywords: autonomous vehicles (AVs); decision-making; fuzzy logic; lane change; reinforcement learning
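The framework above feeds fuzzy-derived safety and gain weights into the DRL reward. A toy sketch of how such a composite reward could be formed; the membership breakpoints, weight formulas, and penalty values are hypothetical, not from the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_weights(gap_m, rel_speed_mps):
    """Map driving parameters to (safety, gain) weights in [0, 1].
    Breakpoints here are illustrative, not from the paper."""
    risk = max(tri(gap_m, 0, 0, 20), tri(rel_speed_mps, 0, 10, 10))
    w_safe = min(1.0, 0.5 + 0.5 * risk)   # more risk -> weigh safety more
    w_gain = 1.0 - 0.5 * risk             # more risk -> discount the gain
    return w_safe, w_gain

def reward(gap_m, rel_speed_mps, collision, speed_gain):
    """Composite DRL reward: fuzzy weights blend a safety penalty
    and an efficiency gain term."""
    w_safe, w_gain = fuzzy_weights(gap_m, rel_speed_mps)
    r_safety = -10.0 if collision else 0.0
    return w_safe * r_safety + w_gain * speed_gain

print(reward(30.0, 0.0, False, 2.0))  # 2.0: large gap, no risk discount
print(reward(5.0, 9.0, True, 2.0) < 0)  # True: close gap + collision dominates
```

The point of the construction is that the same efficiency term is worth less, and the safety penalty more, as the fuzzy risk estimate rises.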
8. Fuzzy Control of Chaotic System with Genetic Algorithm
Authors: 方建安, 郭钊侠, 邵世煌. 《Journal of Donghua University (English Edition)》, EI CAS, 2002(3): 58-62.
Abstract: A novel approach to control the unpredictable behavior of chaotic systems is presented. The control algorithm is based on fuzzy logic control combined with a genetic algorithm. The use of fuzzy logic allows a human "rule-of-thumb" approach to decision making to be implemented through linguistic variables. An improved genetic algorithm (GA) is used to learn to optimally select the fuzzy membership functions of the linguistic labels in the condition part of each rule and to automatically generate fuzzy control actions under each condition. Simulation results show that the approach to controlling chaotic systems is both effective and robust.
Keywords: fuzzy control; chaotic system; genetic algorithm; reinforcement learning
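The abstract describes a GA that tunes the membership functions of a fuzzy controller's linguistic labels. A minimal illustration under simplifying assumptions (one input, three triangular labels with fixed consequents, a reference control law u = -x, and mutation-plus-elitism only; none of these details come from the paper):

```python
import random

def tri(x, a, b, c):
    # Triangular membership with peak b on support [a, c].
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def controller(x, centers):
    """One-input fuzzy controller: three rules with fixed consequents
    (-1, 0, 1) and tunable membership-function centers."""
    mems = [tri(x, c - 1.0, c, c + 1.0) for c in centers]
    s = sum(mems)
    return sum(m * u for m, u in zip(mems, (-1.0, 0.0, 1.0))) / s if s else 0.0

def fitness(centers, samples):
    # Negative squared error against the reference law u = -x.
    return -sum((controller(x, centers) + x) ** 2 for x in samples)

random.seed(1)
samples = [i / 10 for i in range(-10, 11)]
pop = [[random.uniform(-2, 2) for _ in range(3)] for _ in range(20)]
f0 = max(fitness(c, samples) for c in pop)   # best fitness before evolution
for _ in range(60):                          # generations
    pop.sort(key=lambda c: fitness(c, samples), reverse=True)
    elite = pop[:5]                          # elitism: keep the top 5
    pop = elite + [[g + random.gauss(0, 0.2) for g in random.choice(elite)]
                   for _ in range(15)]       # Gaussian-mutated offspring
best = max(pop, key=lambda c: fitness(c, samples))
print(fitness(best, samples) >= f0)  # True: elitism never loses ground
```

With consequents (-1, 0, 1), the GA should drive the centers toward roughly (1, 0, -1), where each label fires exactly where its consequent matches u = -x.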
9. Typical Architectures for Fuzzy Control
Authors: Cai Zixing (Center for Intelligent Control, Central South University of Technology, Changsha 410083, China). 《Journal of Central South University》, SCIE EI CAS, 1997(2): 132-136.
Abstract: Some typical structural schemes of fuzzy control are surveyed. Besides the general structure of the fuzzy logic controller (FLC), the schemes include the PID fuzzy controller, self-organizing fuzzy controller, self-tuning fuzzy controller, self-learning fuzzy controller, and expert fuzzy controller. The survey focuses on the control principle and provides a basis for potential applications. Most of the structures have been used in various control fields; one application area is the metallurgy industry, e.g., temperature control of electric furnaces and control of the aluminum smelting process. According to the application requirements, one can choose a structural scheme for a specific use.
Keywords: fuzzy logic controller (FLC); self-organizing; self-tuning; self-learning; expert system
10. A Vision-based Robotic Navigation Method Using an Evolutionary and Fuzzy Q-Learning Approach
Authors: Roberto Cuesta-Solano, Ernesto Moya-Albor, Jorge Brieva, Hiram Ponce. 《Journal of Artificial Intelligence and Technology》, 2024(4): 363-369.
Abstract: The paper presents a fuzzy Q-learning (FQL) and optical-flow-based autonomous navigation approach. The FQL method makes decisions in an unknown environment, without mapping, using motion information and a reinforcement signal fed into an evolutionary algorithm. The reinforcement signal is calculated by estimating the optical flow densities in areas of the camera image to determine whether they are "dense" or "thin", which correlates with the proximity of objects. The results show that the approach improves the learning rate compared with a method using a simple reward system and no evolutionary component. The proposed system was implemented on a virtual robot in the CoppeliaSim software, communicating with Python.
Keywords: CoppeliaSim; evolutionary algorithm; fuzzy Q-learning; optical flow; reinforcement learning; vision-based control; navigation
11. RSOFCPN: Control System Structure and Algorithm Design
Authors: 马勇, 杨煜普, 张卫东, 许晓鸣. 《Journal of Shanghai Jiaotong University (Science)》, EI, 2000(2): 57-61.
Abstract: A stable control scheme for a class of unknown nonlinear systems is presented. The control architecture is composed of two parts: a fuzzy sliding mode controller (FSMC) drives the state to a designed switching hyperplane, and a reinforcement self-organizing fuzzy CPN (RSOFCPN), acting as a feedforward compensator, reduces the influence of system uncertainties. Simulation results demonstrate the effectiveness of the proposed control scheme.
Keywords: nonlinear systems; fuzzy sliding mode control; self-organizing CPN; reinforcement learning
12. Software-Defined Intelligent Control System
Authors: 柴天佑, 郑锐, 贾瑶, 黄新宇, 郑秀萍, 李智. 《自动化学报》 (Acta Automatica Sinica), PKU Core, 2025(10): 2232-2244.
Abstract: To address the difficulty of optimally tuning the PID of programmable logic controllers (PLCs) and virtual PLCs, a model-free online self-optimizing PID tuning algorithm is proposed that combines modeling, control, optimization, deep learning, and reinforcement learning. Combining industrial cloud and edge computing and a software-defined dual-channel communication architecture with real-time and reliability guarantees with the proposed PID tuning algorithm, a cloud-edge collaborative software-defined intelligent control system is proposed. The cloud side is an intelligent-control software development platform based on cloud servers; the edge side is intelligent control software based on industrial servers, comprising a virtual PLC PID, PID pre-optimized tuning, a digital twin of the control process, online self-optimizing tuning, and an adaptive switching mechanism. On the developed experimental platform for software-defined intelligent control systems, simulation and physical comparison experiments were conducted between the proposed system and the PID control systems of advanced foreign PLCs and industrial PCs with model-free tuning software. The results show that the proposed software-defined intelligent control system can self-optimize the controller parameters, and its control performance is significantly better than that of the foreign model-free-tuning PID control systems.
Keywords: deep learning; reinforcement learning; digital twin; programmable logic controller; software-defined intelligent control
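The entry above centers on model-free online PID self-tuning. As a much-simplified stand-in for its deep/reinforcement learning machinery, the sketch below tunes PID gains by accepting only random perturbations that reduce a measured episode cost on a toy first-order plant; the plant, cost, and tuner details are all illustrative assumptions:

```python
import random

def run_episode(kp, ki, kd, steps=200, dt=0.05):
    """Step response of a first-order plant y' = -y + u under PID;
    returns the accumulated absolute tracking error (the cost)."""
    y, integ, prev_e, cost = 0.0, 0.0, 1.0, 0.0
    for _ in range(steps):
        e = 1.0 - y                       # setpoint is 1.0
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - prev_e) / dt
        prev_e = e
        y += (-y + u) * dt                # Euler step of the plant
        cost += abs(e) * dt
    return cost

# Model-free tuning loop: perturb the gains and keep any change that
# lowers the measured episode cost (a crude stand-in for the RL tuner).
random.seed(0)
base = run_episode(0.5, 0.0, 0.0)         # cost of the untuned start gains
gains, best = [0.5, 0.0, 0.0], base
for _ in range(300):
    cand = [max(0.0, g + random.gauss(0, 0.1)) for g in gains]
    c = run_episode(*cand)
    if c < best:
        gains, best = cand, c
print(best <= base)  # True: the tuner never accepts a worse episode
```

"Model-free" here means only the measured cost of each trial is used; no plant model enters the tuning rule itself.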
13. Reinforcement Learning Adaptive Disturbance-Rejection Control for Morphing Aircraft
Authors: 程昊宇, 张笑妍, 刘昕, 黄汉桥, 闫杰. 《宇航学报》 (Journal of Astronautics), PKU Core, 2025(5): 977-990.
Abstract: To balance safety, disturbance rejection, and optimality in the control system design of morphing aircraft, an adaptive disturbance-rejection control method based on reinforcement learning is proposed. A dynamics model of the morphing aircraft under uncertainty is first established, and a fuzzy disturbance observer is designed on the basis of fuzzy control theory, guaranteeing that the estimation errors of system uncertainties and disturbances converge to a neighborhood of the origin. To reconcile flight safety, optimality, and disturbance rejection, the optimality problem of the high-order nonlinear system is transformed into an optimal design problem for the control inputs of its subsystems, and the Hamilton-Jacobi-Bellman equation is solved within a reinforcement learning framework. To handle the system nonlinearities that make the equation difficult to solve, a neural-network-based actor-critic strategy is designed: in the backstepping design, the actor network generates the control input, the critic network evaluates the control performance, and stability analysis based on the barrier Lyapunov function method guarantees the stability, optimality, and state constraints of the system. Finally, simulations verify the effectiveness of the proposed method.
Keywords: morphing aircraft; reinforcement learning; adaptive control; fuzzy disturbance observer; optimal disturbance rejection
14. Pursuit-Evasion Games Based on Fuzzy Reinforcement Learning and Model Predictive Control
Authors: 胡鹏林, 潘泉, 赵春晖. 《控制与决策》 (Control and Decision), PKU Core, 2025(6): 1855-1865.
Abstract: For strategy design and robust control in three-dimensional pursuit-evasion games between agents, a hierarchical pursuit-evasion framework based on fuzzy reinforcement learning and model predictive control (MPC) is proposed. The framework combines the Apollonius circle in 3D space with the fuzzy actor-critic learning (FACL) algorithm to obtain the agents' motion information, which is then used as the reference input of the MPC algorithm to design the quadrotor UAV controller. By decoupling the underactuated quadrotor model, altitude, translation, and attitude controllers that include an integral term of the error system are designed. The reference information provided by FACL effectively improves the control efficiency of the MPC algorithm. Simulation and experimental results show that the designed hierarchical framework solves the 3D pursuit-evasion problem well.
Keywords: 3D pursuit-evasion game; Apollonius circle; fuzzy reinforcement learning; model predictive control
15. Furnace Temperature Control in the MSWI Process Using IT2FBLS Reinforcement-Learning PID
Authors: 田昊, 汤健, 夏恒, 王天峥, 余文, 乔俊飞. 《自动化学报》 (Acta Automatica Sinica), PKU Core, 2025(7): 1626-1641.
Abstract: The inherent nonlinearity, time-varying behavior, and uncertainty of the municipal solid waste incineration (MSWI) process force domain experts to control furnace temperature through frequent manual intervention based on experience. To emulate this expert adaptation, a reinforcement-learning-based proportional-integral-derivative (PID) self-tuning control strategy is proposed, in which an interval type-2 fuzzy broad learning system (IT2FBLS) with a sharing mechanism fits the actor-critic network (ACN) to optimize the PID parameters. First, the shared-mechanism IT2FBLS is used to fit the ACN, overcoming the uncertainty of the incineration process, reducing computational cost, and keeping the network structure compact. Then, the ACN parameters are updated with a gradient descent method based on the temporal-difference error to achieve fast learning. Finally, the convergence of the actor-critic algorithm and the stability of the control process are proved via the Lyapunov method. Simulations on actual operating data of the MSWI process verify the effectiveness of the method.
Keywords: municipal solid waste incineration; furnace temperature control; reinforcement learning; interval type-2 fuzzy broad learning system; actor-critic network; sharing mechanism; PID parameter optimization
16. Control Logic Routing for Continuous-Flow Microfluidic Biochips Based on Deep Reinforcement Learning
Authors: 蔡华洋, 黄兴, 刘耿耿. 《计算机研究与发展》 (Journal of Computer Research and Development), PKU Core, 2025(4): 950-962.
Abstract: With the rapid development of electronic design automation, continuous-flow microfluidic biochips have become one of the most promising biochemical experiment platforms. Such chips manipulate fluid samples of only milliliter or nanoliter volume through internal microvalves and microchannels, automatically executing basic biochemical operations such as mixing and detection. To realize correct assay functions, the microvalves deployed inside the chip are usually managed by multiplexer-based control logic, which receives control signals from the core inputs through control channels for precise switching. Since biochemical reactions usually require very high sensitivity, the length of the control path connecting each valve should be minimized to reduce signal transmission delay; in addition, to reduce fabrication cost, effectively reducing the total channel length in the control logic is another key problem in logic architecture design. To address these problems, a control logic routing algorithm based on deep reinforcement learning is proposed to minimize signal transmission delay and total control channel length, automatically constructing an efficient control channel network. The algorithm uses a dueling deep Q-network as the agent of the deep reinforcement learning framework to trade off signal transmission delay against total channel length. Furthermore, diagonal channel routing is realized for control logic for the first time, fundamentally improving the efficiency of valve-switching operations and lowering fabrication cost. Experimental results show that the proposed algorithm effectively constructs high-performance, low-cost control logic architectures.
Keywords: continuous-flow microfluidic biochips; deep reinforcement learning; control logic; control channel network; diagonal channel routing
17. Traffic Signal Control Based on Traffic Flow Prediction
Authors: 付韵竹, 孙海义, 吴泉江, 张清晨. 《科学技术创新》 (Scientific and Technological Innovation), 2025(3): 93-96.
Abstract: As road traffic volumes keep growing, congestion is becoming more severe. This paper proposes TFPLight, a method that controls traffic signals based on traffic flow prediction: the signals can be switched in advance according to the predicted flow, and the green-light duration is adjusted according to the traffic volume. Traffic flow is first predicted and the results are applied to the traffic environment; a graph attention mechanism is added to respond to changes in the traffic environment in real time; deep Q-learning outputs the traffic-signal phase sequence, and a fuzzy logic algorithm outputs the green-light duration. The method has been tested on a large-scale road network, and the results show that the proposed model outperforms several baseline models.
Keywords: traffic signal control; deep Q-learning; fuzzy logic algorithm; graph attention mechanism
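TFPLight, as described above, uses fuzzy logic to set the green-light duration from traffic volume. A minimal sketch of such a mapping with three flow labels and singleton outputs; the breakpoints and durations are invented for illustration, not taken from the paper:

```python
def tri(x, a, b, c):
    # Triangular membership with peak b on support [a, c].
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def green_duration(flow_vph):
    """Fuzzy mapping from predicted flow (vehicles/hour) to green time (s).
    Three labels (low/medium/high flow) with singleton output times."""
    low = tri(flow_vph, -1, 0, 600)        # peaks at zero flow
    med = tri(flow_vph, 300, 700, 1100)
    high = tri(flow_vph, 800, 1400, 2001)
    if flow_vph >= 1400:
        high = 1.0                         # clip: fully 'high' past the peak
    w = low + med + high
    times = (15.0, 35.0, 60.0)             # green seconds per label
    return sum(m * t for m, t in zip((low, med, high), times)) / w

print(green_duration(0))     # 15.0 (pure 'low')
print(green_duration(700))   # 35.0 (pure 'medium')
print(green_duration(1600))  # 60.0 (saturated 'high')
```

Flows between the peaks blend two adjacent durations by membership weight, which gives the smooth adjustment with volume that the abstract describes.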
18. A Fuzzy-Satisficing Reinforcement Learning Method for Multi-Target Cooperative Search with Multiple Unmanned Surface Vehicles
Authors: 胡超芳, 朱琦. 《天津大学学报(自然科学与工程技术版)》 (Journal of Tianjin University (Science and Technology)), PKU Core, 2025(11): 1132-1144.
Abstract: Unmanned surface vehicles (USVs), with their high efficiency, low cost, and strong risk tolerance, are widely used for maritime tasks in complex environments. For multi-target cooperative search by multiple USVs in unknown waters, an improved reinforcement learning method based on fuzzy-satisficing multi-index optimization and dual experience replay buffers is proposed. A two-dimensional grid map containing two information indices, environmental awareness and target existence probability, is first constructed. To address the low training efficiency of random sampling from a single replay buffer, dual replay buffers are used to store data by category, and data are drawn from them in a time-varying ratio to improve early training speed and late-stage stability. To achieve fast target search while guaranteeing coverage of the search area and safe collision avoidance between USVs, three reward functions are designed: change in target existence probability, environmental search coverage, and USV distribution distance. To satisfy the priority ordering of the three reward functions, they are remodeled with a fuzzy multi-index optimization method based on relaxed priority satisfaction, yielding an improved fuzzy-satisficing D3QN algorithm. Simulations verify the effectiveness of the algorithm and its applicability to search tasks with different numbers of targets. Furthermore, considering the influence of actual low-level control errors on the high-level search algorithm, the proposed fuzzy-satisficing reinforcement learning method is used as the upper-level planner combined with lower-level linear active disturbance rejection control in application simulations of multi-target cooperative search, and it is compared with other reinforcement learning methods. The results show that the proposed algorithm not only achieves fast and effective search of multiple unknown targets in the environment but also adapts to realistic control errors, outperforming the compared algorithms in search speed, environmental search coverage, and USV distribution.
Keywords: unmanned surface vehicle; cooperative search; reinforcement learning; fuzzy satisficing optimization; linear active disturbance rejection control
19. A Hybrid Adaptive Control Framework for Controllable Line-Commutated Converters Based on Deep Reinforcement Learning and Neuro-Fuzzy Systems
Authors: 周亮, 任佳丽, 张俊, 查鲲鹏, 刘虹. 《电气自动化》 (Electrical Automation), 2025(5): 47-49, 53.
Abstract: To improve the control performance of controllable line-commutated converters in complex grid environments, a hybrid adaptive control framework based on deep reinforcement learning and a neuro-fuzzy system is proposed. First, a dynamic control strategy based on the deep Q-network algorithm is designed. Second, an online learning mechanism dynamically adjusts the fuzzy rules to build an adaptive neuro-fuzzy control strategy. Finally, a cooperative mechanism of global-local dynamic regulation and two-layer learning fuses globally optimized control with local fine regulation. Simulation results show that, compared with a proportional-integral controller, a fuzzy controller, an adaptive neuro-fuzzy controller, and a deep reinforcement learning controller, the proposed hybrid adaptive control framework achieves the best response speed, energy efficiency, total harmonic distortion, and steady-state error in all test scenarios.
Keywords: controllable line-commutated converter; control strategy; deep reinforcement learning; adaptive neuro-fuzzy control; deep Q-network; online learning mechanism
20. Construction and Optimization of an Intelligent Wastewater Treatment Control System
Authors: 赵银中. 《环境保护与循环经济》 (Environmental Protection and Circular Economy), 2025(3): 82-84, 89.
Abstract: To address the high operating cost, high energy consumption, delayed monitoring, and low control precision of current wastewater treatment plants, intelligent learning algorithms such as fuzzy logic, neural networks, and genetic algorithms are used to build a multi-intelligence soft-sensing dynamic prediction model and control system. Combined with embedded configuration technology, a multi-intelligence control system for wastewater treatment is developed, enabling efficient, energy-saving, and stable operation of the wastewater treatment system.
Keywords: wastewater treatment; multi-intelligence control; deep learning; genetic algorithm; fuzzy logic; neural network