Abstract: Photovoltaic-storage DC microgrids are prone to uncertain disturbances such as photovoltaic resource fluctuations and load-side fluctuations, which cause DC bus voltage fluctuations. To address this problem, a dynamic coordination of parameters for active disturbance rejection control (DCLADRC) scheme is proposed on the basis of conventional linear active disturbance rejection control (LADRC). Two new observed variables are introduced and one additional bandwidth parameter is added, and the deep deterministic policy gradient (DDPG) algorithm is used to dynamically adjust the coordination factor k between the two bandwidth levels. This improves the observer's estimation accuracy and convergence speed under multi-frequency disturbances, strengthens the controller's disturbance rejection, and enhances bus voltage stability, so that the energy storage can better perform its "peak-shaving and valley-filling" regulation role. Physical experiments show that, after a disturbance, the proposed DCLADRC reduces the voltage deviation by 75% and 83% compared with LADRC and double closed-loop proportional-integral (Double_PI) control, respectively.
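The tuning loop described above hinges on how a DDPG action is turned into the coordination factor k and, through it, into the two bandwidths. The abstract does not give the exact mapping or bandwidth values, so the following is only a minimal sketch: it assumes the actor output is squashed into a bounded range for k, that the observer bandwidth is taken as omega_o = k * omega_c, and that the reward penalizes bus-voltage deviation. All numeric values are illustrative.

```python
import numpy as np

# Hypothetical bounds for the coordination factor k; the paper does not
# publish its exact range, so these values are illustrative only.
K_MIN, K_MAX = 2.0, 10.0
OMEGA_C = 300.0  # assumed controller bandwidth (rad/s), illustrative

def action_to_bandwidths(action: float, omega_c: float = OMEGA_C):
    """Map a DDPG actor output in [-1, 1] to the coordination factor k
    and the resulting observer bandwidth omega_o = k * omega_c."""
    a = float(np.clip(action, -1.0, 1.0))
    k = K_MIN + 0.5 * (a + 1.0) * (K_MAX - K_MIN)   # rescale to [K_MIN, K_MAX]
    omega_o = k * omega_c                           # extended state observer bandwidth
    return k, omega_o

def voltage_reward(v_bus: float, v_ref: float = 400.0, dv_penalty: float = 1.0):
    """Illustrative reward: penalize the squared bus-voltage deviation so the
    agent learns a k that damps disturbances quickly."""
    return -dv_penalty * (v_bus - v_ref) ** 2

if __name__ == "__main__":
    k, omega_o = action_to_bandwidths(0.3)
    print(f"k = {k:.2f}, omega_o = {omega_o:.1f} rad/s, "
          f"reward at 402 V = {voltage_reward(402.0):.2f}")
```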
Abstract: To address the limited computational accuracy and poor adaptability to strong disturbances of existing reentry guidance methods based on the deep deterministic policy gradient (DDPG) algorithm, a reentry guidance method based on long short-term memory DDPG (LSTM-DDPG) is proposed within the DDPG training framework. The method decouples longitudinal and lateral guidance. For longitudinal guidance, the state and action spaces required for reinforcement learning are first constructed for the reentry guidance problem; next, the decision points and the command-computation strategy within each guidance cycle are determined, and a reward function accounting for overall performance is designed; an LSTM network is then introduced into the reinforcement learning training network, and an online update strategy improves the algorithm's multi-task applicability. Lateral guidance uses a dynamic bank-reversal method based on the crossrange error to obtain the sign of the bank angle. Simulations of reentry gliding of the U.S. common aero vehicle-hypersonic (CAV-H) show that, compared with the conventional numerical predictor-corrector method, the proposed method achieves comparable terminal accuracy with higher computational efficiency; compared with existing DDPG-based reentry guidance methods, it achieves comparable computational efficiency with higher terminal accuracy and better robustness.
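The lateral channel described above only decides the sign of the bank angle, flipping it whenever the crossrange error leaves a corridor. The corridor profile, units, and sign convention below are assumptions made for illustration, not values taken from the paper.

```python
def corridor(velocity: float) -> float:
    """Hypothetical crossrange-error corridor (rad) that shrinks as the
    vehicle slows down; the real profile is mission-specific."""
    return 0.05 if velocity > 4000.0 else 0.02

def bank_sign(chi_err: float, velocity: float, prev_sign: int) -> int:
    """Dynamic bank reversal: flip the bank-angle sign only when the
    crossrange error exceeds the current corridor, otherwise keep it."""
    if abs(chi_err) > corridor(velocity):
        return -1 if chi_err > 0.0 else 1   # steer back toward the target plane
    return prev_sign

if __name__ == "__main__":
    sign = 1
    for chi_err, v in [(0.01, 6000.0), (0.06, 6000.0), (-0.03, 3000.0)]:
        sign = bank_sign(chi_err, v, sign)
        print(f"chi_err={chi_err:+.2f}, v={v:.0f} m/s -> bank sign {sign:+d}")
```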
Funding: supported by the Key Research and Development Program of Shaanxi (2022GY-089) and the Natural Science Basic Research Program of Shaanxi (2022JQ-593).
Abstract: The deep deterministic policy gradient (DDPG) algorithm is an off-policy method that combines the two mainstream reinforcement learning approaches based on value iteration and policy iteration. Using the DDPG algorithm, agents can explore and summarize the environment to make autonomous decisions in continuous state and action spaces. In this paper, a cooperative defense scheme based on DDPG for swarms of unmanned aerial vehicles (UAVs) is developed and validated, showing promising practical value for defense. We address the sparse-reward problem of reinforcement learning in a long-horizon task by constructing the reward function of the UAV swarm and optimizing the training of the artificial neural networks in the DDPG algorithm to reduce oscillation during learning. The experimental results show that the DDPG algorithm can guide the UAV swarm to perform the defense task efficiently, meeting the swarm's requirements for decentralization and autonomy, and promoting the intelligent development of UAV swarms and of their decision-making process.
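Since all of the works listed here build on the same DDPG training framework, a compact sketch of the core update may be helpful. The network sizes, learning rates, and batch contents below are illustrative assumptions; the sketch only shows the standard ingredients of DDPG that the abstract refers to, namely the critic TD update, the deterministic policy gradient step, and the soft target-network update.

```python
import torch
import torch.nn as nn

# Minimal DDPG update step for illustration; dimensions and hyperparameters
# are assumptions, not values taken from the cited papers.
STATE_DIM, ACTION_DIM, GAMMA, TAU = 8, 2, 0.99, 0.005

def mlp(in_dim, out_dim, out_act=None):
    layers = [nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim)]
    if out_act is not None:
        layers.append(out_act)
    return nn.Sequential(*layers)

actor = mlp(STATE_DIM, ACTION_DIM, nn.Tanh())
critic = mlp(STATE_DIM + ACTION_DIM, 1)
actor_t = mlp(STATE_DIM, ACTION_DIM, nn.Tanh())
critic_t = mlp(STATE_DIM + ACTION_DIM, 1)
actor_t.load_state_dict(actor.state_dict())      # target networks start as copies
critic_t.load_state_dict(critic.state_dict())
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_update(s, a, r, s2, done):
    """One off-policy update on a minibatch (s, a, r, s2, done)."""
    with torch.no_grad():                          # TD target from target networks
        q_next = critic_t(torch.cat([s2, actor_t(s2)], dim=1))
        y = r + GAMMA * (1.0 - done) * q_next
    critic_loss = nn.functional.mse_loss(critic(torch.cat([s, a], dim=1)), y)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()  # deterministic policy gradient
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    for net, tgt in ((actor, actor_t), (critic, critic_t)):       # soft target update
        for p, pt in zip(net.parameters(), tgt.parameters()):
            pt.data.mul_(1.0 - TAU).add_(TAU * p.data)

if __name__ == "__main__":
    batch = 32
    s = torch.randn(batch, STATE_DIM)
    a = torch.rand(batch, ACTION_DIM) * 2 - 1
    r = torch.randn(batch, 1)
    s2 = torch.randn(batch, STATE_DIM)
    done = torch.zeros(batch, 1)
    ddpg_update(s, a, r, s2, done)
    print("one DDPG update completed")
```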
Abstract: We propose a new architecture of truck-based mobile energy couriers (MEC) for power distribution networks with high penetration of renewable energy sources (RES). Each MEC is a truck equipped with high-density inverters, converters, capacitor banks, and energy storage devices. The MEC platform can improve the flexibility, resilience, and RES hosting capability of a distribution grid through spatial-temporal energy reallocation based on the stochastic behaviors of RES and loads. The employment of MECs necessitates the development of complex scheduling and control schemes that can adaptively cope with the dynamic natures of both the power grid and the transportation network. The problem is formulated as a non-convex optimization problem that minimizes the total generation cost, subject to the constraints imposed by conventional and renewable energy sources, energy storage, and the transportation network. The problem is solved by combining optimal power flow (OPF) with deep reinforcement learning (DRL) under the framework of deep deterministic policy gradient (DDPG). Simulation results demonstrate that the proposed MEC platform with DDPG achieves significant cost reduction compared with conventional systems using static energy storage.
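How the OPF evaluation and the DDPG agent fit together is only outlined in the abstract. The sketch below is a hypothetical environment interface in which the agent's action assigns each MEC a node and a charge/discharge power, and the reward is the negative cost returned by an OPF solve; the `run_opf` placeholder stands in for a real solver and is not part of the cited work.

```python
import numpy as np

class MECDispatchEnv:
    """Illustrative environment interface for scheduling mobile energy couriers
    (MECs): the DDPG action picks each truck's destination node and its
    charge/discharge power, and the reward is the negative cost of an OPF
    solve. `run_opf` is a hypothetical placeholder, not a real solver."""

    def __init__(self, n_nodes: int = 10, n_mecs: int = 3):
        self.n_nodes, self.n_mecs = n_nodes, n_mecs

    def run_opf(self, injections: np.ndarray) -> float:
        # Placeholder cost: a real implementation would call an OPF solver
        # with the MEC injections applied to the distribution network model.
        return float(np.sum(injections ** 2))

    def step(self, action: np.ndarray):
        # action in [-1, 1]^(2*n_mecs): first half -> node index, second half -> power (p.u.)
        nodes = ((action[: self.n_mecs] + 1.0) / 2.0 * (self.n_nodes - 1)).astype(int)
        power = action[self.n_mecs:]                 # charge (<0) / discharge (>0)
        injections = np.zeros(self.n_nodes)
        np.add.at(injections, nodes, power)          # aggregate MEC injections per node
        cost = self.run_opf(injections)
        return injections, -cost                     # next observation, reward

if __name__ == "__main__":
    env = MECDispatchEnv()
    obs, reward = env.step(np.random.uniform(-1, 1, size=6))
    print("reward:", reward)
```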
Abstract: To address the unsatisfactory trajectory-tracking performance of current exoskeleton robots, a deep deterministic policy gradient (DDPG) reinforcement learning algorithm combining prioritized experience replay and a partitioned reward (PERDA), denoted PERDA-DDPG, is proposed. The method ranks experiences by the magnitude of their temporal-difference errors (TD-errors), replacing the original uniform sampling strategy. In addition, in contrast to the binary reward functions used previously, a targeted partitioned reward is designed from the physical model. A simulation environment was built on the OpenAI Gym platform; the experimental results show that the improved algorithm converges about 9.2% faster and its learning process is more stable.
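The prioritized sampling step in PERDA-DDPG ranks transitions by the magnitude of their TD-errors. The buffer below is a minimal proportional-prioritization sketch under assumed capacity, exponent alpha, and epsilon floor; the paper's exact ranking scheme and hyperparameters are not given in the abstract.

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Minimal prioritized experience replay: transitions are sampled with
    probability proportional to |TD-error|**alpha. Capacity, alpha, and the
    epsilon floor are illustrative choices."""

    def __init__(self, capacity: int = 10000, alpha: float = 0.6, eps: float = 1e-3):
        self.capacity, self.alpha, self.eps = capacity, alpha, eps
        self.data, self.priorities, self.pos = [], [], 0

    def add(self, transition, td_error: float):
        p = (abs(td_error) + self.eps) ** self.alpha
        if len(self.data) < self.capacity:
            self.data.append(transition)
            self.priorities.append(p)
        else:                                  # overwrite the oldest entry once full
            self.data[self.pos] = transition
            self.priorities[self.pos] = p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size: int):
        probs = np.asarray(self.priorities)
        probs = probs / probs.sum()
        idx = np.random.choice(len(self.data), size=batch_size, p=probs)
        return [self.data[i] for i in idx], idx

    def update_priorities(self, idx, td_errors):
        for i, e in zip(idx, td_errors):       # refresh priorities after each update
            self.priorities[i] = (abs(e) + self.eps) ** self.alpha

if __name__ == "__main__":
    buf = PrioritizedReplayBuffer(capacity=100)
    for _ in range(50):
        buf.add(("s", "a", 0.0, "s2"), td_error=np.random.randn())
    batch, idx = buf.sample(8)
    print("sampled indices:", idx)
```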