Fund: Supported by National Key R&D Program of China (Grant No. 2022YFB2503203) and National Natural Science Foundation of China (Grant No. U1964206).
Abstract: Decision-making of connected and automated vehicles (CAVs) comprises a sequence of driving maneuvers that improve safety and efficiency, and is characterized by complex scenarios, strong uncertainty, and stringent real-time requirements. Deep reinforcement learning (DRL) offers excellent real-time decision-making capability, adaptability to complex scenarios, and generalization ability. However, it is difficult to guarantee complete driving safety and efficiency under the constraints of training samples and costs. This paper proposes a Mixture of Experts (MoE) method based on Soft Actor-Critic (SAC), in which an upper-level discriminator dynamically decides whether to activate the lower-level DRL expert or the heuristic expert based on the features of the input state. To further enhance the performance of the DRL expert, a buffer zone is introduced into the reward function, preemptively applying penalties before insecure situations occur. To minimize collision and off-road rates, the heuristic expert is designed around the Intelligent Driver Model (IDM) and the Minimizing Overall Braking Induced by Lane changes (MOBIL) strategy. Finally, in tests on typical simulation scenarios, the MoE method achieves a 13.75% improvement in driving efficiency over a traditional DRL method with continuous action space, while ensuring high safety with zero collision and zero off-road rates and maintaining high adaptability.
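The two mechanisms the abstract describes can be summarized in a short sketch: a gating function that routes each state to the SAC expert or the IDM/MOBIL heuristic, and a reward penalty that ramps up inside a safety buffer zone before a truly insecure situation occurs. The time-to-collision thresholds, the 0.5 gating cutoff, and all names below are illustrative assumptions, not the paper's published design.

```python
import numpy as np

# Hypothetical thresholds; the paper does not publish its exact gating rule.
TTC_BUFFER = 3.0   # time-to-collision (s) at which the buffer zone begins
TTC_UNSAFE = 1.5   # time-to-collision (s) treated as insecure

def buffer_zone_penalty(ttc: float, k: float = 1.0) -> float:
    """Preemptive penalty applied inside the buffer zone,
    before an actually insecure situation occurs."""
    if ttc >= TTC_BUFFER:
        return 0.0                      # safe: no penalty
    if ttc <= TTC_UNSAFE:
        return -1.0                     # insecure: full penalty
    # linear ramp between the buffer boundary and the unsafe boundary
    return -k * (TTC_BUFFER - ttc) / (TTC_BUFFER - TTC_UNSAFE)

class MoEPolicy:
    """Upper-level discriminator gating a DRL expert and a heuristic expert."""
    def __init__(self, discriminator, sac_expert, heuristic_expert):
        self.discriminator = discriminator        # state -> probability of using DRL
        self.sac_expert = sac_expert              # trained SAC policy
        self.heuristic_expert = heuristic_expert  # IDM + MOBIL controller

    def act(self, state: np.ndarray) -> np.ndarray:
        if self.discriminator(state) > 0.5:       # state features look nominal
            return self.sac_expert(state)
        return self.heuristic_expert(state)       # fall back to IDM/MOBIL
```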
Fund: Supported by National Key Research and Development Program of China (No. 2018AAA0103003), National Natural Science Foundation of China (No. 61773378), Basic Research Program (No. JCKY*******B029), and Strategic Priority Research Program of Chinese Academy of Sciences (No. XDB32050100).
Abstract: Reinforcement learning (RL) algorithms have been demonstrated to solve a variety of continuous control tasks. However, the training efficiency and performance of such methods limit their further application. In this paper, we propose an off-policy heterogeneous actor-critic (HAC) algorithm that contains a soft Q-function and an ordinary Q-function. The soft Q-function encourages exploration by a Gaussian policy, while the ordinary Q-function optimizes the mean of the Gaussian policy to improve training efficiency. Experience replay memory is another vital component of off-policy RL methods, and we propose a new sampling technique that emphasizes recently experienced transitions to boost policy training. In addition, we integrate HAC with hindsight experience replay (HER) to handle sparse-reward tasks, which are common in the robotic manipulation domain. Finally, we evaluate our methods on a series of continuous control benchmark tasks and robotic manipulation tasks. The experimental results show that our method outperforms prior state-of-the-art methods in terms of training efficiency and performance, validating its effectiveness.
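A minimal sketch of the recency-emphasized replay idea: transitions are sampled with probability that decays geometrically with their age in the buffer, so recent experience is drawn more often. The geometric schedule and the decay constant are assumptions for illustration; the paper's exact emphasis scheme may differ.

```python
import numpy as np

class RecencyReplayBuffer:
    """Replay memory that samples recent transitions with higher probability."""
    def __init__(self, capacity: int, decay: float = 0.999):
        self.capacity = capacity
        self.decay = decay      # a transition's weight shrinks by this factor per step of age
        self.buffer = []
        self.pos = 0            # next slot to (over)write

    def push(self, transition):
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
        else:
            self.buffer[self.pos] = transition
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size: int):
        n = len(self.buffer)
        # age 0 = most recently written slot, counting backwards circularly
        ages = (self.pos - 1 - np.arange(n)) % n
        weights = self.decay ** ages
        probs = weights / weights.sum()
        idx = np.random.choice(n, size=batch_size, p=probs)
        return [self.buffer[i] for i in idx]
```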
Fund: Supported by the National Natural Science Foundation of China (No. 52177085).
Abstract: Peer-to-peer (P2P) energy trading in active distribution networks (ADNs) plays a pivotal role in promoting the efficient consumption of renewable energy sources. However, it is challenging to coordinate the power dispatch of ADNs with P2P energy trading while preserving the privacy of the different physical interests involved. Hence, this paper proposes a soft actor-critic algorithm incorporating distributed trading control (SAC-DTC) to tackle the optimal power dispatch of ADNs and P2P energy trading with privacy preservation among prosumers. First, the soft actor-critic (SAC) algorithm is used to optimize the control strategies of devices in the ADN to minimize the operation cost, and the primary environmental information of the ADN at this point is published to prosumers. Then, a distributed generalized fast dual ascent method is used to iterate the trading process of prosumers and maximize their revenues. Subsequently, the trading results are protected with the differential privacy technique and returned to the ADN. Finally, the social welfare value, consisting of the ADN operation cost and the P2P market revenue, is used as the reward to update the network parameters and control strategies of the deep reinforcement learning agent. Simulation results show that the proposed SAC-DTC algorithm reduces the ADN operation cost, boosts the P2P market revenue, maximizes social welfare, and exhibits high computational accuracy, demonstrating its practical applicability to the operation of power systems and power markets.
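The trading-and-privacy step can be illustrated with a simplified loop: a plain dual-ascent price update clears the P2P market, and the resulting quantities are perturbed with the Laplace mechanism (the standard epsilon-differential-privacy device) before being returned to the ADN. The paper uses a distributed *generalized fast* dual ascent method; the plain single-price loop, the prosumer interface, and all parameter values below are simplifying assumptions.

```python
import numpy as np

def clear_p2p_market(prosumers, alpha=0.05, iters=200,
                     epsilon=1.0, sensitivity=1.0):
    """Simplified dual-ascent market clearing plus Laplace perturbation.

    Each prosumer is modeled as a callable: price -> net quantity
    offered (+) or demanded (-) at that price.
    """
    price = 0.0
    for _ in range(iters):
        quantities = np.array([p(price) for p in prosumers])
        imbalance = quantities.sum()     # supply minus demand
        price -= alpha * imbalance       # dual (price) update: oversupply lowers the price
    # epsilon-DP Laplace mechanism on the traded quantities returned to the ADN
    noise = np.random.laplace(0.0, sensitivity / epsilon, size=quantities.shape)
    return price, quantities + noise
```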
Abstract: Building integrated energy systems (BIESs), which account for a significant proportion of global energy consumption, are pivotal for enhancing energy efficiency. Two key barriers to BIES operational efficiency lie in renewable generation uncertainty and the operational non-convexity of combined heat and power (CHP) units. To this end, this paper proposes a soft actor-critic (SAC) algorithm to solve the BIES scheduling problem, which overcomes the model non-convexity and shows advantages in robustness and generalization. The paper also adopts a temporal fusion transformer (TFT) to enhance the SAC algorithm's solution by forecasting renewable generation and energy demand. The TFT can effectively capture complex temporal patterns and dependencies that span multiple steps. Furthermore, its forecasting results are interpretable owing to a self-attention layer, supporting more trustworthy decision-making in the SAC algorithm. The proposed hybrid data-driven approach integrating the TFT and SAC algorithms, i.e., the TFT-SAC approach, is trained and tested on a real-world dataset to validate its superior performance in reducing energy cost and computational time compared with benchmark approaches. The generalization performance of the scheduling policy, as well as a sensitivity analysis, is examined in the case studies.
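One natural way to couple the two components is to augment the SAC agent's observation with the TFT's multi-step forecasts, so the scheduler acts on expected rather than only current conditions. The interface below and the 24-step horizon are illustrative assumptions; `tft_forecaster` stands in for a trained temporal fusion transformer.

```python
import numpy as np

def build_sac_state(current_obs: np.ndarray,
                    history: np.ndarray,
                    tft_forecaster,
                    horizon: int = 24) -> np.ndarray:
    """Concatenate the raw BIES observation with TFT forecasts of
    renewable generation and energy demand, forming the SAC state."""
    forecasts = tft_forecaster(history, horizon)   # shape: (horizon, n_series)
    return np.concatenate([current_obs, forecasts.ravel()])
```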
Abstract: Massive multiple-input multiple-output (Massive MIMO) antenna arrays, a core wireless technology of fifth-generation mobile communications (5G), enable enhanced multi-beam spatial coverage. However, the high radio-frequency energy consumption of 5G Massive MIMO multi-beam operation, beam collisions, and increased interference degrade 5G network energy efficiency and raise operating costs. A three-dimensional digital-twin grid is constructed from 3D digital maps, base station engineering parameters, terminal-reported Measurement Report/Minimization of Drive Test (MR/MDT) data, and user/traffic distributions. A convolutional long short-term memory (Conv-LSTM) algorithm analyzes and predicts the user and traffic distributions within the grid, and an Actor-Critic architecture evaluates 5G beam configuration and optimization strategies. This achieves optimal 5G beam energy efficiency across different scenarios and time periods, intelligently adapts to the tidal effect of 5G network traffic, and realizes the goal of "the network following the service".
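The spatial-prediction component rests on the Conv-LSTM cell, which replaces the LSTM's matrix multiplications with convolutions so hidden state and memory are 2D maps over the digital-twin grid. A minimal PyTorch sketch follows; the channel sizes and single-cell scope are illustrative assumptions about how such a predictor would be built.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """A single Conv-LSTM cell for predicting user/traffic distribution
    maps over the digital-twin grid."""
    def __init__(self, in_ch: int, hid_ch: int, k: int = 3):
        super().__init__()
        # one convolution produces all four gates at once
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)
        self.hid_ch = hid_ch

    def forward(self, x, state):
        h, c = state                     # hidden map and cell (memory) map
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)
```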