The coordinated optimization problem of the electricity-gas-heat integrated energy system (IES) is characterized by strong coupling, non-convexity, and nonlinearity. Centralized optimization methods incur high communication costs and complex modeling, while traditional numerical iterative solutions cope poorly with uncertainty and solve too slowly to be applied online. For the coordinated optimization of the electricity-gas-heat IES, this study constructs a distributed IES model with a dynamic distribution factor and transforms the centralized optimization problem into a distributed optimization problem in a multi-agent reinforcement learning environment using the multi-agent deep deterministic policy gradient (MADDPG). Introducing the dynamic distribution factor allows the system to account for the impact of real-time supply and demand changes on system optimization, dynamically coordinating different energy sources for complementary utilization and effectively improving system economy. Compared with centralized optimization, the distributed model with multiple decision centers achieves similar results while easing the pressure on system communication. The proposed method considers the dual uncertainty of renewable energy and load during training; compared with traditional iterative solution methods, it copes better with uncertainty and enables real-time decision making, which is conducive to online application. Finally, we verify the effectiveness of the proposed method on an example IES coupling three energy hub agents.
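The abstract above does not include code; as a point of reference, a minimal sketch of the MADDPG pattern it relies on — a centralized critic over joint observations and actions with decentralized actors — might look like the following (PyTorch; class names, layer sizes, and the batch layout are illustrative assumptions, not the paper's implementation).

```python
# Minimal MADDPG update sketch (PyTorch). All names and shapes are
# illustrative assumptions, not the paper's implementation.
import torch
import torch.nn as nn

class Actor(nn.Module):
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                 nn.Linear(64, act_dim), nn.Tanh())
    def forward(self, obs):
        return self.net(obs)

class CentralCritic(nn.Module):
    # The critic sees the joint observations and actions of all agents
    # (centralized training); each actor sees only its own observation
    # (distributed execution).
    def __init__(self, joint_obs_dim, joint_act_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(joint_obs_dim + joint_act_dim, 128),
                                 nn.ReLU(), nn.Linear(128, 1))
    def forward(self, joint_obs, joint_act):
        return self.net(torch.cat([joint_obs, joint_act], dim=-1))

def critic_loss(critic, target_critic, target_actors, batch, gamma=0.99):
    # batch holds per-agent lists of tensors plus joint reward/done flags
    obs, act, rew, next_obs, done = batch
    with torch.no_grad():
        next_act = torch.cat([pi(o) for pi, o in zip(target_actors, next_obs)], -1)
        y = rew + gamma * (1 - done) * target_critic(torch.cat(next_obs, -1), next_act)
    q = critic(torch.cat(obs, -1), torch.cat(act, -1))
    return nn.functional.mse_loss(q, y)
```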
To address the problem of cooperative intelligent decision making for multiple unmanned combat vehicles in land penetration operations based on the real-time situation, a Markov decision process (MDP) model is built for the multi-agent unmanned-combat-vehicle penetration process, and a cooperative penetration decision method for multiple unmanned combat vehicles is proposed based on the multi-agent deep deterministic policy gradient (MADDPG) algorithm. To handle the mutual influence of changing agent policies during multi-agent decision making, a self-attention mechanism is introduced into the actor-critic (AC) structure of the algorithm, so that each agent, when making decisions and evaluating policies, pays more attention to the agents that influence it most; the self-attention mechanism is also used to compute a reward weight for each agent, allocating the return according to each agent's own contribution, which improves coordination among the vehicles. Finally, experiments in a simulated scenario environment verify the effectiveness of the proposed multi-vehicle cooperative penetration decision method.
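A hedged sketch of the attention-based return allocation described above — splitting a team reward among agents in proportion to self-attention weights over their state embeddings. The weight matrices wq, wk and the averaging rule are assumptions for illustration; the paper's exact formulation may differ.

```python
# Illustrative attention-based credit assignment: the team reward is split
# among agents in proportion to how strongly other agents attend to them.
import torch
import torch.nn.functional as F

def attention_reward_split(embeddings, team_reward, wq, wk):
    # embeddings: (n_agents, d) per-agent state encodings
    # wq, wk: (d, d) learned projection matrices (assumed)
    q = embeddings @ wq                            # queries (n_agents, d)
    k = embeddings @ wk                            # keys    (n_agents, d)
    scores = q @ k.t() / embeddings.shape[-1] ** 0.5
    attn = F.softmax(scores, dim=-1)               # (n_agents, n_agents)
    contribution = attn.mean(dim=0)                # attention paid to each agent
    weights = contribution / contribution.sum()    # normalized reward weights
    return weights * team_reward                   # per-agent reward shares
```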
With the rapid development of Intelligent Transportation Systems (ITS), many new applications for Intelligent Connected Vehicles (ICVs) have sprung up. To resolve the conflict between delay-sensitive applications and resource-constrained vehicles, the computation offloading paradigm, which transfers computation tasks from ICVs to edge computing nodes, has received extensive attention. However, the dynamic network conditions caused by vehicle mobility and the unbalanced computing load of edge nodes pose challenges for ITS. In this paper, we propose a heterogeneous Vehicular Edge Computing (VEC) architecture with Task Vehicles (TaVs), Service Vehicles (SeVs), and Roadside Units (RSUs), together with a distributed algorithm, PG-MRL, that jointly optimizes offloading decisions and resource allocation. In the first stage, the offloading decisions of TaVs are obtained through a potential game. In the second stage, a multi-agent Deep Deterministic Policy Gradient (DDPG), a deep reinforcement learning algorithm with centralized training and distributed execution, is proposed to optimize real-time transmission power and subchannel selection. Simulation results show that the proposed PG-MRL algorithm significantly improves system delay over baseline algorithms.
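The first-stage potential game is not spelled out in the abstract; one standard way to reach a pure Nash equilibrium in a finite potential game is best-response iteration, sketched below. cost_fn and the candidate offloading targets are hypothetical placeholders, not the paper's model.

```python
# Sketch of a first-stage potential-game step: TaVs take turns switching to
# their best-response offloading target until no vehicle wants to deviate.
from typing import Callable, List

def best_response_offloading(n_tavs: int, candidates: List[List[int]],
                             cost_fn: Callable[[List[int], int], float],
                             max_rounds: int = 100) -> List[int]:
    decision = [c[0] for c in candidates]          # start from any profile
    for _ in range(max_rounds):
        changed = False
        for i in range(n_tavs):
            best = min(candidates[i], key=lambda d: cost_fn(
                decision[:i] + [d] + decision[i + 1:], i))
            if best != decision[i]:
                decision[i], changed = best, True
        if not changed:                # no profitable deviation: in a finite
            break                      # potential game this terminates at a pure NE
    return decision
```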
With the rapid growth of connected devices, traditional edge-cloud systems are under overload pressure. Using mobile edge computing (MEC) to assist unmanned aerial vehicles (UAVs) serving as low altitude platform stations (LAPS) for communication and computation, thereby building air-ground integrated networks (AGINs), offers a promising solution for seamless network coverage of remote internet of things (IoT) devices. To address the performance demands of future mobile devices (MDs), we propose an MEC-assisted AGIN system. The goal is to minimize the long-term computational overhead of MDs by jointly optimizing transmission power, flight trajectories, resource allocation, and offloading ratios, while using non-orthogonal multiple access (NOMA) to improve device connectivity and spectral efficiency for large-scale MDs. We first design an adaptive clustering scheme based on K-Means to cluster MDs and establish communication links, improving efficiency and load balancing. Then, considering system dynamics, we introduce a partial computation offloading algorithm based on the multi-agent deep deterministic policy gradient (MADDPG), modeling the multi-UAV computation offloading problem as a Markov decision process (MDP). The algorithm optimizes resource allocation through centralized training and distributed execution, reducing computational overhead. Simulation results show that the proposed algorithm not only converges stably but also outperforms benchmark algorithms in complex scenarios with many devices.
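Under simple assumptions, the K-Means clustering step could be sketched as follows (scikit-learn); treating cluster centers as UAV hover points and fixing K externally are illustrative assumptions, since the abstract leaves the adaptive rule unspecified.

```python
# Illustrative K-Means clustering of mobile-device positions into UAV
# service clusters; not the paper's adaptive scheme.
import numpy as np
from sklearn.cluster import KMeans

def cluster_devices(positions: np.ndarray, k: int):
    # positions: (n_devices, 2) ground coordinates of MDs
    km = KMeans(n_clusters=k, n_init=10).fit(positions)
    # cluster label per MD, and cluster centers as candidate UAV hover points
    return km.labels_, km.cluster_centers_
```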
To improve the navigation capability of multi-USV (unmanned surface vehicle) formation systems, an attention-mechanism-based multi-agent deep deterministic policy gradient (ATMADDPG) algorithm is proposed. In the training stage, the algorithm learns the best policy through a large number of trials; in the experiment stage, the trained policy is used directly to obtain the best formation path. The simulation experiments use four identical "百川号" (Baichuan) USVs as the experimental platform. The results show that the formation-keeping strategy based on ATMADDPG achieves stable multi-USV formation navigation and satisfies the formation-keeping requirements to a certain extent. Compared with the multi-agent deep deterministic policy gradient (MADDPG) algorithm, the proposed ATMADDPG shows superior performance in convergence speed, formation-keeping ability, and adaptability to environmental changes, improving overall navigation efficiency by about 80%, and thus has considerable application potential.
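As an illustration of the formation-keeping objective, a toy reward that penalizes each follower's deviation from its desired offset relative to a leader is sketched below; the leader-follower layout and the linear penalty are assumptions, not the paper's reward design.

```python
# Hypothetical formation-keeping reward: each follower USV is penalized by
# its deviation from a desired offset relative to the leader.
import numpy as np

def formation_reward(leader_pos, follower_pos, desired_offsets, scale=1.0):
    # leader_pos: (2,), follower_pos: (n, 2), desired_offsets: (n, 2)
    errors = np.linalg.norm(follower_pos - (leader_pos + desired_offsets), axis=1)
    return -scale * errors          # one reward per follower agent
```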
In this paper, a day-ahead electricity market bidding problem with multiple strategic generation company (GENCO) bidders is studied. The problem is formulated as a Markov game in which GENCO bidders interact to develop their optimal day-ahead bidding strategies. Because the problem involves unobservable information, a model-free, data-driven approach, the multi-agent deep deterministic policy gradient (MADDPG), is applied to approximate the Nash equilibrium (NE) of this Markov game. The MADDPG algorithm generalizes well thanks to the automatic feature extraction of deep neural networks. The algorithm is tested on an IEEE 30-bus system with three competitive GENCO bidders in both an uncongested case and a congested case. Comparisons with a truthful bidding strategy and state-of-the-art deep reinforcement learning methods, including deep Q network and deep deterministic policy gradient (DDPG), demonstrate that MADDPG finds a superior bidding strategy for all market participants, with increased profit gains. In addition, comparison with a conventional model-based method shows that MADDPG has higher computational efficiency, making it feasible for real-world applications.
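To make the Markov-game setting concrete, here is a toy uniform-price merit-order clearing that maps per-GENCO price bids to profits (the agents' rewards). It is a gross simplification of the IEEE 30-bus market studied in the paper and ignores network congestion entirely.

```python
# Toy day-ahead market step: cheapest offers dispatch first, the marginal
# offer sets a uniform price, and per-GENCO profits serve as rewards.
import numpy as np

def clear_market(bids, capacities, demand):
    order = np.argsort(bids)                       # merit order: cheapest first
    dispatched = np.zeros_like(capacities, dtype=float)
    remaining, price = demand, bids[order[-1]]     # fallback price if short
    for i in order:
        q = min(capacities[i], remaining)
        dispatched[i], remaining = q, remaining - q
        if remaining <= 0:
            price = bids[i]                        # marginal offer sets the price
            break
    return price, dispatched

def profits(bids, capacities, demand, marginal_costs):
    price, q = clear_market(np.asarray(bids), np.asarray(capacities), demand)
    return (price - np.asarray(marginal_costs)) * q   # per-GENCO reward
```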
Unmanned aerial vehicle (UAV)-based edge computing is an emerging technology that provides fast task processing over a wide area. To address the limited computation resources of a single UAV and the finite communication resources in multi-UAV networks, this paper jointly considers task offloading and wireless channel allocation in a collaborative multi-UAV computing network, where a high altitude platform station (HAPS) serves as the relay for communication between UAV clusters consisting of UAV cluster heads (ch-UAVs) and mission UAVs (m-UAVs). We propose an algorithm that jointly optimizes task offloading and wireless channel allocation to maximize the average service success rate (ASSR) over a time period. In particular, a simulated annealing (SA) algorithm with random perturbations is used for optimal channel allocation, aiming to reduce interference and minimize transmission delay, and a multi-agent deep deterministic policy gradient (MADDPG) is proposed to obtain the best task offloading strategy. Simulation results demonstrate the effectiveness of the SA algorithm in channel allocation; moreover, when computation and channel resources are considered jointly, the proposed scheme effectively enhances the ASSR compared with other benchmark algorithms.
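The SA-with-random-perturbation channel allocation could be sketched as follows, with delay_fn standing in for the (assumed) transmission-delay objective; the cooling schedule and step count are illustrative defaults.

```python
# Sketch of simulated annealing for subchannel allocation: randomly reassign
# one link's channel, accept worse allocations with a temperature-dependent
# probability, and cool geometrically.
import math, random

def sa_channel_allocation(n_links, n_channels, delay_fn,
                          t0=1.0, cooling=0.95, steps=1000):
    alloc = [random.randrange(n_channels) for _ in range(n_links)]
    cost, t = delay_fn(alloc), t0
    for _ in range(steps):
        cand = alloc[:]                        # random perturbation of one link
        cand[random.randrange(n_links)] = random.randrange(n_channels)
        c = delay_fn(cand)
        if c < cost or random.random() < math.exp((cost - c) / max(t, 1e-9)):
            alloc, cost = cand, c              # accept better (or, rarely, worse)
        t *= cooling                           # geometric cooling schedule
    return alloc, cost
```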
The integration of distributed generations (DG), such as wind turbines and photovoltaics, has a significant impact on the security, stability, and economy of the distribution network due to the randomness and fluctuations of DG output. Dynamic distribution network reconfiguration (DNR) can mitigate this problem effectively. However, due to the non-convex and nonlinear characteristics of the DNR model, traditional mathematical optimization algorithms face speed challenges, and heuristic algorithms struggle with both speed and accuracy, hindering effective control of existing distribution networks. To address these challenges, an active distribution network dynamic reconfiguration approach based on an improved multi-agent deep deterministic policy gradient (MADDPG) is proposed. First, taking into account the uncertainties of load and DG, a dynamic DNR stochastic mathematical model is constructed. Next, the concept of fundamental loops (FLs) is defined, and a loop-coding method is adopted for the MADDPG action space. Each FL is then equipped with an agent comprising actor and critic networks to control the network topology in real time, and a MADDPG framework for dynamic DNR is constructed. Finally, simulations on an improved IEEE 33-bus power system validate the superiority of MADDPG: it requires a shorter calculation time than heuristic and mathematical optimization algorithms, which is useful for real-time control of DNR.
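A minimal sketch of the loop-coding idea: each fundamental-loop agent's action selects one switch in its loop to open, which preserves a radial topology. The FL switch sets below are hypothetical placeholders, not the IEEE 33-bus data.

```python
# Illustrative loop-coding of the DNR action space: one open switch per
# fundamental loop (FL); all other switches stay closed.
fundamental_loops = {
    "FL1": [2, 3, 4, 33],     # hypothetical switch indices in each loop
    "FL2": [5, 6, 7, 34],
}

def decode_topology(actions):
    # actions: {"FL1": 0, ...} — each agent's action indexes a switch in its FL
    open_switches = {fl: fundamental_loops[fl][a] for fl, a in actions.items()}
    return open_switches

print(decode_topology({"FL1": 3, "FL2": 0}))   # {'FL1': 33, 'FL2': 5}
```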
Funding (electricity-gas-heat IES study): supported by the National Key R&D Program of China (2020YFB0905900): research on artificial intelligence applications of the power internet of things.
Funding (vehicular edge computing study): supported by the Future Network Scientific Research Fund Project (FNSRFP-2021-ZD-4), the National Natural Science Foundation of China (Nos. 61991404 and 61902182), the National Key Research and Development Program of China under Grant 2020YFB1600104, and the Key Research and Development Plan of Jiangsu Province under Grant BE2020084-2.
Funding (MEC-assisted AGIN study): supported by the Gansu Province Key Research and Development Plan (No. 23YFGA0062) and the Gansu Provincial Innovation Fund (No. 2022A-215).
Funding (electricity market bidding study): supported in part by the US Department of Energy (DOE), Office of Electricity and Office of Energy Efficiency and Renewable Energy, under contract DE-AC05-00OR22725; in part by CURENT, an Engineering Research Center funded by the US National Science Foundation (NSF) and DOE under NSF award EEC-1041877; and in part by NSF award ECCS-1809458.
Funding (UAV-based edge computing study): supported in part by the National Natural Science Foundation of China under Grants 62341104, 62201085, 62325108, and 62341131.
Funding (dynamic distribution network reconfiguration study): supported by the Natural Science Foundation of Fujian Province (Nos. 2022J0512 and 2021J05134) and the National Natural Science Foundation of China (No. 52377087).