The Industrial Internet of Things (IIoT) is increasingly vulnerable to sophisticated cyber threats, particularly zero-day attacks that exploit unknown vulnerabilities and evade traditional security measures. To address this critical challenge, this paper proposes a dynamic defense framework named Zero-day-aware Stackelberg Game-based Multi-Agent Distributed Deep Deterministic Policy Gradient (ZSG-MAD3PG). The framework integrates Stackelberg game modeling with the Multi-Agent Distributed Deep Deterministic Policy Gradient (MAD3PG) algorithm and incorporates defensive deception (DD) strategies to achieve adaptive and efficient protection. While conventional methods typically incur considerable resource overhead and exhibit high latency due to static or rigid defensive mechanisms, the proposed ZSG-MAD3PG framework mitigates these limitations through multi-stage game modeling and adaptive learning, enabling more efficient resource utilization and faster response times. The Stackelberg-based architecture allows defenders to dynamically optimize packet-sampling strategies while attackers adjust their tactics, driving the game to equilibrium quickly. Furthermore, dynamic deception techniques reduce the time attacks can remain concealed and lower the overall system burden. A lightweight behavioral-fingerprinting detection mechanism further enhances real-time zero-day attack identification within industrial device clusters. ZSG-MAD3PG demonstrates higher true positive rates (TPR) and lower false alarm rates (FAR) than existing methods, while also achieving better latency, resource efficiency, and stealth adaptability in IIoT zero-day defense scenarios.
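As a rough illustration of the Stackelberg structure described above, the sketch below computes a leader (defender) packet-sampling rate by anticipating the follower's (attacker's) best response on a grid. Both utility functions are invented placeholders for illustration, not the paper's game model.

```python
import numpy as np

# Hypothetical utilities: the defender (leader) chooses a packet-sampling
# rate s; the attacker (follower) observes s and picks an attack intensity a.
def attacker_utility(s, a):
    # Gain grows with intensity but shrinks as sampling coverage rises.
    return a * (1.0 - s) - 0.5 * a ** 2

def defender_utility(s, a):
    # Defender pays for undetected damage and for sampling overhead.
    return -(a * (1.0 - s)) - 0.2 * s

sampling_rates = np.linspace(0.0, 1.0, 101)
intensities = np.linspace(0.0, 1.0, 101)

best_s, best_u = None, -np.inf
for s in sampling_rates:
    # Follower best response: attacker maximizes its own utility given s.
    a_star = intensities[np.argmax([attacker_utility(s, a) for a in intensities])]
    u = defender_utility(s, a_star)  # leader anticipates the response
    if u > best_u:
        best_s, best_u = s, u

print(f"Stackelberg sampling rate: {best_s:.2f}, defender utility: {best_u:.3f}")
```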
Autonomous driving has witnessed rapid advancement; however, ensuring safe and efficient driving in intricate scenarios remains a critical challenge. In particular, traffic roundabouts pose a set of challenges to autonomous driving due to the unpredictable entry and exit of vehicles, susceptibility to traffic-flow bottlenecks, and imperfect data in perceiving environmental information, rendering them a vital issue in the practical application of autonomous driving. To address these traffic challenges, this work focuses on complex multi-lane roundabouts and proposes a Perception Enhanced Deep Deterministic Policy Gradient (PE-DDPG) model for autonomous driving in roundabouts. Specifically, the model incorporates an enhanced variational autoencoder featuring an integrated spatial attention mechanism alongside the Deep Deterministic Policy Gradient framework, enhancing the vehicle's capability to comprehend complex roundabout environments and make decisions. Furthermore, the PE-DDPG model combines a dynamic path-optimization strategy for roundabout scenarios, effectively mitigating traffic bottlenecks and augmenting throughput efficiency. Extensive experiments were conducted on the collaborative simulation platform of CARLA and SUMO, and the results show that the proposed PE-DDPG outperforms the baseline methods in terms of training convergence, driving smoothness, and traffic efficiency under diverse traffic-flow patterns and penetration rates of autonomous vehicles (AVs). In general, the proposed PE-DDPG model can be employed for autonomous driving in complex scenarios with imperfect data.
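The spatial-attention idea can be pictured as a small module dropped into a convolutional VAE encoder. The sketch below uses a generic channel-pooling attention mask (a common design pattern); the layer sizes and input shape are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Minimal spatial-attention block inserted into a convolutional encoder.
class SpatialAttention(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        # Summarize channels by average- and max-pooling, then learn a
        # per-pixel mask that reweights the feature map.
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        mask = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * mask

encoder = nn.Sequential(
    nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
    SpatialAttention(),
    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
)
z = encoder(torch.randn(1, 3, 64, 64))  # feature map feeding the DDPG state
print(z.shape)
```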
Deep deterministic policy gradient (DDPG) has been proven effective in optimizing particle swarm optimization (PSO), but whether DDPG can optimize multi-objective discrete particle swarm optimization (MODPSO) remains to be determined. The present work probes this question. Experiments showed that DDPG can not only quickly improve the convergence speed of MODPSO but also overcome the local-optimum problem from which MODPSO may suffer. These findings are of great significance for both the theoretical study and the application of MODPSO.
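One minimal way to picture "DDPG optimizing PSO" is a controller that reads swarm statistics each iteration and emits PSO parameters. In the sketch below the trained actor is replaced by a hand-written stub, and the sphere objective is a toy assumption; neither comes from the paper.

```python
import numpy as np

def objective(x):
    # Toy sphere function; the paper's (multi-objective, discrete) problem differs.
    return np.sum(x ** 2, axis=1)

def policy(diversity):
    # Stand-in for a trained DDPG actor: low diversity -> explore (large w),
    # high diversity -> exploit (small w).
    return 0.4 + 0.5 * np.exp(-diversity)

rng = np.random.default_rng(0)
pos = rng.uniform(-5, 5, (30, 10))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), objective(pos)
gbest = pbest[np.argmin(pbest_f)]

for _ in range(100):
    w = policy(pos.std())                   # adaptive inertia weight
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    f = objective(pos)
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[np.argmin(pbest_f)]

print("best value:", pbest_f.min())
```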
High penetration of renewable energy sources (RESs) induces sharply fluctuating feeder power, leading to voltage deviation in active distribution systems. To prevent voltage violations, multi-terminal soft open points (M-SOPs) have been integrated into distribution systems to enhance voltage-control flexibility. However, M-SOP voltage control recalculated in real time cannot adapt to the rapid fluctuations of photovoltaic (PV) power, fundamentally limiting the voltage controllability of M-SOPs. To address this issue, a full-model-free adaptive graph deep deterministic policy gradient (FAG-DDPG) model is proposed for M-SOP voltage control. Specifically, an attention-based adaptive graph convolutional network (AGCN) is leveraged to extract the complex correlation features of nodal information, improving the policy-learning ability. Then, an AGCN-based surrogate model is trained to replace the power-flow calculation and thereby achieve model-free control. Furthermore, the deep deterministic policy gradient (DDPG) algorithm allows the FAG-DDPG model to learn an optimal M-SOP control strategy through continuous interaction with the AGCN-based surrogate model. Numerical tests on modified IEEE 33-node and 123-node systems and a real 76-node distribution system demonstrate the effectiveness and generalization ability of the proposed FAG-DDPG model.
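A toy attention-based graph convolution conveys the AGCN ingredient: learned attention weights replace fixed adjacency weights when aggregating nodal features. Everything below (dimensions, the three-node graph, the feature layout) is an illustrative assumption, not the paper's network.

```python
import torch
import torch.nn as nn

class AttentionGraphConv(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)
        self.att = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x, adj):
        h = self.lin(x)                                # (N, out_dim)
        n = h.size(0)
        pairs = torch.cat([h.repeat_interleave(n, 0),  # all (i, j) pairs
                           h.repeat(n, 1)], dim=1)
        e = self.att(pairs).view(n, n)
        e = e.masked_fill(adj == 0, float("-inf"))     # attend to neighbors only
        alpha = torch.softmax(e, dim=1)                # learned edge weights
        return torch.relu(alpha @ h)

adj = torch.tensor([[1, 1, 0], [1, 1, 1], [0, 1, 1]])  # 3-node toy feeder
x = torch.randn(3, 4)                                  # nodal features (e.g., P, Q, V)
layer = AttentionGraphConv(4, 8)
print(layer(x, adj).shape)
```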
In the mid-to-late stages of gas reservoir development, liquid loading in gas wells becomes a common challenge. Plunger lift, as an intermittent production technique, is widely used for deliquification in gas wells. With the advancement of big data and artificial intelligence, oil and gas field development is trending toward intelligent, unmanned, and automated operations. Currently, the optimization of plunger lift working systems relies primarily on expert experience and manual control, focusing mainly on the success of the plunger lift without adequately considering the impact of different working systems on gas production. Additionally, liquid loading in gas wells is a dynamic process, and the intermittent nature of plunger lift requires accurate modeling; describing reservoir flow with constant inflow dynamics introduces significant errors. To address these challenges, this study establishes a coupled wellbore-reservoir model for plunger lift wells and validates the computed wellhead pressure against field measurements. Building on this model, a novel optimization control algorithm based on the deep deterministic policy gradient (DDPG) framework is proposed. The algorithm optimizes plunger lift working systems to balance overall reservoir pressure, stabilize gas-water ratios, and maximize gas production. Through simulation experiments in three production-optimization scenarios, the effectiveness of reinforcement learning algorithms (including RL, PPO, DQN, and the proposed DDPG) and traditional optimization algorithms (including GA, PSO, and Bayesian optimization) in enhancing production efficiency is compared. The results demonstrate that the coupled model provides highly accurate calculations and can precisely describe the transient production of wellbore and gas reservoir systems. The proposed DDPG algorithm achieves the highest reward during training with minimal error, yielding a potential increase in cumulative gas production of up to 5% and in cumulative liquid production of 252%. The DDPG algorithm is robust across different optimization scenarios, showcasing excellent adaptability and generalization capability.
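The control problem reduces to an MDP in which the agent sets plunger open/shut durations each cycle and is rewarded for production while penalized for pressure imbalance. The toy environment below shows only that interface; its one-line dynamics are invented stand-ins for the paper's coupled wellbore-reservoir model.

```python
import numpy as np

class PlungerLiftEnv:
    """Sketch of a plunger-lift control interface (toy dynamics, not the paper's)."""
    def reset(self):
        self.pressure = 1.0  # normalized reservoir pressure
        return np.array([self.pressure])

    def step(self, action):
        open_time, shut_time = np.clip(action, 0.1, 1.0)
        gas = self.pressure * open_time                 # toy production term
        self.pressure += 0.05 * shut_time - 0.08 * open_time
        self.pressure = float(np.clip(self.pressure, 0.0, 1.2))
        reward = gas - 0.5 * abs(self.pressure - 0.8)   # production vs. balance
        return np.array([self.pressure]), reward, False, {}

env = PlungerLiftEnv()
obs = env.reset()
obs, r, done, _ = env.step(np.array([0.6, 0.4]))  # action a DDPG actor would emit
print(obs, r)
```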
With the rapid development of Intelligent Transportation Systems (ITS), many new applications for Intelligent Connected Vehicles (ICVs) have sprung up. To tackle the conflict between delay-sensitive applications and resource-constrained vehicles, the computation offloading paradigm, which transfers computation tasks from ICVs to edge computing nodes, has received extensive attention. However, the dynamic network conditions caused by vehicle mobility and the unbalanced computing load of edge nodes pose challenges for ITS. In this paper, we propose a heterogeneous Vehicular Edge Computing (VEC) architecture with Task Vehicles (TaVs), Service Vehicles (SeVs), and Roadside Units (RSUs), along with a distributed algorithm, PG-MRL, which jointly optimizes offloading decisions and resource allocation. In the first stage, the offloading decisions of TaVs are obtained through a potential game. In the second stage, a multi-agent Deep Deterministic Policy Gradient (DDPG), a deep reinforcement learning algorithm with centralized training and distributed execution, is proposed to optimize the real-time transmission power and subchannel selection. Simulation results show that the proposed PG-MRL algorithm achieves significant improvements over baseline algorithms in terms of system delay.
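The centralized-training, distributed-execution pattern used in the second stage can be sketched in a few lines: each vehicle has its own actor over local observations, while a joint critic (used only during training) scores the joint observation-action vector. The agent count and dimensions below are arbitrary placeholders.

```python
import torch
import torch.nn as nn

N_AGENTS, OBS, ACT = 3, 8, 2

# One decentralized actor per vehicle; a single centralized critic.
actors = [nn.Sequential(nn.Linear(OBS, 64), nn.ReLU(),
                        nn.Linear(64, ACT), nn.Tanh()) for _ in range(N_AGENTS)]
critic = nn.Sequential(nn.Linear(N_AGENTS * (OBS + ACT), 128), nn.ReLU(),
                       nn.Linear(128, 1))

obs = torch.randn(N_AGENTS, OBS)
# Distributed execution: each actor sees only its own observation.
acts = torch.stack([actors[i](obs[i]) for i in range(N_AGENTS)])
# Centralized training: the critic sees everything.
q = critic(torch.cat([obs.flatten(), acts.flatten()]))
print(acts.shape, q.item())
```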
This study proposes an automatic control system for Autonomous Underwater Vehicle (AUV) docking, utilizing a digital twin (DT) environment based on the HoloOcean platform, which integrates six-degree-of-freedom (6-DOF) motion equations and hydrodynamic coefficients to create a realistic simulation. Although conventional model-based and visual servoing approaches often struggle in dynamic underwater environments due to limited adaptability and extensive parameter tuning requirements, deep reinforcement learning (DRL) offers a promising alternative. In the positioning stage, the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm is employed for synchronized depth and heading control, which offers stable training, reduced overestimation bias, and superior handling of continuous control compared to other DRL methods. During the searching stage, zig-zag heading motion combined with a state-of-the-art object detection algorithm facilitates docking station localization. For the docking stage, this study proposes an innovative Image-based DDPG (I-DDPG), enhanced and trained in a Unity-MATLAB simulation environment, to achieve visual target tracking. Furthermore, integrating a DT environment enables efficient and safe policy training, reduces dependence on costly real-world tests, and improves sim-to-real transfer performance. Both simulation and real-world experiments were conducted, demonstrating the effectiveness of the system in improving AUV control strategies and supporting the transition from simulation to real-world operations in underwater environments. The results highlight the scalability and robustness of the proposed system, as evidenced by the TD3 controller achieving 25% less oscillation than the adaptive fuzzy controller when reaching the target depth, thereby demonstrating superior stability, accuracy, and potential for broader and more complex autonomous underwater tasks.
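The two TD3 mechanisms credited above for stable training, clipped target-policy smoothing and the minimum over twin critics, look roughly like the sketch below. The networks and dimensions are placeholders, not the AUV depth/heading controller.

```python
import torch
import torch.nn as nn

obs_dim, act_dim = 6, 2
q1 = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))
q2 = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))
actor_target = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                             nn.Linear(64, act_dim), nn.Tanh())

def td3_target(next_obs, reward, gamma=0.99, noise_std=0.2, noise_clip=0.5):
    # Clipped noise on the target action smooths the value estimate.
    noise = (torch.randn(act_dim) * noise_std).clamp(-noise_clip, noise_clip)
    a_next = (actor_target(next_obs) + noise).clamp(-1.0, 1.0)
    sa = torch.cat([next_obs, a_next])
    q_min = torch.min(q1(sa), q2(sa))  # twin critics -> pessimistic estimate
    return reward + gamma * q_min

print(td3_target(torch.randn(obs_dim), torch.tensor([1.0])))
```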
Low earth orbit (LEO) satellites with wide coverage can carry mobile edge computing (MEC) servers with powerful computing capabilities, forming a LEO satellite edge computing system that provides computing services for global ground users. In this paper, the computation offloading and resource allocation problems are formulated as a mixed integer nonlinear program (MINLP). A computation offloading algorithm based on the deep deterministic policy gradient (DDPG) is proposed to obtain the user offloading decisions and user uplink transmission power, and a convex optimization algorithm based on the Lagrange multiplier method is used to obtain the optimal MEC server resource allocation scheme. In addition, an expression for the suboptimal user local CPU cycles is derived by a relaxation method. Simulation results show that the proposed algorithm achieves excellent convergence and significantly reduces the system utility value compared with other algorithms, albeit at a considerable time cost.
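The Lagrangian step can be sketched for a simple version of the allocation subproblem: split server capacity F among offloaded tasks to minimize total execution delay sum(c_i / f_i) subject to sum(f_i) = F. Stationarity of c_i/f_i + lam*f_i gives f_i = sqrt(c_i/lam), and the budget fixes lam, so f_i = F*sqrt(c_i)/sum_j sqrt(c_j). This simplified objective and the cycle counts below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def allocate(cycles, F):
    # Closed-form Lagrange-multiplier solution of min sum(c_i/f_i) s.t. sum(f_i)=F.
    w = np.sqrt(cycles)
    return F * w / w.sum()

cycles = np.array([2e9, 5e9, 1e9])  # required CPU cycles per offloaded task
f = allocate(cycles, F=10e9)        # MEC server capacity in cycles/s
print(f, "total delay:", np.sum(cycles / f))
```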
With the rapid growth of connected devices, traditional edge-cloud systems are under overload pressure. Using mobile edge computing (MEC) to assist unmanned aerial vehicles (UAVs) as low altitude platform stations (LAPS) for communication and computation, building air-ground integrated networks (AGINs), offers a promising solution for seamless network coverage of remote internet of things (IoT) devices in the future. To address the performance demands of future mobile devices (MDs), we propose an MEC-assisted AGIN system. The goal is to minimize the long-term computational overhead of MDs by jointly optimizing transmission power, flight trajectories, resource allocation, and offloading ratios, while utilizing non-orthogonal multiple access (NOMA) to improve device connectivity for large-scale MDs and spectral efficiency. We first design an adaptive clustering scheme based on K-Means to cluster MDs and establish communication links, improving efficiency and load balancing. Then, considering system dynamics, we introduce a partial computation offloading algorithm based on the multi-agent deep deterministic policy gradient (MADDPG), modeling the multi-UAV computation offloading problem as a Markov decision process (MDP). The algorithm optimizes resource allocation through centralized training and distributed execution, reducing computational overhead. Simulation results show that the proposed algorithm not only converges stably but also outperforms other benchmark algorithms in handling complex scenarios with multiple devices.
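The clustering step can be pictured directly: group ground devices by position with K-Means so that each UAV serves one cluster. The sketch below is plain NumPy with random toy coordinates; the UAV count and service area are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
devices = rng.uniform(0, 1000, (60, 2))  # MD positions in meters
k = 4                                    # number of UAVs

# Standard Lloyd iterations.
centers = devices[rng.choice(len(devices), k, replace=False)]
for _ in range(20):
    labels = np.argmin(np.linalg.norm(devices[:, None] - centers, axis=2), axis=1)
    centers = np.array([devices[labels == j].mean(axis=0)
                        if np.any(labels == j) else centers[j]  # keep empty clusters put
                        for j in range(k)])

print("cluster sizes:", np.bincount(labels, minlength=k))
```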
The low Earth orbit (LEO) satellite networks have outstanding advantages such as wide coverage and independence from the geographic environment, so they can provide a broader range of communication services and have become an essential supplement to the terrestrial network. However, the dynamic changes and uneven distribution of satellite network traffic inevitably challenge multipath routing. Even worse, the harsh space environment often leads to incomplete collection of the network state data used for routing decisions, which further complicates this challenge. To address this problem, this paper proposes a state-incomplete intelligent dynamic multipath routing algorithm (SIDMRA) that maximizes network efficiency even with incomplete state data as input. Specifically, we model the multipath routing problem as a Markov decision process (MDP) and then combine the deep deterministic policy gradient (DDPG) and the K shortest paths (KSP) algorithm to solve for the optimal multipath routing policy. We use the temporal correlation of the satellite network state to fit the incomplete state data and then use a message passing neural network (MPNN) for data enhancement. Simulation results show that the proposed algorithm outperforms baseline algorithms in average end-to-end delay and packet loss rate and performs stably under certain missing rates of state data.
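The KSP ingredient generates the candidate path set that the learned policy then chooses among. The sketch below uses networkx's Yen-style path enumeration on a toy topology with made-up link delays; the graph and weights are assumptions.

```python
import itertools
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 10), ("B", "D", 10), ("A", "C", 15),
    ("C", "D", 12), ("B", "C", 5),
])

K = 3
# shortest_simple_paths yields loop-free paths in order of increasing weight.
paths = list(itertools.islice(
    nx.shortest_simple_paths(G, "A", "D", weight="weight"), K))
for p in paths:
    print(p, "delay:", nx.path_weight(G, p, weight="weight"))
```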
The path planning of Unmanned Aerial Vehicles (UAVs) is a critical issue in emergency communication and rescue operations, especially in adversarial urban environments. Due to the continuity of the flying space, complex building obstacles, and the aircraft's high dynamics, traditional algorithms cannot find the optimal collision-free flying path between the UAV station and the destination. Accordingly, in this paper, we study the fast UAV path planning problem in a 3D urban environment from a source point to a target point and propose a Three-Step Experience Buffer Deep Deterministic Policy Gradient (TSEB-DDPG) algorithm. We first build the 3D model of a complex urban environment with buildings and project the 3D building surfaces into many 2D geometric shapes. After this transformation, we propose Hierarchical Learning Particle Swarm Optimization (HL-PSO) to obtain an empirical path. Then, to ensure the accuracy of the obtained paths, the empirical path, collision information, and fast transition information are stored in the three experience buffers of the TSEB-DDPG algorithm as dynamic guidance information, and the sampling ratio of each buffer is dynamically adapted to the training stage. Moreover, we design a reward mechanism to improve the convergence speed of the DDPG algorithm for UAV path planning. The proposed TSEB-DDPG algorithm is compared experimentally with three widely used competitors, and the results show that it achieves the fastest convergence speed and the highest accuracy. We also conduct experiments in real scenarios and compare the real path planning obtained by the HL-PSO, DDPG, and TSEB-DDPG algorithms. The results show that the TSEB-DDPG algorithm achieves nearly the best accuracy, average actual path-planning time, and success rate.
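Sampling a training batch from three buffers with stage-dependent ratios can be sketched as below. The schedule (lean on the empirical path early, on the agent's own fast transitions late) is an assumption illustrating the mechanism, not the paper's exact adaptation rule.

```python
import random

buffers = {"empirical": list(range(100)),      # stand-ins for stored transitions
           "collision": list(range(100, 160)),
           "fast": list(range(160, 300))}

def stage_ratios(episode, total=1000):
    t = episode / total
    return {"empirical": 0.6 * (1 - t), "collision": 0.2, "fast": 0.2 + 0.6 * t}

def sample_batch(episode, batch_size=32):
    ratios = stage_ratios(episode)
    batch = []
    for name, r in ratios.items():
        n = max(1, int(batch_size * r / sum(ratios.values())))
        batch += random.sample(buffers[name], min(n, len(buffers[name])))
    return batch

print(len(sample_batch(episode=50)), len(sample_batch(episode=900)))
```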
The popularity of quadrotor Unmanned Aerial Vehicles (UAVs) stems from their simple propulsion systems and structural design. However, their complex and nonlinear dynamic behavior presents a significant challenge for control, necessitating sophisticated algorithms to ensure stability and accuracy in flight. Various strategies have been explored by researchers and control engineers, with learning-based methods like reinforcement learning, deep learning, and neural networks showing promise in enhancing the robustness and adaptability of quadrotor control systems. This paper investigates a Reinforcement Learning (RL) approach for both high- and low-level quadrotor control systems, focusing on attitude stabilization and position tracking tasks. A novel reward function and actor-critic network structures are designed to stimulate high-order observable states, improving the agent's understanding of the quadrotor's dynamics and environmental constraints. To address the challenge of RL hyper-parameter tuning, a new framework is introduced that combines Simulated Annealing (SA) with a reinforcement learning algorithm, specifically Simulated Annealing-Twin Delayed Deep Deterministic Policy Gradient (SA-TD3). This approach is evaluated for path-following and stabilization tasks through comparative assessments with two commonly used control methods: Backstepping and Sliding Mode Control (SMC). While the implementation of the well-trained agents exhibited unexpected behavior during real-world testing, a reduced neural network used for altitude control was successfully implemented on a Parrot Mambo mini drone. The results showcase the potential of the proposed SA-TD3 framework for real-world applications, demonstrating improved stability and precision across various test scenarios and highlighting its feasibility for practical deployment.
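The SA outer loop for hyper-parameter tuning can be sketched as below; `train_and_eval` is a placeholder for a full TD3 training run returning tracking performance, and the made-up optimum, neighborhood moves, and cooling schedule are assumptions.

```python
import math
import random

def train_and_eval(lr, tau):
    # Stand-in objective with an invented optimum near lr=3e-4, tau=0.005.
    return -(math.log10(lr) + 3.5) ** 2 - (1000 * (tau - 0.005)) ** 2

def neighbor(lr, tau):
    return (min(max(lr * 10 ** random.uniform(-0.3, 0.3), 1e-5), 1e-2),
            min(max(tau + random.uniform(-0.002, 0.002), 1e-4), 0.05))

lr, tau = 1e-3, 0.01
score, T = train_and_eval(lr, tau), 1.0
for step in range(200):
    cand = neighbor(lr, tau)
    cand_score = train_and_eval(*cand)
    # Accept worse candidates with probability exp(delta / T) to escape local optima.
    if cand_score > score or random.random() < math.exp((cand_score - score) / T):
        (lr, tau), score = cand, cand_score
    T *= 0.98  # geometric cooling

print(f"lr={lr:.2e}, tau={tau:.4f}, score={score:.3f}")
```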
Plug-in Hybrid Electric Vehicles (PHEVs) represent an innovative breed of transportation, harnessing diverse power sources for enhanced performance. Energy management strategies (EMSs) that coordinate and control the different energy sources are a critical component of PHEV control technology, directly impacting overall vehicle performance. This study proposes an improved deep reinforcement learning (DRL)-based EMS that optimizes real-time energy allocation and coordinates the operation of multiple power sources. Conventional DRL algorithms struggle to effectively explore all possible state-action combinations within high-dimensional state and action spaces. They often fail to strike an optimal balance between exploration and exploitation, and their assumption of a static environment limits their ability to adapt to changing conditions. Moreover, these algorithms suffer from low sample efficiency. Collectively, these factors lead to convergence difficulties, low learning efficiency, and instability. To address these challenges, the Deep Deterministic Policy Gradient (DDPG) algorithm is enhanced with entropy regularization and a summation-tree-based Prioritized Experience Replay (PER) method, improving exploration performance and the learning efficiency drawn from experience samples. Additionally, the corresponding Markov Decision Process (MDP) is established. Finally, an EMS based on the improved DRL model is presented. Comparative simulation experiments are conducted against rule-based, optimization-based, and DRL-based EMSs. The proposed strategy exhibits minimal deviation from the optimal solution obtained by the dynamic programming (DP) strategy, which requires global information. In typical driving scenarios based on the World Light Vehicle Test Cycle (WLTC) and the New European Driving Cycle (NEDC), the proposed method achieved a fuel consumption of 2698.65 g and an Equivalent Fuel Consumption (EFC) of 2696.77 g. Against the DP baseline, the proposed method improved the fuel efficiency variances (FEV) by 18.13%, 15.1%, and 8.37% over the Deep Q-Network (DQN), Double DRL (DDRL), and original DDPG methods, respectively. These outcomes demonstrate that the proposed EMS based on the improved DRL framework possesses good real-time performance, stability, and reliability, effectively optimizing vehicle economy and fuel consumption.
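The summation tree is what makes priority-proportional sampling cheap: leaves hold TD-error-based priorities, internal nodes hold subtree sums, so drawing a sample is O(log n). The sketch below is a generic sum-tree for PER with toy priorities, not the paper's implementation.

```python
import random

class SumTree:
    def __init__(self, capacity):
        self.capacity = capacity
        self.tree = [0.0] * (2 * capacity)  # 1-based heap layout, leaves at the end
        self.next = 0

    def add(self, priority):
        self.update(self.capacity + self.next, priority)
        self.next = (self.next + 1) % self.capacity

    def update(self, i, priority):
        delta = priority - self.tree[i]
        while i >= 1:          # propagate the change up to the root
            self.tree[i] += delta
            i //= 2

    def sample(self):
        s, i = random.uniform(0, self.tree[1]), 1
        while i < self.capacity:            # descend to a leaf
            left = 2 * i
            if s <= self.tree[left]:
                i = left
            else:
                s -= self.tree[left]
                i = left + 1
        return i - self.capacity            # index of the sampled transition

tree = SumTree(8)
for p in [0.1, 0.5, 2.0, 0.2]:             # toy TD-error priorities
    tree.add(p)
print([tree.sample() for _ in range(5)])    # index 2 dominates, as expected
```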
In consideration of the field-of-view (FOV) angle constraint, this study focuses on the guidance problem with impact time control. A deep reinforcement learning guidance method is given for the missile to achieve the desired impact time while meeting the FOV angle constraint. On the basis of the proportional navigation guidance framework, an auxiliary control term is supplemented by the distributed deep deterministic policy gradient algorithm, in which the reward functions are designed to decrease the time-to-go error and improve the terminal guidance accuracy. Numerical simulation demonstrates that a missile governed by the presented deep reinforcement learning guidance law can hit the target successfully at the appointed arrival time.
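The guidance structure described here is a proportional-navigation baseline plus an additive correction from the trained policy. The sketch below shows that composition with the classic PN law a = N * V_c * (LOS rate); the policy is a zero stub, and the navigation constant, geometry, and units are toy values.

```python
import numpy as np

def pn_acceleration(closing_speed, los_rate, N=3.0):
    # Classic proportional navigation: a = N * V_c * lambda_dot.
    return N * closing_speed * los_rate

def rl_correction(state):
    return 0.0  # placeholder for the distributed-DDPG actor output

v_c, lam_dot = 300.0, 0.02                  # m/s, rad/s
state = np.array([v_c, lam_dot, 25.0])      # e.g., includes time-to-go error
a_cmd = pn_acceleration(v_c, lam_dot) + rl_correction(state)
print(f"commanded acceleration: {a_cmd:.1f} m/s^2")
```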
Unmanned Aerial Vehicles (UAVs) play a vital role in military warfare. In a variety of battlefield mission scenarios, UAVs are required to fly safely to designated locations without human intervention. Therefore, finding a suitable method to solve the UAV Autonomous Motion Planning (AMP) problem can improve the success rate of UAV missions to a certain extent. In recent years, many studies have applied Deep Reinforcement Learning (DRL) methods to the AMP problem and achieved good results. From the perspective of sampling, this paper designs a double-screening sampling method, combines it with the Deep Deterministic Policy Gradient (DDPG) algorithm, and proposes the Relevant Experience Learning-DDPG (REL-DDPG) algorithm. REL-DDPG uses a Prioritized Experience Replay (PER) mechanism to break the correlation of consecutive experiences in the experience pool, finds the experiences most similar to the current state to learn from, following theories from human education, and thereby expands the influence of the learning process on action selection in the current state. All experiments are run in a complex unknown simulation environment constructed from the parameters of a real UAV. Training experiments show that REL-DDPG improves both the convergence speed and the converged result compared to the state-of-the-art DDPG algorithm, while testing experiments show the applicability of the algorithm and investigate its performance under different parameter conditions.
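The "relevant experience" idea can be pictured as ranking stored transitions by the similarity of their states to the current state and training on the nearest ones. The Euclidean metric, buffer size, and state dimension below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
buffer_states = rng.normal(size=(5000, 6))  # states of stored transitions
current_state = rng.normal(size=6)

def most_relevant(k=64):
    # Nearest neighbors of the current state in the experience pool.
    d = np.linalg.norm(buffer_states - current_state, axis=1)
    return np.argsort(d)[:k]

idx = most_relevant()
print("closest distance:", np.linalg.norm(buffer_states[idx[0]] - current_state))
```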
The ever-changing battlefield environment requires the use of robust and adaptive technologies integrated into a reliable platform. Unmanned combat aerial vehicles (UCAVs) aim to integrate such advanced technologies while increasing the tactical capabilities of combat aircraft. Common UCAVs, taken here as the research object, use a neural-network fitting strategy to obtain values of attack areas. However, this simple strategy cannot cope with complex environmental changes or autonomously optimize decision-making. To solve this problem, this paper proposes a new deep deterministic policy gradient (DDPG) strategy based on deep reinforcement learning for the attack-area fitting of UCAVs on the future battlefield. Simulation results show that the autonomy and environmental adaptability of UCAVs on the future battlefield will be improved with the new DDPG algorithm and that the training process converges quickly. With the well-trained deep network, optimal attack-area values can be obtained in real time throughout the whole flight.
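For reference, the core DDPG update that such a strategy builds on can be sketched in a few lines: the critic regresses onto a bootstrapped target and the actor ascends the critic's value estimate. The networks, dimensions, and random batch below are placeholders, not the paper's setup.

```python
import copy
import torch
import torch.nn as nn

obs_dim, act_dim, gamma, tau = 5, 1, 0.99, 0.005
actor = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                      nn.Linear(64, act_dim), nn.Tanh())
critic = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(),
                       nn.Linear(64, 1))
actor_t, critic_t = copy.deepcopy(actor), copy.deepcopy(critic)  # target networks
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)

# Random stand-in for a replay-buffer batch: (s, a, r, s').
s, a, r, s2 = (torch.randn(64, obs_dim), torch.rand(64, act_dim),
               torch.randn(64, 1), torch.randn(64, obs_dim))

with torch.no_grad():  # bootstrapped TD target from the target networks
    y = r + gamma * critic_t(torch.cat([s2, actor_t(s2)], dim=1))
critic_loss = nn.functional.mse_loss(critic(torch.cat([s, a], dim=1)), y)
opt_c.zero_grad(); critic_loss.backward(); opt_c.step()

# Deterministic policy gradient: only the actor optimizer steps here.
actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()
opt_a.zero_grad(); actor_loss.backward(); opt_a.step()

for p, pt in zip(actor.parameters(), actor_t.parameters()):  # Polyak averaging
    pt.data.mul_(1 - tau).add_(tau * p.data)
```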
The coordinated optimization problem of the electricity-gas-heat integrated energy system (IES) is characterized by strong coupling, non-convexity, and nonlinearity. Centralized optimization entails high communication costs and complex modeling, while traditional numerical iterative solutions cannot handle uncertainty or deliver sufficient solution efficiency, making them difficult to apply online. For the coordinated optimization of the electricity-gas-heat IES, this study constructs a model of the distributed IES with a dynamic distribution factor and transforms the centralized optimization problem into a distributed one in a multi-agent reinforcement learning environment using the multi-agent deep deterministic policy gradient. Introducing the dynamic distribution factor allows the system to account for the impact of real-time supply and demand changes on system optimization, dynamically coordinating different energy sources for complementary utilization and effectively improving system economy. Compared with centralized optimization, the distributed model with multiple decision centers achieves similar results while easing the pressure on system communication. The proposed method considers the dual uncertainty of renewable energy and load during training. Compared with the traditional iterative solution method, it copes better with uncertainty and enables real-time decision making, which is conducive to online application. Finally, the effectiveness of the proposed method is verified on an example of an IES coupling three energy hub agents.
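One simple reading of a dynamic distribution factor is that each energy hub's share of the load is recomputed from real-time available supply rather than fixed in advance. The proportional rule, supplies, and load below are invented toy numbers, not the paper's formulation.

```python
import numpy as np

def distribution_factors(available):
    # Shares track real-time supply instead of being static constants.
    return available / available.sum()

load = 120.0                              # current total demand (toy units)
available = np.array([50.0, 90.0, 40.0])  # three energy hub agents
alpha = distribution_factors(available)
dispatch = alpha * load
print("factors:", alpha.round(3), "dispatch:", dispatch.round(1))
```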
In this paper, a missile terminal guidance law based on a new Deep Deterministic Policy Gradient (DDPG) algorithm is proposed to intercept a maneuvering target equipped with an infrared decoy. First, to deal with the issue that the missile cannot accurately distinguish the target from the decoy, the energy-center method is employed to obtain the equivalent energy center (called the virtual target) of the target and decoy, and a model of the missile and the virtual target is established. Then, an improved DDPG algorithm is proposed based on a trusted-search strategy, which significantly increases the training efficiency of the original DDPG algorithm. Furthermore, combining the established model, the network obtained by the improved DDPG algorithm, and the reward function, an intelligent missile terminal guidance scheme is proposed. Specifically, a heuristic reward function is designed for training and learning in combat scenarios. Finally, the effectiveness and robustness of the proposed guidance law are verified by Monte Carlo tests, and the simulation results of the proposed scheme are compared with those of other methods to further demonstrate its superior performance.
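The energy-center idea can be sketched directly: since the seeker cannot separate target and decoy, it effectively tracks the intensity-weighted centroid of the two infrared sources (the "virtual target"). The positions and intensities below are toy values for illustration.

```python
import numpy as np

def energy_center(positions, intensities):
    # Intensity-weighted centroid of the infrared sources.
    w = np.asarray(intensities, dtype=float)
    return (w[:, None] * np.asarray(positions)).sum(axis=0) / w.sum()

target_pos = np.array([1000.0, 500.0])
decoy_pos = np.array([980.0, 520.0])
center = energy_center([target_pos, decoy_pos], intensities=[1.0, 3.0])
print("virtual target position:", center)
```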
Eavesdropping attacks have become one of the most common attacks on networks because they are easy to implement. They not only leak transmitted data but can also develop into other, more harmful attacks. Routing randomization is an active research direction in moving target defense and has been proven an effective method of resisting eavesdropping attacks. To counter such attacks, this study analyzes the existing routing randomization methods and finds that their security and usability need further improvement. Given that eavesdropping attacks are “latent and transferable”, a routing randomization defense method based on deep reinforcement learning is proposed. The proposed method realizes routing randomization at packet-level granularity using programmable switches. To improve the security and quality of service of legitimate services in networks, the deep deterministic policy gradient is used to generate random routing schemes with support from powerful network state awareness, and in-band network telemetry provides real-time, accurate, and comprehensive network state awareness for the proposed method. Various experiments show that, compared with other typical routing randomization defense methods, the proposed method has obvious advantages in security and usability against eavesdropping attacks.
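Packet-level routing randomization can be pictured as each packet independently drawing one of several precomputed paths according to weights that the learned policy would output from telemetry. The paths and weights below are made up for illustration; a real deployment would install the drawn path as match-action rules on the programmable switches.

```python
import random

paths = [["s1", "s3", "s5"], ["s1", "s2", "s5"], ["s1", "s4", "s5"]]
weights = [0.5, 0.3, 0.2]  # stand-in for the DDPG-generated scheme

def route_packet():
    # Per-packet randomization: each packet draws its own path.
    return random.choices(paths, weights=weights, k=1)[0]

for _ in range(6):
    print(" -> ".join(route_packet()))
```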
基金funded in part by the Humanities and Social Sciences Planning Foundation of Ministry of Education of China under Grant No.24YJAZH123National Undergraduate Innovation and Entrepreneurship Training Program of China under Grant No.202510347069the Huzhou Science and Technology Planning Foundation under Grant No.2023GZ04.
文摘The Industrial Internet of Things(IIoT)is increasingly vulnerable to sophisticated cyber threats,particularly zero-day attacks that exploit unknown vulnerabilities and evade traditional security measures.To address this critical challenge,this paper proposes a dynamic defense framework named Zero-day-aware Stackelberg Game-based Multi-Agent Distributed Deep Deterministic Policy Gradient(ZSG-MAD3PG).The framework integrates Stackelberg game modeling with the Multi-Agent Distributed Deep Deterministic Policy Gradient(MAD3PG)algorithm and incorporates defensive deception(DD)strategies to achieve adaptive and efficient protection.While conventional methods typically incur considerable resource overhead and exhibit higher latency due to static or rigid defensive mechanisms,the proposed ZSG-MAD3PG framework mitigates these limitations through multi-stage game modeling and adaptive learning,enabling more efficient resource utilization and faster response times.The Stackelberg-based architecture allows defenders to dynamically optimize packet sampling strategies,while attackers adjust their tactics to reach rapid equilibrium.Furthermore,dynamic deception techniques reduce the time required for the concealment of attacks and the overall system burden.A lightweight behavioral fingerprinting detection mechanism further enhances real-time zero-day attack identification within industrial device clusters.ZSG-MAD3PG demonstrates higher true positive rates(TPR)and lower false alarm rates(FAR)compared to existing methods,while also achieving improved latency,resource efficiency,and stealth adaptability in IIoT zero-day defense scenarios.
基金supported in part by the projects of the National Natural Science Foundation of China(62376059,41971340)Fujian Provincial Department of Science and Technology(2023XQ008,2023I0024,2021Y4019),Fujian Provincial Department of Finance(GY-Z230007,GYZ23012)Fujian Key Laboratory of Automotive Electronics and Electric Drive(KF-19-22001).
文摘Autonomous driving has witnessed rapid advancement;however,ensuring safe and efficient driving in intricate scenarios remains a critical challenge.In particular,traffic roundabouts bring a set of challenges to autonomous driving due to the unpredictable entry and exit of vehicles,susceptibility to traffic flow bottlenecks,and imperfect data in perceiving environmental information,rendering them a vital issue in the practical application of autonomous driving.To address the traffic challenges,this work focused on complex roundabouts with multi-lane and proposed a Perception EnhancedDeepDeterministic Policy Gradient(PE-DDPG)for AutonomousDriving in the Roundabouts.Specifically,themodel incorporates an enhanced variational autoencoder featuring an integrated spatial attention mechanism alongside the Deep Deterministic Policy Gradient framework,enhancing the vehicle’s capability to comprehend complex roundabout environments and make decisions.Furthermore,the PE-DDPG model combines a dynamic path optimization strategy for roundabout scenarios,effectively mitigating traffic bottlenecks and augmenting throughput efficiency.Extensive experiments were conducted with the collaborative simulation platform of CARLA and SUMO,and the experimental results show that the proposed PE-DDPG outperforms the baseline methods in terms of the convergence capacity of the training process,the smoothness of driving and the traffic efficiency with diverse traffic flow patterns and penetration rates of autonomous vehicles(AVs).Generally,the proposed PE-DDPGmodel could be employed for autonomous driving in complex scenarios with imperfect data.
文摘Deep deterministic policy gradient(DDPG)has been proved to be effective in optimizing particle swarm optimization(PSO),but whether DDPG can optimize multi-objective discrete particle swarm optimization(MODPSO)remains to be determined.The present work aims to probe into this topic.Experiments showed that the DDPG can not only quickly improve the convergence speed of MODPSO,but also overcome the problem of local optimal solution that MODPSO may suffer.The research findings are of great significance for the theoretical research and application of MODPSO.
基金This work was supported by the National Natural Science Foundation of China(No.72331008)GuangdongNaturalScienceFoundation(No.2023A1515010653)+1 种基金Environment and Conservation fund(No.ECF 49/2022)PolyU research project 1-YXBL and CDAH.
文摘High penetration of renewable energy sources(RESs)induces sharply-fluctuating feeder power,leading to volt-age deviation in active distribution systems.To prevent voltage violations,multi-terminal soft open points(M-sOPs)have been integrated into the distribution systems to enhance voltage con-trol flexibility.However,the M-SOP voltage control recalculated in real time cannot adapt to the rapid fluctuations of photovol-taic(PV)power,fundamentally limiting the voltage controllabili-ty of M-SOPs.To address this issue,a full-model-free adaptive graph deep deterministic policy gradient(FAG-DDPG)model is proposed for M-SOP voltage control.Specifically,the attention-based adaptive graph convolutional network(AGCN)is lever-aged to extract the complex correlation features of nodal infor-mation to improve the policy learning ability.Then,the AGCN-based surrogate model is trained to replace the power flow cal-culation to achieve model-free control.Furthermore,the deep deterministic policy gradient(DDPG)algorithm allows FAG-DDPG model to learn an optimal control strategy of M-SOP by continuous interactions with the AGCN-based surrogate model.Numerical tests have been performed on modified IEEE 33-node,123-node,and a real 76-node distribution systems,which demonstrate the effectiveness and generalization ability of the proposed FAG-DDPGmodel.
基金support from Science Foundation of China University of Petroleum,Beijing(No.2462023YJRC019)National Natural Science Foundation of China(No.52204059)Key Core Technology Research Project Foundation of PetroChina Group(No.2023ZG18).
文摘In the mid-to-late stages of gas reservoir development,liquid loading in gas wells becomes a common challenge.Plunger lift,as an intermittent production technique,is widely used for deliquification in gas wells.With the advancement of big data and artificial intelligence,the future of oil and gas field development is trending towards intelligent,unmanned,and automated operations.Currently,the optimization of plunger lift working systems is primarily based on expert experience and manual control,focusing mainly on the success of the plunger lift without adequately considering the impact of different working systems on gas production.Additionally,liquid loading in gas wells is a dynamic process,and the intermittent nature of plunger lift requires accurate modeling;using constant inflow dynamics to describe reservoir flow introduces significant errors.To address these challenges,this study establishes a coupled wellbore-reservoir model for plunger lift wells and validates the computational wellhead pressure results against field measurements.Building on this model,a novel optimization control algorithm based on the deep deterministic policy gradient(DDPG)framework is proposed.The algorithm aims to optimize plunger lift working systems to balance overall reservoir pressure,stabilize gas-water ratios,and maximize gas production.Through simulation experiments in three different production optimization scenarios,the effectiveness of reinforcement learning algorithms(including RL,PPO,DQN,and the proposed DDPG)and traditional optimization algorithms(including GA,PSO,and Bayesian optimization)in enhancing production efficiency is compared.The results demonstrate that the coupled model provides highly accurate calculations and can precisely describe the transient production of wellbore and gas reservoir systems.The proposed DDPG algorithm achieves the highest reward value during training with minimal error,leading to a potential increase in cumulative gas production by up to 5%and cumulative liquid production by 252%.The DDPG algorithm exhibits robustness across different optimization scenarios,showcasing excellent adaptability and generalization capabilities.
基金supported by Future Network Scientific Research Fund Project (FNSRFP-2021-ZD-4)National Natural Science Foundation of China (No.61991404,61902182)+1 种基金National Key Research and Development Program of China under Grant 2020YFB1600104Key Research and Development Plan of Jiangsu Province under Grant BE2020084-2。
文摘With the rapid development of Intelligent Transportation Systems(ITS),many new applications for Intelligent Connected Vehicles(ICVs)have sprung up.In order to tackle the conflict between delay-sensitive applications and resource-constrained vehicles,computation offloading paradigm that transfers computation tasks from ICVs to edge computing nodes has received extensive attention.However,the dynamic network conditions caused by the mobility of vehicles and the unbalanced computing load of edge nodes make ITS face challenges.In this paper,we propose a heterogeneous Vehicular Edge Computing(VEC)architecture with Task Vehicles(TaVs),Service Vehicles(SeVs)and Roadside Units(RSUs),and propose a distributed algorithm,namely PG-MRL,which jointly optimizes offloading decision and resource allocation.In the first stage,the offloading decisions of TaVs are obtained through a potential game.In the second stage,a multi-agent Deep Deterministic Policy Gradient(DDPG),one of deep reinforcement learning algorithms,with centralized training and distributed execution is proposed to optimize the real-time transmission power and subchannel selection.The simulation results show that the proposed PG-MRL algorithm has significant improvements over baseline algorithms in terms of system delay.
基金supported by the National Science and Technology Council,Taiwan[Grant NSTC 111-2628-E-006-005-MY3]supported by the Ocean Affairs Council,Taiwansponsored in part by Higher Education Sprout Project,Ministry of Education to the Headquarters of University Advancement at National Cheng Kung University(NCKU).
文摘This study proposes an automatic control system for Autonomous Underwater Vehicle(AUV)docking,utilizing a digital twin(DT)environment based on the HoloOcean platform,which integrates six-degree-of-freedom(6-DOF)motion equations and hydrodynamic coefficients to create a realistic simulation.Although conventional model-based and visual servoing approaches often struggle in dynamic underwater environments due to limited adaptability and extensive parameter tuning requirements,deep reinforcement learning(DRL)offers a promising alternative.In the positioning stage,the Twin Delayed Deep Deterministic Policy Gradient(TD3)algorithm is employed for synchronized depth and heading control,which offers stable training,reduced overestimation bias,and superior handling of continuous control compared to other DRL methods.During the searching stage,zig-zag heading motion combined with a state-of-the-art object detection algorithm facilitates docking station localization.For the docking stage,this study proposes an innovative Image-based DDPG(I-DDPG),enhanced and trained in a Unity-MATLAB simulation environment,to achieve visual target tracking.Furthermore,integrating a DT environment enables efficient and safe policy training,reduces dependence on costly real-world tests,and improves sim-to-real transfer performance.Both simulation and real-world experiments were conducted,demonstrating the effectiveness of the system in improving AUV control strategies and supporting the transition from simulation to real-world operations in underwater environments.The results highlight the scalability and robustness of the proposed system,as evidenced by the TD3 controller achieving 25%less oscillation than the adaptive fuzzy controller when reaching the target depth,thereby demonstrating superior stability,accuracy,and potential for broader and more complex autonomous underwater tasks.
基金supported by National Natural Science Foundation of China No.62231012Natural Science Foundation for Outstanding Young Scholars of Heilongjiang Province under Grant YQ2020F001Heilongjiang Province Postdoctoral General Foundation under Grant AUGA4110004923.
文摘Low earth orbit(LEO)satellites with wide coverage can carry the mobile edge computing(MEC)servers with powerful computing capabilities to form the LEO satellite edge computing system,providing computing services for the global ground users.In this paper,the computation offloading problem and resource allocation problem are formulated as a mixed integer nonlinear program(MINLP)problem.This paper proposes a computation offloading algorithm based on deep deterministic policy gradient(DDPG)to obtain the user offloading decisions and user uplink transmission power.This paper uses the convex optimization algorithm based on Lagrange multiplier method to obtain the optimal MEC server resource allocation scheme.In addition,the expression of suboptimal user local CPU cycles is derived by relaxation method.Simulation results show that the proposed algorithm can achieve excellent convergence effect,and the proposed algorithm significantly reduces the system utility values at considerable time cost compared with other algorithms.
基金supported by the Gansu Province Key Research and Development Plan(No.23YFGA0062)Gansu Provin-cial Innovation Fund(No.2022A-215).
文摘With the rapid growth of connected devices,traditional edge-cloud systems are under overload pressure.Using mobile edge computing(MEC)to assist unmanned aerial vehicles(UAVs)as low altitude platform stations(LAPS)for communication and computation to build air-ground integrated networks(AGINs)offers a promising solution for seamless network coverage of remote internet of things(IoT)devices in the future.To address the performance demands of future mobile devices(MDs),we proposed an MEC-assisted AGIN system.The goal is to minimize the long-term computational overhead of MDs by jointly optimizing transmission power,flight trajecto-ries,resource allocation,and offloading ratios,while utilizing non-orthogonal multiple access(NOMA)to improve device connectivity of large-scale MDs and spectral efficiency.We first designed an adaptive clustering scheme based on K-Means to cluster MDs and established commu-nication links,improving efficiency and load balancing.Then,considering system dynamics,we introduced a partial computation offloading algorithm based on multi-agent deep deterministic pol-icy gradient(MADDPG),modeling the multi-UAV computation offloading problem as a Markov decision process(MDP).This algorithm optimizes resource allocation through centralized training and distributed execution,reducing computational overhead.Simulation results show that the pro-posed algorithm not only converges stably but also outperforms other benchmark algorithms in han-dling complex scenarios with multiple devices.
文摘The low Earth orbit(LEO)satellite networks have outstanding advantages such as wide coverage area and not being limited by geographic environment,which can provide a broader range of communication services and has become an essential supplement to the terrestrial network.However,the dynamic changes and uneven distribution of satellite network traffic inevitably bring challenges to multipath routing.Even worse,the harsh space environment often leads to incomplete collection of network state data for routing decision-making,which further complicates this challenge.To address this problem,this paper proposes a state-incomplete intelligent dynamic multipath routing algorithm(SIDMRA)to maximize network efficiency even with incomplete state data as input.Specifically,we model the multipath routing problem as a markov decision process(MDP)and then combine the deep deterministic policy gradient(DDPG)and the K shortest paths(KSP)algorithm to solve the optimal multipath routing policy.We use the temporal correlation of the satellite network state to fit the incomplete state data and then use the message passing neuron network(MPNN)for data enhancement.Simulation results show that the proposed algorithm outperforms baseline algorithms regarding average end-to-end delay and packet loss rate and performs stably under certain missing rates of state data.
基金supported in part by the Hubei Provincial Science and Technology Major Project of China(Grant No.2020AEA011)in part by the National Ethnic Affairs Commission of the People’s Republic of China(Training Program for Young and Middle-aged Talents)(No:MZR20007)+4 种基金in part by the National Natural Science Foundation of China(Grant No.61902437)in part by the Hubei Provincial Natural Science Foundation of China(Grant No.2020CFB629)in part by the Application Foundation Frontier Project of Wuhan Science and Technology Program(Grant No.2020020601012267)in part by the Fundamental Research Funds for the Central Universities,South-Central MinZu University(No:CZQ21026)in part by the Special Project on Regional Collaborative Innovation of Xinjiang Uygur Autonomous Region(Plan to Aid Xinjiang with Science and Technology)(2022E02035)。
文摘The path planning of Unmanned Aerial Vehicle(UAV)is a critical issue in emergency communication and rescue operations,especially in adversarial urban environments.Due to the continuity of the flying space,complex building obstacles,and the aircraft's high dynamics,traditional algorithms cannot find the optimal collision-free flying path between the UAV station and the destination.Accordingly,in this paper,we study the fast UAV path planning problem in a 3D urban environment from a source point to a target point and propose a Three-Step Experience Buffer Deep Deterministic Policy Gradient(TSEB-DDPG)algorithm.We first build the 3D model of a complex urban environment with buildings and project the 3D building surface into many 2D geometric shapes.After transformation,we propose the Hierarchical Learning Particle Swarm Optimization(HL-PSO)to obtain the empirical path.Then,to ensure the accuracy of the obtained paths,the empirical path,the collision information and fast transition information are stored in the three experience buffers of the TSEB-DDPG algorithm as dynamic guidance information.The sampling ratio of each buffer is dynamically adapted to the training stages.Moreover,we designed a reward mechanism to improve the convergence speed of the DDPG algorithm for UAV path planning.The proposed TSEB-DDPG algorithm has also been compared to three widely used competitors experimentally,and the results show that the TSEB-DDPG algorithm can archive the fastest convergence speed and the highest accuracy.We also conduct experiments in real scenarios and compare the real path planning obtained by the HL-PSO algorithm,DDPG algorithm,and TSEB-DDPG algorithm.The results show that the TSEBDDPG algorithm can archive almost the best in terms of accuracy,the average time of actual path planning,and the success rate.
基金supported by Princess Nourah Bint Abdulrahman University Researchers Supporting Project number(PNURSP2024R135)Princess Nourah Bint Abdulrahman University,Riyadh,Saudi Arabia.
文摘The popularity of quadrotor Unmanned Aerial Vehicles(UAVs)stems from their simple propulsion systems and structural design.However,their complex and nonlinear dynamic behavior presents a significant challenge for control,necessitating sophisticated algorithms to ensure stability and accuracy in flight.Various strategies have been explored by researchers and control engineers,with learning-based methods like reinforcement learning,deep learning,and neural networks showing promise in enhancing the robustness and adaptability of quadrotor control systems.This paper investigates a Reinforcement Learning(RL)approach for both high and low-level quadrotor control systems,focusing on attitude stabilization and position tracking tasks.A novel reward function and actor-critic network structures are designed to stimulate high-order observable states,improving the agent’s understanding of the quadrotor’s dynamics and environmental constraints.To address the challenge of RL hyper-parameter tuning,a new framework is introduced that combines Simulated Annealing(SA)with a reinforcement learning algorithm,specifically Simulated Annealing-Twin Delayed Deep Deterministic Policy Gradient(SA-TD3).This approach is evaluated for path-following and stabilization tasks through comparative assessments with two commonly used control methods:Backstepping and Sliding Mode Control(SMC).While the implementation of the well-trained agents exhibited unexpected behavior during real-world testing,a reduced neural network used for altitude control was successfully implemented on a Parrot Mambo mini drone.The results showcase the potential of the proposed SA-TD3 framework for real-world applications,demonstrating improved stability and precision across various test scenarios and highlighting its feasibility for practical deployment.
文摘Plug-in Hybrid Electric Vehicles(PHEVs)represent an innovative breed of transportation,harnessing diverse power sources for enhanced performance.Energy management strategies(EMSs)that coordinate and control different energy sources is a critical component of PHEV control technology,directly impacting overall vehicle performance.This study proposes an improved deep reinforcement learning(DRL)-based EMSthat optimizes realtime energy allocation and coordinates the operation of multiple power sources.Conventional DRL algorithms struggle to effectively explore all possible state-action combinations within high-dimensional state and action spaces.They often fail to strike an optimal balance between exploration and exploitation,and their assumption of a static environment limits their ability to adapt to changing conditions.Moreover,these algorithms suffer from low sample efficiency.Collectively,these factors contribute to convergence difficulties,low learning efficiency,and instability.To address these challenges,the Deep Deterministic Policy Gradient(DDPG)algorithm is enhanced using entropy regularization and a summation tree-based Prioritized Experience Replay(PER)method,aiming to improve exploration performance and learning efficiency from experience samples.Additionally,the correspondingMarkovDecision Process(MDP)is established.Finally,an EMSbased on the improvedDRLmodel is presented.Comparative simulation experiments are conducted against rule-based,optimization-based,andDRL-based EMSs.The proposed strategy exhibitsminimal deviation fromthe optimal solution obtained by the dynamic programming(DP)strategy that requires global information.In the typical driving scenarios based onWorld Light Vehicle Test Cycle(WLTC)and New European Driving Cycle(NEDC),the proposed method achieved a fuel consumption of 2698.65 g and an Equivalent Fuel Consumption(EFC)of 2696.77 g.Compared to the DP strategy baseline,the proposed method improved the fuel efficiency variances(FEV)by 18.13%,15.1%,and 8.37%over the Deep QNetwork(DQN),Double DRL(DDRL),and original DDPG methods,respectively.The observational outcomes demonstrate that the proposed EMS based on improved DRL framework possesses good real-time performance,stability,and reliability,effectively optimizing vehicle economy and fuel consumption.
基金supported by the National Natural Science Foundation of China(62003021,62373304)Industry-University-Research Innovation Fund for Chinese Universities(2021ZYA02009)+2 种基金Shaanxi Qinchuangyuan High-level Innovation and Entrepreneurship Talent Project(OCYRCXM-2022-136)Shaanxi Association for Science and Technology Youth Talent Support Program(XXJS202218)the Fundamental Research Funds for the Central Universities(D5000210830).
文摘In consideration of the field-of-view(FOV)angle con-straint,this study focuses on the guidance problem with impact time control.A deep reinforcement learning guidance method is given for the missile to obtain the desired impact time and meet the demand of FOV angle constraint.On basis of the framework of the proportional navigation guidance,an auxiliary control term is supplemented by the distributed deep deterministic policy gradient algorithm,in which the reward functions are developed to decrease the time-to-go error and improve the terminal guid-ance accuracy.The numerical simulation demonstrates that the missile governed by the presented deep reinforcement learning guidance law can hit the target successfully at appointed arrival time.
Funding: Co-supported by the National Natural Science Foundation of China (Nos. 62003267, 61573285); the Aeronautical Science Foundation of China (ASFC) (No. 20175553027); and the Natural Science Basic Research Plan in Shaanxi Province of China (No. 2020JQ-220).
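The guidance structure described above, a proportional-navigation baseline plus a learned auxiliary term, can be sketched as follows. The navigation ratio, reward weights, and symbol names are illustrative assumptions rather than the paper's values.

```python
# Sketch of PN guidance augmented by an RL auxiliary term, with a reward
# shaped for impact-time control under an FOV constraint. All constants
# are illustrative assumptions.

N = 3.0  # navigation ratio (assumed)

def guidance_command(closing_speed, los_rate, a_rl):
    """PN acceleration command plus the auxiliary term a_rl produced by
    the trained actor network."""
    return N * closing_speed * los_rate + a_rl

def reward(t_go_est, t_go_desired, fov_angle, fov_max,
           w_time=1.0, w_fov=10.0):
    """Penalize time-to-go error; penalize FOV-limit violations heavily
    so the learned term respects the seeker constraint."""
    r = -w_time * abs(t_go_est - t_go_desired)
    if abs(fov_angle) > fov_max:
        r -= w_fov * (abs(fov_angle) - fov_max)
    return r
```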
Abstract: Unmanned Aerial Vehicles (UAVs) play a vital role in military warfare. In a variety of battlefield mission scenarios, UAVs are required to fly safely to designated locations without human intervention. Therefore, finding a suitable method to solve the UAV Autonomous Motion Planning (AMP) problem can improve the success rate of UAV missions to a certain extent. In recent years, many studies have applied Deep Reinforcement Learning (DRL) methods to the AMP problem and achieved good results. From the perspective of sampling, this paper designs a sampling method with double screening, combines it with the Deep Deterministic Policy Gradient (DDPG) algorithm, and proposes the Relevant Experience Learning-DDPG (REL-DDPG) algorithm. REL-DDPG uses a Prioritized Experience Replay (PER) mechanism to break the correlation of consecutive experiences in the experience pool, selects the experiences most similar to the current state for learning, drawing on theories from human education, and thereby expands the influence of the learning process on action selection in the current state. All experiments are conducted in a complex unknown simulation environment built from the parameters of a real UAV. The training experiments show that REL-DDPG improves both the convergence speed and the converged result compared with the state-of-the-art DDPG algorithm, while the testing experiments demonstrate the applicability of the algorithm and investigate its performance under different parameter settings.
Funding: Supported by the Key Laboratory of Defense Science and Technology Foundation of Luoyang Electro-optical Equipment Research Institute (6142504200108).
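One plausible reading of the double-screening sampler is sketched below: a first screen by PER priority, then a second screen keeping the transitions whose stored states are closest to the current state. The candidate counts and the Euclidean similarity metric are assumptions for illustration; the paper's exact screening criteria may differ.

```python
import numpy as np

def double_screen(buffer_states, idx_by_priority, current_state,
                  n_candidates=256, batch_size=64):
    """Two-stage sample selection (a sketch, not the authors' code):
    screen 1 keeps the highest-priority transitions, screen 2 keeps the
    ones most similar to the current state."""
    # Screen 1: top candidates by PER priority (indices assumed pre-sorted
    # in descending priority order).
    cand = np.asarray(idx_by_priority[:n_candidates])
    # Screen 2: similarity to the current state via L2 distance.
    d = np.linalg.norm(buffer_states[cand] - current_state, axis=1)
    return cand[np.argsort(d)[:batch_size]]  # indices of the mini-batch
```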
Abstract: The ever-changing battlefield environment requires robust and adaptive technologies integrated into a reliable platform. Unmanned combat aerial vehicles (UCAVs) aim to integrate such advanced technologies while increasing the tactical capabilities of combat aircraft. As a research object, a common UCAV uses a neural-network fitting strategy to obtain attack-area values. However, this simple strategy can cope neither with complex environmental changes nor with autonomously optimizing decision-making. To solve this problem, this paper proposes a new deep deterministic policy gradient (DDPG) strategy based on deep reinforcement learning for fitting the attack areas of UCAVs on the future battlefield. Simulation results show that the new DDPG algorithm improves the autonomy and environmental adaptability of UCAVs on the future battlefield and that the training process converges quickly. With the well-trained deep network, the optimal attack-area values can be obtained in real time throughout the flight.
Funding: Supported by the National Key R&D Program of China (2020YFB0905900): Research on artificial intelligence application of power internet of things.
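For reference, the DDPG machinery that this and several of the surrounding papers build on reduces to the update step sketched below, in generic textbook form (PyTorch); the networks, reward design, and constants here are not the authors'.

```python
import torch
import torch.nn.functional as F

def ddpg_update(actor, critic, actor_t, critic_t,
                opt_a, opt_c, batch, gamma=0.99, tau=0.005):
    """One DDPG step: critic regression, deterministic policy gradient,
    and Polyak-averaged target networks. `batch` is a tuple of tensors
    sampled from the replay buffer."""
    s, a, r, s2, done = batch

    # Critic: regress Q(s, a) toward the bootstrapped target value.
    with torch.no_grad():
        q_target = r + gamma * (1 - done) * critic_t(s2, actor_t(s2))
    critic_loss = F.mse_loss(critic(s, a), q_target)
    opt_c.zero_grad(); critic_loss.backward(); opt_c.step()

    # Actor: ascend the critic's estimate of Q(s, pi(s)).
    actor_loss = -critic(s, actor(s)).mean()
    opt_a.zero_grad(); actor_loss.backward(); opt_a.step()

    # Polyak-average the target networks toward the online networks.
    for t, p in zip(actor_t.parameters(), actor.parameters()):
        t.data.mul_(1 - tau).add_(tau * p.data)
    for t, p in zip(critic_t.parameters(), critic.parameters()):
        t.data.mul_(1 - tau).add_(tau * p.data)
```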
Abstract: The coordinated optimization problem of the electricity-gas-heat integrated energy system (IES) is characterized by strong coupling, non-convexity, and nonlinearity. Centralized optimization methods carry a high communication cost and complex modeling, while traditional numerical iterative solutions cope poorly with uncertainty and solve too slowly to be applied online. For the coordinated optimization problem of the electricity-gas-heat IES, this study constructs a model of the distributed IES with a dynamic distribution factor and transforms the centralized optimization problem into a distributed one in a multi-agent reinforcement learning environment using the multi-agent deep deterministic policy gradient. Introducing the dynamic distribution factor allows the system to account for the impact of real-time supply and demand changes on system optimization, dynamically coordinating different energy sources for complementary utilization and effectively improving system economy. Compared with centralized optimization, the distributed model with multiple decision centers achieves similar results while easing the pressure on system communication. The proposed method considers the dual uncertainty of renewable energy and load during training. Compared with the traditional iterative solution method, it copes better with uncertainty and realizes real-time decision making, which is conducive to online application. Finally, the effectiveness of the proposed method is verified on an example of an IES coupled with three energy hub agents.
Funding: Supported by the National Natural Science Foundation of China (Nos. 61973253 and 62006192).
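The multi-agent setup described above is commonly realized with decentralized actors and centralized critics: each energy-hub agent acts on its own observation, while its critic conditions on all agents' observations and actions during training. A minimal sketch follows, with the agent count and all layer sizes as illustrative assumptions.

```python
import torch
import torch.nn as nn

N_AGENTS, OBS, ACT = 3, 16, 4  # e.g., three energy-hub agents (assumed sizes)

class Actor(nn.Module):
    """Decentralized policy: maps one agent's observation to its dispatch
    action, bounded by tanh."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS, 64), nn.ReLU(),
            nn.Linear(64, ACT), nn.Tanh())

    def forward(self, obs):
        return self.net(obs)

class CentralCritic(nn.Module):
    """Centralized critic Q_i(o_1..o_N, a_1..a_N): sees the joint
    state-action, which stabilizes learning in the non-stationary
    multi-agent environment."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_AGENTS * (OBS + ACT), 128), nn.ReLU(),
            nn.Linear(128, 1))

    def forward(self, all_obs, all_act):
        return self.net(torch.cat([all_obs, all_act], dim=-1))
```

At execution time only the actors are needed, which is what allows each decision center to act on local information while easing communication pressure.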
Abstract: In this paper, a missile terminal guidance law based on a new Deep Deterministic Policy Gradient (DDPG) algorithm is proposed to intercept a maneuvering target equipped with an infrared decoy. First, to deal with the issue that the missile cannot accurately distinguish the target from the decoy, the energy center method is employed to obtain the equivalent energy center (called the virtual target) of the target and decoy, and a model of the missile and the virtual target is established. Then, an improved DDPG algorithm based on a trusted-search strategy is proposed, which significantly improves training efficiency over the original DDPG algorithm. Furthermore, combining the established model, the network obtained by the improved DDPG algorithm, and the reward function, an intelligent missile terminal guidance scheme is proposed. Specifically, a heuristic reward function is designed for training and learning in combat scenarios. Finally, the effectiveness and robustness of the proposed guidance law are verified by Monte Carlo tests, and the simulation results obtained by the proposed scheme are compared with those of other methods to further demonstrate its superior performance.
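The energy center method mentioned above amounts to an intensity-weighted centroid of the target and decoy signatures; here is a minimal sketch, with the inputs and weighting as assumptions rather than the paper's exact formulation.

```python
import numpy as np

def energy_center(positions, intensities):
    """Equivalent energy center (virtual target): the intensity-weighted
    centroid of the target and decoy positions as seen by the seeker."""
    w = np.asarray(intensities, dtype=float)
    p = np.asarray(positions, dtype=float)   # shape (n_sources, 3)
    return (w[:, None] * p).sum(axis=0) / w.sum()

# Example: a decoy four times brighter than the target pulls the
# virtual target toward itself.
vt = energy_center([[0, 0, 0], [100, 0, 0]], [1.0, 4.0])
print(vt)  # -> [80.  0.  0.]
```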
Abstract: Eavesdropping attacks have become one of the most common attacks on networks because they are easy to implement. They not only leak transmitted data but can also develop into other, more harmful attacks. Routing randomization is a relevant research direction in moving target defense and has been proven an effective method of resisting eavesdropping attacks. In this study, we analyzed existing routing randomization methods and found that their security and usability need further improvement. Based on the characteristics of eavesdropping attacks, which are "latent and transferable", a routing randomization defense method based on deep reinforcement learning is proposed. The method realizes routing randomization at packet-level granularity using programmable switches. To improve the security and quality of service of legitimate services in the network, the deep deterministic policy gradient is used to generate random routing schemes, supported by powerful network state awareness. In-band network telemetry provides real-time, accurate, and comprehensive network state awareness for the proposed method. Various experiments show that, compared with other typical routing randomization defenses, the proposed method has clear advantages in security and usability against eavesdropping attacks.
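A minimal sketch of per-packet routing randomization consistent with the description above: a DDPG actor (hypothetical here) scores candidate paths from telemetry-derived state, and each packet samples its route from a softmax over those scores, so no fixed path is exposed to an eavesdropper. The temperature knob and the candidate-path set are assumptions for illustration.

```python
import numpy as np

def sample_route(path_scores, temperature=1.0, rng=np.random.default_rng()):
    """Sample one candidate-path index per packet from a softmax over the
    policy's path scores; higher temperature spreads traffic more evenly
    across routes (more randomization, less QoS optimality)."""
    z = np.asarray(path_scores, dtype=float) / temperature
    p = np.exp(z - z.max())  # numerically stable softmax
    p /= p.sum()
    return rng.choice(len(p), p=p)

# Example: three candidate paths scored by the (hypothetical) actor.
for _ in range(5):
    print(sample_route([2.0, 1.0, 0.5], temperature=1.5))
```

In a deployment, the chosen index would be written into the packet's forwarding metadata on the programmable switch, while the score updates arrive from the learning agent at a coarser timescale.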