Journal Articles
467 articles found
A Dynamic Deceptive Defense Framework for Zero-Day Attacks in IIoT: Integrating Stackelberg Game and Multi-Agent Distributed Deep Deterministic Policy Gradient
1
Authors: Shigen Shen, Xiaojun Ji, Yimeng Liu. Computers, Materials & Continua, 2025, No. 11, pp. 3997-4021 (25 pages)
The Industrial Internet of Things (IIoT) is increasingly vulnerable to sophisticated cyber threats, particularly zero-day attacks that exploit unknown vulnerabilities and evade traditional security measures. To address this critical challenge, this paper proposes a dynamic defense framework named Zero-day-aware Stackelberg Game-based Multi-Agent Distributed Deep Deterministic Policy Gradient (ZSG-MAD3PG). The framework integrates Stackelberg game modeling with the Multi-Agent Distributed Deep Deterministic Policy Gradient (MAD3PG) algorithm and incorporates defensive deception (DD) strategies to achieve adaptive and efficient protection. While conventional methods typically incur considerable resource overhead and exhibit higher latency due to static or rigid defensive mechanisms, the proposed ZSG-MAD3PG framework mitigates these limitations through multi-stage game modeling and adaptive learning, enabling more efficient resource utilization and faster response times. The Stackelberg-based architecture allows defenders to dynamically optimize packet sampling strategies, while attackers adjust their tactics to reach a rapid equilibrium. Furthermore, dynamic deception techniques shorten the time attacks can remain concealed and reduce the overall system burden. A lightweight behavioral fingerprinting detection mechanism further enhances real-time zero-day attack identification within industrial device clusters. ZSG-MAD3PG demonstrates higher true positive rates (TPR) and lower false alarm rates (FAR) than existing methods, while also achieving improved latency, resource efficiency, and stealth adaptability in IIoT zero-day defense scenarios.
Keywords: Industrial Internet of Things; zero-day attacks; Stackelberg game; distributed deep deterministic policy gradient; defensive spoofing; dynamic defense
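The leader-follower structure behind this abstract can be sketched as a toy discretized Stackelberg game: the defender (leader) commits to a packet-sampling rate and the attacker (follower) best-responds with an attack intensity. The payoff coefficients, grids, and function names below are illustrative assumptions, not the paper's model.

```python
def follower_best_response(p, intensities):
    # attacker gains from unsampled traffic but pays a penalty when sampled
    return max(intensities, key=lambda a: a * (1.0 - p) - 2.0 * a * p)

def stackelberg_equilibrium(rates, intensities):
    # the leader anticipates the follower's best response before committing
    best = None
    for p in rates:
        a = follower_best_response(p, intensities)
        u_leader = p * a - 0.3 * p  # detection value minus sampling cost (assumed)
        if best is None or u_leader > best[0]:
            best = (u_leader, p, a)
    return best[1], best[2]

rates = [i / 10 for i in range(11)]
intensities = [i / 10 for i in range(11)]
p_star, a_star = stackelberg_equilibrium(rates, intensities)
```

With these assumed payoffs the defender samples just enough to stay on the profitable side of the attacker's indifference point, which is the qualitative behavior the abstract attributes to the game-based sampling policy.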
Perception Enhanced Deep Deterministic Policy Gradient for Autonomous Driving in Complex Scenarios
2
Authors: Lyuchao Liao, Hankun Xiao, Pengqi Xing, Zhenhua Gan, Youpeng He, Jiajun Wang. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 7, pp. 557-576 (20 pages)
Autonomous driving has witnessed rapid advancement; however, ensuring safe and efficient driving in intricate scenarios remains a critical challenge. In particular, traffic roundabouts pose a set of challenges to autonomous driving due to the unpredictable entry and exit of vehicles, susceptibility to traffic flow bottlenecks, and imperfect data in perceiving environmental information, rendering them a vital issue for the practical application of autonomous driving. To address these traffic challenges, this work focuses on complex multi-lane roundabouts and proposes a Perception Enhanced Deep Deterministic Policy Gradient (PE-DDPG) method for autonomous driving in roundabouts. Specifically, the model incorporates an enhanced variational autoencoder featuring an integrated spatial attention mechanism alongside the Deep Deterministic Policy Gradient framework, enhancing the vehicle's capability to comprehend complex roundabout environments and make decisions. Furthermore, the PE-DDPG model combines a dynamic path optimization strategy for roundabout scenarios, effectively mitigating traffic bottlenecks and augmenting throughput efficiency. Extensive experiments were conducted on the collaborative simulation platform of CARLA and SUMO, and the results show that the proposed PE-DDPG outperforms the baseline methods in terms of the convergence capacity of the training process, the smoothness of driving, and traffic efficiency under diverse traffic flow patterns and penetration rates of autonomous vehicles (AVs). Generally, the proposed PE-DDPG model could be employed for autonomous driving in complex scenarios with imperfect data.
Keywords: autonomous driving; traffic roundabouts; deep deterministic policy gradient; spatial attention mechanisms
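As an illustration of the spatial-attention idea mentioned in the abstract (not the paper's exact module), a minimal NumPy sketch that reweights a feature map with a per-pixel gate derived from channel-wise average and max pooling:

```python
import numpy as np

def spatial_attention(feat):
    # feat: (C, H, W) feature map; pool across channels, gate each pixel in (0, 1)
    avg = feat.mean(axis=0)
    mx = feat.max(axis=0)
    gate = 1.0 / (1.0 + np.exp(-(avg + mx)))  # logistic squashing per pixel
    return feat * gate[None, :, :]            # broadcast the gate over channels

x = np.ones((2, 3, 3))
y = spatial_attention(x)
```

In a learned module the pooled maps would typically pass through a small convolution before the gate; that step is omitted here to keep the sketch self-contained.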
Simultaneous Depth and Heading Control for Autonomous Underwater Vehicle Docking Maneuvers Using Deep Reinforcement Learning within a Digital Twin System
3
Authors: Yu-Hsien Lin, Po-Cheng Chuang, Joyce Yi-Tzu Huang. Computers, Materials & Continua, 2025, No. 9, pp. 4907-4948 (42 pages)
This study proposes an automatic control system for Autonomous Underwater Vehicle (AUV) docking, utilizing a digital twin (DT) environment based on the HoloOcean platform, which integrates six-degree-of-freedom (6-DOF) motion equations and hydrodynamic coefficients to create a realistic simulation. Although conventional model-based and visual servoing approaches often struggle in dynamic underwater environments due to limited adaptability and extensive parameter tuning requirements, deep reinforcement learning (DRL) offers a promising alternative. In the positioning stage, the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm is employed for synchronized depth and heading control, offering stable training, reduced overestimation bias, and superior handling of continuous control compared with other DRL methods. During the searching stage, zig-zag heading motion combined with a state-of-the-art object detection algorithm facilitates docking station localization. For the docking stage, this study proposes an innovative Image-based DDPG (I-DDPG), enhanced and trained in a Unity-MATLAB simulation environment, to achieve visual target tracking. Furthermore, integrating a DT environment enables efficient and safe policy training, reduces dependence on costly real-world tests, and improves sim-to-real transfer performance. Both simulation and real-world experiments were conducted, demonstrating the effectiveness of the system in improving AUV control strategies and supporting the transition from simulation to real-world operations in underwater environments. The results highlight the scalability and robustness of the proposed system, as evidenced by the TD3 controller achieving 25% less oscillation than an adaptive fuzzy controller when reaching the target depth, thereby demonstrating superior stability, accuracy, and potential for broader and more complex autonomous underwater tasks.
Keywords: autonomous underwater vehicle; docking maneuver; digital twin; deep reinforcement learning; twin delayed deep deterministic policy gradient
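Two of the TD3 ingredients credited above, clipped double-Q targets and target-policy smoothing, can be sketched in a few lines. This is a generic illustration under assumed action bounds and noise scales, not the authors' controller:

```python
import numpy as np

rng = np.random.default_rng(0)

def td3_target(reward, q1_next, q2_next, gamma=0.99):
    # clipped double-Q: bootstrap from the smaller of the two target-critic estimates,
    # which is what reduces the overestimation bias mentioned in the abstract
    return reward + gamma * min(q1_next, q2_next)

def smoothed_target_action(mu, noise_std=0.2, noise_clip=0.5, low=-1.0, high=1.0):
    # target policy smoothing: add clipped Gaussian noise, then respect action bounds
    eps = float(np.clip(rng.normal(0.0, noise_std), -noise_clip, noise_clip))
    return float(np.clip(mu + eps, low, high))
```

The third TD3 ingredient, delayed actor updates, amounts to updating the policy only every few critic steps and is omitted here.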
Enhanced Deep Reinforcement Learning Strategy for Energy Management in Plug-in Hybrid Electric Vehicles with Entropy Regularization and Prioritized Experience Replay
4
Authors: Li Wang, Xiaoyong Wang. Energy Engineering (EI), 2024, No. 12, pp. 3953-3979 (27 pages)
Plug-in Hybrid Electric Vehicles (PHEVs) represent an innovative breed of transportation, harnessing diverse power sources for enhanced performance. Energy management strategies (EMSs) that coordinate and control different energy sources are a critical component of PHEV control technology, directly impacting overall vehicle performance. This study proposes an improved deep reinforcement learning (DRL)-based EMS that optimizes real-time energy allocation and coordinates the operation of multiple power sources. Conventional DRL algorithms struggle to effectively explore all possible state-action combinations within high-dimensional state and action spaces. They often fail to strike an optimal balance between exploration and exploitation, and their assumption of a static environment limits their ability to adapt to changing conditions. Moreover, these algorithms suffer from low sample efficiency. Collectively, these factors contribute to convergence difficulties, low learning efficiency, and instability. To address these challenges, the Deep Deterministic Policy Gradient (DDPG) algorithm is enhanced using entropy regularization and a summation-tree-based Prioritized Experience Replay (PER) method, aiming to improve exploration performance and learning efficiency from experience samples. Additionally, the corresponding Markov Decision Process (MDP) is established. Finally, an EMS based on the improved DRL model is presented. Comparative simulation experiments are conducted against rule-based, optimization-based, and DRL-based EMSs. The proposed strategy exhibits minimal deviation from the optimal solution obtained by the dynamic programming (DP) strategy, which requires global information. In typical driving scenarios based on the World Light Vehicle Test Cycle (WLTC) and the New European Driving Cycle (NEDC), the proposed method achieved a fuel consumption of 2698.65 g and an Equivalent Fuel Consumption (EFC) of 2696.77 g. Against the DP baseline, the proposed method improved the fuel efficiency variances (FEV) by 18.13%, 15.1%, and 8.37% over the Deep Q-Network (DQN), Double DRL (DDRL), and original DDPG methods, respectively. The observed outcomes demonstrate that the proposed EMS based on the improved DRL framework possesses good real-time performance, stability, and reliability, effectively optimizing vehicle economy and fuel consumption.
Keywords: plug-in hybrid electric vehicles; deep reinforcement learning; energy management strategy; deep deterministic policy gradient; entropy regularization; prioritized experience replay
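A minimal sum-tree, the data structure behind the summation-tree PER mentioned above, supports O(log n) priority updates and prefix-sum sampling. This is a generic sketch (capacity is restricted to a power of two for simplicity; the paper's buffer management is not specified here):

```python
class SumTree:
    """Binary sum-tree for prioritized experience replay.

    Leaves hold per-transition priorities; each internal node holds the sum
    of its children, so the root is the total priority mass.
    """

    def __init__(self, capacity):
        self.capacity = capacity                 # assumed to be a power of two
        self.tree = [0.0] * (2 * capacity)

    def update(self, idx, priority):
        # write the leaf, then refresh the sums on the path to the root
        pos = idx + self.capacity
        self.tree[pos] = priority
        pos //= 2
        while pos >= 1:
            self.tree[pos] = self.tree[2 * pos] + self.tree[2 * pos + 1]
            pos //= 2

    def total(self):
        return self.tree[1]

    def sample(self, prefix):
        # descend from the root: go left if the left subtree covers `prefix`
        pos = 1
        while pos < self.capacity:
            left = 2 * pos
            if prefix <= self.tree[left]:
                pos = left
            else:
                prefix -= self.tree[left]
                pos = left + 1
        return pos - self.capacity
```

Drawing `prefix` uniformly from `[0, total())` then yields transitions with probability proportional to their priority, which is exactly the PER sampling rule.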
Optimizing the Multi-Objective Discrete Particle Swarm Optimization Algorithm by Deep Deterministic Policy Gradient Algorithm
5
Authors: Sun Yang-Yang, Yao Jun-Ping, Li Xiao-Jun, Fan Shou-Xiang, Wang Zi-Wei. Journal on Artificial Intelligence, 2022, No. 1, pp. 27-35 (9 pages)
Deep deterministic policy gradient (DDPG) has been proven effective in optimizing particle swarm optimization (PSO), but whether DDPG can optimize multi-objective discrete particle swarm optimization (MODPSO) remains to be determined. The present work aims to probe into this topic. Experiments showed that DDPG can not only quickly improve the convergence speed of MODPSO but also overcome the local-optimum problem that MODPSO may suffer from. The research findings are of great significance for the theoretical research and application of MODPSO.
Keywords: deep deterministic policy gradient; multi-objective discrete particle swarm optimization; deep reinforcement learning; machine learning
Real-Time Implementation of Quadrotor UAV Control System Based on a Deep Reinforcement Learning Approach
6
Authors: Taha Yacine Trad, Kheireddine Choutri, Mohand Lagha, Souham Meshoul, Fouad Khenfri, Raouf Fareh, Hadil Shaiba. Computers, Materials & Continua (SCIE, EI), 2024, No. 12, pp. 4757-4786 (30 pages)
The popularity of quadrotor Unmanned Aerial Vehicles (UAVs) stems from their simple propulsion systems and structural design. However, their complex and nonlinear dynamic behavior presents a significant challenge for control, necessitating sophisticated algorithms to ensure stability and accuracy in flight. Various strategies have been explored by researchers and control engineers, with learning-based methods like reinforcement learning, deep learning, and neural networks showing promise in enhancing the robustness and adaptability of quadrotor control systems. This paper investigates a Reinforcement Learning (RL) approach for both high- and low-level quadrotor control systems, focusing on attitude stabilization and position tracking tasks. A novel reward function and actor-critic network structures are designed to stimulate high-order observable states, improving the agent's understanding of the quadrotor's dynamics and environmental constraints. To address the challenge of RL hyper-parameter tuning, a new framework is introduced that combines Simulated Annealing (SA) with a reinforcement learning algorithm, specifically Simulated Annealing-Twin Delayed Deep Deterministic Policy Gradient (SA-TD3). This approach is evaluated on path-following and stabilization tasks through comparative assessments with two commonly used control methods: Backstepping and Sliding Mode Control (SMC). While the implementation of the well-trained agents exhibited unexpected behavior during real-world testing, a reduced neural network used for altitude control was successfully implemented on a Parrot Mambo mini drone. The results showcase the potential of the proposed SA-TD3 framework for real-world applications, demonstrating improved stability and precision across various test scenarios and highlighting its feasibility for practical deployment.
Keywords: deep reinforcement learning; hyper-parameter optimization; path following; quadrotor; twin delayed deep deterministic policy gradient; simulated annealing
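The SA half of the SA-TD3 framework can be illustrated with a generic annealing loop over a toy cost surface standing in for the (expensive) return of a trained agent as a function of two hyper-parameters. The cooling schedule, step size, and toy cost are assumptions, not the paper's settings:

```python
import math
import random

def simulated_annealing(cost, x0, steps=500, t0=1.0, seed=0):
    # generic SA: perturb the candidate, always accept improvements,
    # accept worse moves with Boltzmann probability exp(-delta / temperature)
    rng = random.Random(seed)
    x, c = list(x0), cost(x0)
    best_x, best_c = list(x), c
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-6          # linear cooling schedule
        cand = [xi + rng.gauss(0.0, 0.1) for xi in x]
        cc = cost(cand)
        if cc < c or rng.random() < math.exp((c - cc) / t):
            x, c = cand, cc
            if c < best_c:
                best_x, best_c = list(x), c
    return best_x, best_c

# toy stand-in for "negative return of an agent trained with hyper-parameters h"
cost = lambda h: (h[0] - 0.3) ** 2 + (h[1] - 0.1) ** 2
best, val = simulated_annealing(cost, [1.0, 1.0])
```

In the paper's setting each cost evaluation would correspond to training or evaluating a TD3 agent, which is why SA's small number of evaluations per step is attractive.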
Deep reinforcement learning guidance with impact time control
7
Authors: LI Guofei, LI Shituo, LI Bohao, WU Yunjie. Journal of Systems Engineering and Electronics (CSCD), 2024, No. 6, pp. 1594-1603 (10 pages)
In consideration of the field-of-view (FOV) angle constraint, this study focuses on the guidance problem with impact time control. A deep reinforcement learning guidance method is given for the missile to obtain the desired impact time and meet the demand of the FOV angle constraint. On the basis of the proportional navigation guidance framework, an auxiliary control term is supplemented by the distributed deep deterministic policy gradient algorithm, in which the reward functions are developed to decrease the time-to-go error and improve the terminal guidance accuracy. Numerical simulation demonstrates that a missile governed by the presented deep reinforcement learning guidance law can hit the target successfully at the appointed arrival time.
Keywords: impact time; deep reinforcement learning; guidance law; field-of-view (FOV) angle; deep deterministic policy gradient
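The guidance structure described, proportional navigation plus a learned auxiliary term, reduces to a single line; here the DDPG-trained correction is represented by a placeholder `bias` argument (an illustrative sketch with assumed units, not the paper's law):

```python
def guidance_accel(nav_gain, closing_speed, los_rate, bias):
    # proportional navigation baseline: a = N * Vc * lambda_dot,
    # plus an auxiliary term that the paper trains with a DDPG agent
    return nav_gain * closing_speed * los_rate + bias

# e.g. N = 3, closing speed 300 m/s, LOS rate 0.01 rad/s, learned bias 0.5 m/s^2
a_cmd = guidance_accel(3.0, 300.0, 0.01, 0.5)
```

During training, the reward would penalize time-to-go error and terminal miss distance, shaping `bias` so the combined command meets the appointed impact time.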
Optimization of plunger lift working systems using reinforcement learning for coupled wellbore/reservoir
8
Authors: Zhi-Sheng Xing, Guo-Qing Han, You-Liang Jia, Wei Tian, Hang-Fei Gong, Wen-Bo Jiang, Pei-Dong Mai, Xing-Yuan Liang. Petroleum Science, 2025, No. 5, pp. 2154-2168 (15 pages)
In the mid-to-late stages of gas reservoir development, liquid loading in gas wells becomes a common challenge. Plunger lift, as an intermittent production technique, is widely used for deliquification in gas wells. With the advancement of big data and artificial intelligence, the future of oil and gas field development is trending towards intelligent, unmanned, and automated operations. Currently, the optimization of plunger lift working systems is primarily based on expert experience and manual control, focusing mainly on the success of the plunger lift without adequately considering the impact of different working systems on gas production. Additionally, liquid loading in gas wells is a dynamic process, and the intermittent nature of plunger lift requires accurate modeling; using constant inflow dynamics to describe reservoir flow introduces significant errors. To address these challenges, this study establishes a coupled wellbore-reservoir model for plunger lift wells and validates the computed wellhead pressure against field measurements. Building on this model, a novel optimization control algorithm based on the deep deterministic policy gradient (DDPG) framework is proposed. The algorithm aims to optimize plunger lift working systems to balance overall reservoir pressure, stabilize gas-water ratios, and maximize gas production. Through simulation experiments in three different production optimization scenarios, the effectiveness of reinforcement learning algorithms (including RL, PPO, DQN, and the proposed DDPG) and traditional optimization algorithms (including GA, PSO, and Bayesian optimization) in enhancing production efficiency is compared. The results demonstrate that the coupled model provides highly accurate calculations and can precisely describe the transient production of wellbore and gas reservoir systems. The proposed DDPG algorithm achieves the highest reward value during training with minimal error, leading to a potential increase in cumulative gas production of up to 5% and cumulative liquid production of 252%. The DDPG algorithm exhibits robustness across different optimization scenarios, showcasing excellent adaptability and generalization capabilities.
Keywords: plunger lift; liquid loading; deliquification; reinforcement learning; deep deterministic policy gradient (DDPG); artificial intelligence
Relevant experience learning: A deep reinforcement learning method for UAV autonomous motion planning in complex unknown environments (Cited: 21)
9
Authors: Zijian HU, Xiaoguang GAO, Kaifang WAN, Yiwei ZHAI, Qianglong WANG. Chinese Journal of Aeronautics (SCIE, EI, CAS, CSCD), 2021, No. 12, pp. 187-204 (18 pages)
Unmanned Aerial Vehicles (UAVs) play a vital role in military warfare. In a variety of battlefield mission scenarios, UAVs are required to fly safely to designated locations without human intervention. Therefore, finding a suitable method to solve the UAV Autonomous Motion Planning (AMP) problem can improve the success rate of UAV missions to a certain extent. In recent years, many studies have used Deep Reinforcement Learning (DRL) methods to address the AMP problem and have achieved good results. From the perspective of sampling, this paper designs a sampling method with double screening, combines it with the Deep Deterministic Policy Gradient (DDPG) algorithm, and proposes the Relevant Experience Learning-DDPG (REL-DDPG) algorithm. The REL-DDPG algorithm uses a Prioritized Experience Replay (PER) mechanism to break the correlation of continuous experiences in the experience pool, finds the experiences most similar to the current state to learn from, following theories in human education, and expands the influence of the learning process on action selection in the current state. All experiments are conducted in a complex unknown simulation environment constructed from the parameters of a real UAV. Training experiments show that REL-DDPG improves both the convergence speed and the converged result compared with the state-of-the-art DDPG algorithm, while testing experiments show the applicability of the algorithm and investigate its performance under different parameter conditions.
Keywords: autonomous motion planning (AMP); deep deterministic policy gradient (DDPG); deep reinforcement learning (DRL); sampling method; UAV
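The double-screening sampler described above can be sketched as a two-stage filter: first screen by priority (PER-style), then keep the transitions whose states are most similar to the current state. Array shapes, the Euclidean similarity metric, and the screening sizes are assumptions:

```python
import numpy as np

def double_screen(buffer_states, priorities, current, m=8, k=4):
    # screen 1: keep the m highest-priority transitions (PER-style pre-filter)
    pre = np.argsort(priorities)[-m:]
    # screen 2: among those, keep the k states closest to the current state
    d = np.linalg.norm(buffer_states[pre] - current, axis=1)
    return pre[np.argsort(d)[:k]]

states = np.array([[0.0], [1.0], [2.0], [3.0], [10.0]])
priorities = np.array([5.0, 4.0, 3.0, 2.0, 1.0])
chosen = double_screen(states, priorities, np.array([0.0]), m=3, k=2)
```

The selected minibatch then feeds the ordinary DDPG update, so the agent learns preferentially from experiences relevant to its present situation.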
Deep reinforcement learning and its application in autonomous fitting optimization for attack areas of UCAVs (Cited: 14)
10
Authors: LI Yue, QIU Xiaohui, LIU Xiaodong, XIA Qunli. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2020, No. 4, pp. 734-742 (9 pages)
The ever-changing battlefield environment requires the use of robust and adaptive technologies integrated into a reliable platform. Unmanned combat aerial vehicles (UCAVs) aim to integrate such advanced technologies while increasing the tactical capabilities of combat aircraft. As a research object, a common UCAV uses a neural network fitting strategy to obtain values of attack areas. However, this simple strategy cannot cope with complex environmental changes or autonomously optimize decision-making problems. To solve this, the paper proposes a new deep deterministic policy gradient (DDPG) strategy based on deep reinforcement learning for the attack area fitting of UCAVs on the future battlefield. Simulation results show that the autonomy and environmental adaptability of UCAVs will be improved with the new DDPG algorithm, and the training process converges quickly. The optimal values of attack areas can be obtained in real time during the whole flight with the well-trained deep network.
Keywords: attack area; neural network; deep deterministic policy gradient (DDPG); unmanned combat aerial vehicle (UCAV)
Moving target defense of routing randomization with deep reinforcement learning against eavesdropping attack (Cited: 5)
11
Authors: Xiaoyu Xu, Hao Hu, Yuling Liu, Jinglei Tan, Hongqi Zhang, Haotian Song. Digital Communications and Networks (SCIE, CSCD), 2022, No. 3, pp. 373-387 (15 pages)
Eavesdropping attacks have become one of the most common attacks on networks because of their easy implementation. They not only lead to transmission data leakage but can also develop into other, more harmful attacks. Routing randomization is a relevant research direction for moving target defense and has been proven to be an effective method to resist eavesdropping attacks. To counter such attacks, this study analyzed existing routing randomization methods and found that their security and usability need further improvement. According to the characteristics of eavesdropping attacks, which are latent and transferable, a routing randomization defense method based on deep reinforcement learning is proposed. The method realizes routing randomization at packet-level granularity using programmable switches. To improve the security and quality of service of legitimate services in networks, the deep deterministic policy gradient is used to generate random routing schemes with support from powerful network state awareness. In-band network telemetry provides real-time, accurate, and comprehensive network state awareness for the proposed method. Various experiments show that, compared with other typical routing randomization defenses, the proposed method has obvious advantages in security and usability against eavesdropping attacks.
Keywords: routing randomization; moving target defense; deep reinforcement learning; deep deterministic policy gradient
Distributed optimization of an electricity-gas-heat integrated energy system with multi-agent deep reinforcement learning (Cited: 5)
12
Authors: Lei Dong, Jing Wei, Hao Lin, Xinying Wang. Global Energy Interconnection (EI, CAS, CSCD), 2022, No. 6, pp. 604-617 (14 pages)
The coordinated optimization problem of the electricity-gas-heat integrated energy system (IES) is characterized by strong coupling, non-convexity, and nonlinearity. Centralized optimization incurs high communication cost and complex modeling, while traditional numerical iterative solutions cannot handle uncertainty or solution efficiency well, making them difficult to apply online. For this problem, we constructed a model of the distributed IES with a dynamic distribution factor and transformed the centralized optimization problem into a distributed optimization problem in a multi-agent reinforcement learning environment using the multi-agent deep deterministic policy gradient. Introducing the dynamic distribution factor allows the system to consider the impact of real-time supply and demand changes on system optimization, dynamically coordinating different energy sources for complementary utilization and effectively improving system economy. Compared with centralized optimization, the distributed model with multiple decision centers achieves similar results while easing the pressure on system communication. The proposed method considers the dual uncertainty of renewable energy and load during training. Compared with traditional iterative solution methods, it copes better with uncertainty and realizes real-time decision making, which is conducive to online application. Finally, we verify the effectiveness of the proposed method on an example IES coupled with three energy hub agents.
Keywords: integrated energy system; multi-agent system; distributed optimization; multi-agent deep deterministic policy gradient; real-time optimization decision
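The multi-agent DDPG setup used here follows the usual centralized-training, decentralized-execution pattern: each critic sees the joint state-action of all agents, while each actor sees only its local observation. A shape-level sketch with random linear stand-ins for the networks (all dimensions and the three-agent count are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

n_agents, obs_dim, act_dim = 3, 4, 2
W_actor = rng.normal(size=(act_dim, obs_dim))               # local policy (shared here)
w_critic = rng.normal(size=n_agents * (obs_dim + act_dim))  # centralized Q input

def act(obs):
    # decentralized execution: an actor consumes only its own observation
    return np.tanh(W_actor @ obs)

def centralized_q(all_obs, all_acts):
    # centralized training: the critic consumes every agent's obs and action
    joint = np.concatenate([np.concatenate([o, a]) for o, a in zip(all_obs, all_acts)])
    return float(w_critic @ joint)

obs = [rng.normal(size=obs_dim) for _ in range(n_agents)]
acts = [act(o) for o in obs]
q = centralized_q(obs, acts)
```

In the paper's setting each energy hub agent would be one such actor, with the joint critic only needed during training, which is what keeps online operation distributed.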
Full-model-free Adaptive Graph Deep Deterministic Policy Gradient Model for Multi-terminal Soft Open Point Voltage Control in Distribution Systems (Cited: 2)
13
Authors: Huayi Wu, Zhao Xu, Minghao Wang, Youwei Jia. Journal of Modern Power Systems and Clean Energy (CSCD), 2024, No. 6, pp. 1893-1904 (12 pages)
High penetration of renewable energy sources (RESs) induces sharply fluctuating feeder power, leading to voltage deviation in active distribution systems. To prevent voltage violations, multi-terminal soft open points (M-SOPs) have been integrated into distribution systems to enhance voltage control flexibility. However, M-SOP voltage control recalculated in real time cannot adapt to the rapid fluctuations of photovoltaic (PV) power, fundamentally limiting the voltage controllability of M-SOPs. To address this issue, a full-model-free adaptive graph deep deterministic policy gradient (FAG-DDPG) model is proposed for M-SOP voltage control. Specifically, an attention-based adaptive graph convolutional network (AGCN) is leveraged to extract the complex correlation features of nodal information and improve policy learning. The AGCN-based surrogate model is then trained to replace the power flow calculation, achieving model-free control. Furthermore, the deep deterministic policy gradient (DDPG) algorithm allows the FAG-DDPG model to learn an optimal M-SOP control strategy through continuous interaction with the AGCN-based surrogate model. Numerical tests on modified IEEE 33-node, 123-node, and a real 76-node distribution system demonstrate the effectiveness and generalization ability of the proposed FAG-DDPG model.
Keywords: soft open point; graph attention; graph convolutional network; reinforcement learning; voltage control; distribution system; deep deterministic policy gradient
RIS-Assisted UAV-D2D Communications Exploiting Deep Reinforcement Learning
14
Authors: YOU Qian, XU Qian, YANG Xin, ZHANG Tao, CHEN Ming. ZTE Communications, 2023, No. 2, pp. 61-69 (9 pages)
Device-to-device (D2D) communications underlying cellular networks enabled by unmanned aerial vehicles (UAVs) have been regarded as a promising technique for next-generation communications. To mitigate the strong interference caused by line-of-sight air-to-ground channels, we deploy a reconfigurable intelligent surface (RIS) to rebuild the wireless channels. A joint optimization problem over the transmit power of the UAV, the transmit power of the D2D users, and the RIS phase configuration is investigated to maximize the achievable rate of D2D users while satisfying the quality-of-service (QoS) requirement of cellular users. Due to the high channel dynamics and the coupling among cellular users, the RIS, and the D2D users, finding a proper solution is challenging. Thus, a RIS softmax deep double deterministic (RIS-SD3) policy gradient method is proposed, which smooths the optimization space and reduces the number of local optima. Specifically, the SD3 algorithm maximizes the agent's reward by training it to maximize the value function after the softmax operator is introduced. Simulation results show that the proposed RIS-SD3 algorithm can significantly improve the rate of the D2D users while controlling the interference to the cellular user. Moreover, RIS-SD3 is more robust than the twin delayed deep deterministic (TD3) policy gradient algorithm in a dynamic environment.
Keywords: device-to-device communications; reconfigurable intelligent surface; deep reinforcement learning; softmax deep double deterministic policy gradient
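The softmax operator at the heart of SD3 replaces the hard max over action values with a temperature-weighted average, smoothly interpolating between the mean (low temperature coefficient) and the max (high coefficient). A minimal sketch; the temperature values are assumptions:

```python
import numpy as np

def softmax_value(qs, beta=5.0):
    # softmax operator over Q-values: weights exp(beta * Q), numerically stabilized
    w = np.exp(beta * (qs - qs.max()))
    return float((w / w.sum()) @ qs)

qs = np.array([1.0, 2.0, 3.0])
```

Because the result is always between the mean and the max of the Q-values, it softens the over-optimistic max in the Bellman target, which is the smoothing effect the abstract attributes to SD3.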
Dynamic Offloading Cost Optimization for the Internet of Vehicles Based on Deep Reinforcement Learning
15
Authors: 赵珊, 贾宗璞, 朱小丽, 庞晓艳, 谷坤源. 河南理工大学学报(自然科学版) (Journal of Henan Polytechnic University (Natural Science), PKU Core), 2025, No. 6, pp. 191-200 (10 pages)
Objective: To address the key problems of task offloading and resource allocation in Internet of Vehicles (IoV) networks with imperfect channels and to reduce computation cost. Methods: Incorporating imperfect-channel characteristics, the basic IoV task-offloading environment is abstracted; the task offloading ratio, power selection, and server resource allocation are jointly optimized, and a model minimizing the long-term average cost of all users is established. A dynamic offloading optimization scheme based on deep reinforcement learning is adopted and, considering the continuity of the decision variables, an optimized deep deterministic policy gradient algorithm, SP-DDPG (deep deterministic policy gradient with importance sampling and prioritized experience replay), is proposed to solve the model. The behavior of SP-DDPG under single-variable variations is compared with existing deep reinforcement learning methods using two key metrics: average offloading cost and number of dropped tasks. Results: Compared with the full-offloading algorithms F-DDPG and DDQN, the proposed algorithm reduces the average task offloading cost by about 36.13% and 44.02% and the number of dropped tasks by at least 4.38% and 9.76%, respectively; compared with the partial-offloading DDPG algorithm, the average offloading cost and the number of dropped tasks decrease by 13.34% and 3.17%, respectively. Experimental results are averaged over multiple runs (delay-energy trade-off factor ω = 0.5, channel estimation accuracy ρ = 0.95), showing good reliability. Conclusion: In complex, time-varying, unstable IoV environments, the proposed SP-DDPG algorithm achieves lower task computation cost and better task processing performance than several conventional deep reinforcement learning algorithms.
Keywords: Internet of Vehicles; partial offloading; resource allocation; deep deterministic policy gradient; imperfect channel
Energy Efficiency Optimization of a RIS-Assisted UAV Communication System Based on an Improved TD3 Algorithm
16
Authors: 王翊, 邓毓, 许耀华, 蒋芳, 江福林, 胡艳军. 西安电子科技大学学报 (Journal of Xidian University, PKU Core), 2025, No. 4, pp. 226-234
Considering a reconfigurable intelligent surface (RIS)-assisted unmanned aerial vehicle (UAV) communication system with multiple mobile users, the impact of the UAV's flight energy consumption on system energy efficiency is studied, and the UAV trajectory, active beamforming, and RIS phase-shift design are jointly optimized to improve energy efficiency. Because the objective function is non-convex and the optimization variables are coupled, the problem is difficult for conventional algorithms to solve directly. A Gaussian-distribution twin delayed deep deterministic policy gradient algorithm (GD-TD3), built on the twin delayed deep deterministic policy gradient (TD3) method, is therefore proposed to jointly optimize the UAV trajectory together with active beamforming and RIS passive beamforming, improving the total system data rate and long-term energy efficiency. The algorithm modifies the original network structure of a dual-agent framework and models the mobility of multiple users, separately optimizing the UAV trajectory and the active/passive beamforming of the UAV and RIS. Simulation results show that, compared with other algorithms, GD-TD3 performs better in improving system energy efficiency, with gains in both convergence speed and convergence stability.
Keywords: reconfigurable intelligent surface; UAV communication; trajectory optimization; twin delayed deep deterministic policy gradient
Research on High-Precision Robot Motion Control Based on a DDPG-PID Control Algorithm (Cited: 1)
17
Authors: 赵坤灿, 朱荣. 计算机测量与控制 (Computer Measurement & Control), 2025, No. 7, pp. 171-179
As industrial automation, logistics handling, and medical assistance place higher demands on robot control accuracy, ensuring precise motion control has become essential. High-precision motion control of a four-wheeled robot is studied: the deep deterministic policy gradient algorithm is optimized with an immediate-reward priority mechanism and a temporal-difference-error priority mechanism, and a high-precision system containing two proportional-integral-derivative (PID) controllers is designed. On the basis of a chassis kinematics model, independent PID controllers are designed for the x and y directions, and the optimization algorithm adaptively tunes the controller parameters. In tests, the tracking error of the optimized algorithm in the x direction is 0.0976 m, about 9.76% lower than before optimization; in the y direction the tracking error is 0.1088 m, roughly 48.0% lower than that of the plain PID controller. The designed control system meets practical robot motion-control requirements, with steady-state and dynamic errors of 0.02 and 0.05, respectively. The small system error and high control accuracy make it suitable for fine control tasks and offer a new technical approach for high-precision robot motion control.
Keywords: robot; PID; DDPG; precision; control system
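A minimal discrete PID of the kind tuned in this entry; in the paper's scheme the DDPG agent would output the gains (kp, ki, kd), which are fixed constants in this sketch (the sample time and gain values are assumptions):

```python
class PID:
    """Discrete PID controller: u = kp*e + ki*integral(e) + kd*de/dt."""

    def __init__(self, kp, ki, kd, dt=0.01):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        self.integral += error * self.dt                  # accumulate I term
        derivative = (error - self.prev_error) / self.dt  # finite-difference D term
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid_x = PID(2.0, 0.0, 0.0)  # a pure-P example; the RL agent would supply ki, kd too
```

Running one such controller per axis (x and y), with the agent adapting the gains online, mirrors the two-controller structure the abstract describes.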
Research on Secure Privacy Energy Efficiency in UAV-Assisted Federated Edge Learning Communication Systems
18
Authors: 卢为党, 冯凯, 丁雨, 李博, 赵楠. 电子与信息学报 (Journal of Electronics & Information Technology, PKU Core), 2025, No. 5, pp. 1322-1331
Unmanned aerial vehicle (UAV)-assisted federated edge learning can effectively address data silos and data-leakage risks on terminal devices. However, an eavesdropper may exploit the model updates in federated edge learning to recover the devices' original private data, posing a serious threat to system privacy. To overcome this challenge, this paper proposes an effective secure aggregation and resource optimization scheme for UAV-assisted federated edge learning communication systems. Specifically, each terminal device trains a local model on its own data to update the parameters and sends them to the global UAV, which aggregates them into new global model parameters. The eavesdropper attempts to recover the devices' original data by intercepting the transmitted parameter signals. The scheme maximizes the secure privacy energy efficiency by jointly optimizing the transmission bandwidth, CPU frequency, and transmit power of the terminal devices together with the CPU frequency of the UAV. To solve this optimization problem, an evolved deep deterministic policy gradient (DDPG) algorithm is proposed, which learns a secure aggregation and resource optimization scheme through intelligent interaction with the system while meeting basic delay and energy-consumption requirements. Finally, comparisons with baseline schemes verify the effectiveness of the proposed approach.
Keywords: UAV; federated edge learning; energy efficiency; resource optimization; deep deterministic policy gradient
Gate Assignment Based on Deep Reinforcement Learning
19
Authors: 向征, 吴秋玥, 储同, 岳伊杨. 科学技术与工程 (Science Technology and Engineering, PKU Core), 2025, No. 16, pp. 6977-6984
Airport gate assignment is studied systematically, with the objectives of minimizing the number of flights assigned to remote stands and the idle time of contact stands. Given the problem's multi-objective, multi-constraint nature, a multi-objective mathematical model is formulated with these two objectives, accounting for actual flight arrival and departure times, aircraft type categories, and the relationships among gates. Deep reinforcement learning, specifically the deep deterministic policy gradient (DDPG) algorithm, is applied to optimize the assignment process. To improve search capability and performance, an improved DDPG algorithm incorporating prioritized experience replay and a multi-strategy exploration mechanism is designed. Comparative experiments show that the improved algorithm performs better, significantly reducing the number of remote-stand assignments and optimizing contact-stand idle time, with faster convergence and stronger global search ability, confirming its effectiveness.
Keywords: gate assignment; deep learning; reinforcement learning; deep deterministic policy gradient (DDPG)
Energy Management Strategy for Hybrid Electric Vehicles Based on an Improved PPO Algorithm
20
Authors: 马超, 孙统, 曹磊, 杨坤, 胡文静. 河北科技大学学报 (Journal of Hebei University of Science and Technology, PKU Core), 2025, No. 3, pp. 237-247
To improve the fuel economy of a power-split hybrid electric vehicle (HEV), a longitudinal dynamics model of the full vehicle is established, and an energy management strategy (EMS) based on an improved proximal policy optimization (PPO) algorithm with policy-entropy optimization is proposed. Building on the standard PPO algorithm, an experience-pool mechanism simplifies the framework so that a single deep neural network is used for interactive training and updating, reducing the complexity of synchronizing policy-network parameters. To explore the environment effectively and learn more efficient policies, a policy-entropy term is added to the loss function, encouraging the agent to balance exploration and exploitation and avoiding premature convergence to local optima. Results show that, compared with an EMS based on dual-policy-network PPO, the single-policy-network improved PPO EMS better maintains the battery state of charge (SOC) under both the UDDS and NEDC driving cycles while reducing equivalent fuel consumption by 8.5% and 1.4%, respectively, achieving energy savings close to those of a dynamic programming (DP)-based EMS. The improved PPO algorithm effectively improves HEV fuel economy and can serve as a reference for HEV EMS design and development.
Keywords: vehicle engineering; hybrid electric vehicle; energy management strategy; deep reinforcement learning; proximal policy optimization
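The clipped PPO surrogate with the added entropy bonus described above can be written compactly; the clip range and entropy coefficient below are common defaults, assumed rather than taken from the paper:

```python
import numpy as np

def ppo_loss(ratio, adv, entropy, clip=0.2, ent_coef=0.01):
    # clipped surrogate objective: ratio is pi_new(a|s) / pi_old(a|s);
    # the entropy bonus discourages premature convergence to a local optimum
    unclipped = ratio * adv
    clipped = np.clip(ratio, 1.0 - clip, 1.0 + clip) * adv
    surrogate = np.mean(np.minimum(unclipped, clipped))
    return float(-surrogate - ent_coef * np.mean(entropy))
```

Minimizing this loss maximizes the clipped policy-improvement objective plus policy entropy, which is the exploration-exploitation balancing mechanism the abstract credits for the improved SOC maintenance.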