Journal Articles
1,156 articles found
Resource Allocation in V2X Networks: A Double Deep Q-Network Approach with Graph Neural Networks
1
Authors: Zhengda Huan, Jian Sun, Zeyu Chen, Ziyi Zhang, Xiao Sun, Zenghui Xiao. Computers, Materials & Continua, 2025, Issue 9, pp. 5427-5443 (17 pages)
With the advancement of Vehicle-to-Everything (V2X) technology, efficient resource allocation in dynamic vehicular networks has become a critical challenge for achieving optimal performance. Existing methods suffer from high computational complexity and decision latency under high-density traffic and heterogeneous network conditions. To address these challenges, this study presents an innovative framework that combines Graph Neural Networks (GNNs) with a Double Deep Q-Network (DDQN), utilizing dynamic graph structures and reinforcement learning. An adaptive neighbor sampling mechanism is introduced to dynamically select the most relevant neighbors based on interference levels and network topology, thereby improving decision accuracy and efficiency. Meanwhile, the framework models communication links as nodes and interference relationships as edges, effectively capturing the direct impact of interference on resource allocation while reducing computational complexity and preserving critical interaction information. Employing an aggregation mechanism based on the Graph Attention Network (GAT), it dynamically adjusts the neighbor sampling scope and performs attention-weighted aggregation based on node importance, ensuring more efficient and adaptive resource management. This design ensures reliable Vehicle-to-Vehicle (V2V) communication while maintaining high Vehicle-to-Infrastructure (V2I) throughput. The framework retains the global feature learning capabilities of GNNs and supports distributed network deployment, allowing vehicles to extract low-dimensional graph embeddings from local observations for real-time resource decisions. Experimental results demonstrate that the proposed method significantly reduces computational overhead, mitigates latency, and improves resource utilization efficiency in vehicular networks under complex traffic scenarios. This research not only provides a novel solution to resource allocation challenges in V2X networks but also advances the application of DDQN in intelligent transportation systems, offering substantial theoretical significance and practical value.
Keywords: resource allocation; V2X; double deep Q-network; graph neural network
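The DDQN at the core of the framework above decouples action selection from action evaluation to curb Q-value overestimation. A minimal numpy sketch of the DDQN target computation (toy Q-values, not the paper's implementation):

```python
import numpy as np

def ddqn_target(q_online_next, q_target_next, reward, gamma, done):
    """Double DQN target: the online net picks the next action,
    the target net evaluates it, reducing overestimation bias."""
    a_star = int(np.argmax(q_online_next))   # selection (online net)
    bootstrap = q_target_next[a_star]        # evaluation (target net)
    return reward + gamma * (1.0 - done) * bootstrap

# Toy Q-values for 3 candidate resource-allocation actions
q_online = np.array([1.0, 2.5, 0.3])   # online net, next state
q_target = np.array([0.8, 1.9, 0.4])   # target net, next state
y = ddqn_target(q_online, q_target, reward=1.0, gamma=0.9, done=0.0)
print(round(y, 3))  # online net picks action 1; target scores it: 1 + 0.9*1.9 = 2.71
```

The online network chooses the next action while the target network scores it, which is what distinguishes DDQN from plain DQN.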
Convolutional Neural Network-Based Deep Q-Network (CNN-DQN) Resource Management in Cloud Radio Access Network (cited 2 times)
2
Authors: Amjad Iqbal, Mau-Luen Tham, Yoong Choon Chang. China Communications, SCIE CSCD, 2022, Issue 10, pp. 129-142 (14 pages)
The recent surge of mobile subscribers and user data traffic has accelerated the telecommunication sector towards the adoption of fifth-generation (5G) mobile networks. Cloud radio access network (CRAN) is a prominent framework in the 5G mobile network to meet the above requirements by deploying low-cost and intelligent multiple distributed antennas known as remote radio heads (RRHs). However, achieving optimal resource allocation (RA) in CRAN using the traditional approach is still challenging due to the complex structure. In this paper, we introduce the convolutional neural network-based deep Q-network (CNN-DQN) to balance energy consumption and guarantee the user quality of service (QoS) demand in downlink CRAN. We first formulate the Markov decision process (MDP) for energy efficiency (EE) and build up a 3-layer CNN to capture the environment feature as an input state space. We then use DQN to turn the RRHs on/off dynamically based on the user QoS demand and energy consumption in the CRAN. Finally, we solve the RA problem based on the user constraint and transmit power to guarantee the user QoS demand and maximize the EE with a minimum number of active RRHs. In the end, we conduct simulations to compare our proposed scheme with the Nature DQN and the traditional approach.
Keywords: energy efficiency (EE); Markov decision process (MDP); convolutional neural network (CNN); cloud RAN; deep Q-network (DQN)
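Switching N RRHs on/off jointly implies a discrete action space of size 2^N; one common encoding is a bitmask over RRHs, sketched below (a hypothetical encoding for illustration, not the paper's code):

```python
def action_to_rrh_states(action_index, num_rrhs):
    """Decode a discrete DQN action index into per-RRH on/off flags."""
    return [(action_index >> i) & 1 for i in range(num_rrhs)]

def num_active(action_index, num_rrhs):
    """Count active RRHs for an action, e.g. to penalize energy use."""
    return sum(action_to_rrh_states(action_index, num_rrhs))

# With 4 RRHs, the DQN output layer would have 2**4 = 16 actions.
states = action_to_rrh_states(0b1010, 4)
print(states)                  # [0, 1, 0, 1] -> RRHs 1 and 3 active
print(num_active(0b1010, 4))   # 2
```

Minimizing the number of active RRHs, as in the abstract, then reduces to preferring actions with a small `num_active` among those meeting the QoS constraint.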
Manufacturing Resource Scheduling Based on Deep Q-Network (cited 1 time)
3
Authors: ZHANG Yufei, ZOU Yuanhao, ZHAO Xiaodong. Wuhan University Journal of Natural Sciences, CAS CSCD, 2022, Issue 6, pp. 531-538 (8 pages)
To optimize machine allocation and task dispatching in smart manufacturing factories, this paper proposes a manufacturing resource scheduling framework based on reinforcement learning (RL). The framework formulates the entire scheduling process as a multi-stage sequential decision problem, and further obtains the scheduling order by combining a deep convolutional neural network (CNN) with an improved deep Q-network (DQN). Specifically, with respect to the representation of the Markov decision process (MDP), the feature matrix is considered as the state space and a set of heuristic dispatching rules are denoted as the action space. In addition, the deep CNN is employed to approximate the state-action values, and the double dueling deep Q-network with prioritized experience replay and noisy network (D3QPN2) is adopted to determine the appropriate action according to the current state. In the experiments, compared with the traditional heuristic method, the proposed method is able to learn a high-quality scheduling policy and achieve shorter makespan on the standard public datasets.
Keywords: smart manufacturing; job shop scheduling; convolutional neural network; deep Q-network
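The dueling head inside D3QPN2 decomposes Q into a state value and per-action advantages; a small numpy sketch of the standard aggregation (toy numbers):

```python
import numpy as np

def dueling_q(value, advantages):
    """Q(s,a) = V(s) + A(s,a) - mean_a A(s,a).
    Subtracting the mean advantage keeps V and A identifiable."""
    adv = np.asarray(advantages, dtype=float)
    return value + adv - adv.mean()

# One state value, one advantage per dispatching rule (toy numbers).
q = dueling_q(value=2.0, advantages=[1.0, -1.0, 0.0])
print(q)  # [3. 1. 2.]
```

Separating V from A lets the network learn how good a state is independently of which dispatching rule is picked, which is the variance-reduction argument behind the dueling architecture.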
Multi-Agent Path Planning Method Based on Improved Deep Q-Network in Dynamic Environments (cited 1 time)
4
Authors: LI Shuyi, LI Minzhe, JING Zhongliang. Journal of Shanghai Jiaotong University (Science), EI, 2024, Issue 4, pp. 601-612 (12 pages)
The multi-agent path planning problem presents significant challenges in dynamic environments, primarily due to the ever-changing positions of obstacles and the complex interactions between agents' actions. These factors contribute to a tendency for the solution to converge slowly, and in some cases, diverge altogether. In addressing this issue, this paper introduces a novel approach utilizing a double dueling deep Q-network (D3QN), tailored for dynamic multi-agent environments. A novel reward function based on multi-agent positional constraints is designed, and a training strategy based on incremental learning is performed to achieve collaborative path planning of multiple agents. Moreover, a combined greedy and Boltzmann probability selection policy is introduced for action selection, avoiding convergence to local extrema. To match radar and image sensors, a convolutional neural network-long short-term memory (CNN-LSTM) architecture is constructed to extract features of the multi-source measurements as the input of the D3QN. The algorithm's efficacy and reliability are validated in a simulated environment using Robot Operating System and Gazebo. The simulation results show that the proposed algorithm provides a real-time solution for path planning tasks in dynamic scenarios. In terms of average success rate and accuracy, the proposed method is superior to other deep learning algorithms, and the convergence speed is also improved.
Keywords: multi-agent; path planning; deep reinforcement learning; deep Q-network
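The Boltzmann side of the greedy-plus-Boltzmann selection policy mentioned above samples actions from a softmax over Q-values so the agent can escape local extrema; a minimal sketch (temperature and Q-values are illustrative):

```python
import math, random

def boltzmann_action(q_values, temperature=1.0, rng=random.random):
    """Sample an action with probability proportional to exp(Q/T)."""
    m = max(q_values)  # subtract the max to stabilize the exponentials
    weights = [math.exp((q - m) / temperature) for q in q_values]
    total = sum(weights)
    r, acc = rng() * total, 0.0
    for a, w in enumerate(weights):
        acc += w
        if r <= acc:
            return a
    return len(q_values) - 1

random.seed(0)
counts = [0, 0, 0]
for _ in range(10000):
    counts[boltzmann_action([2.0, 1.0, 0.0], temperature=0.5)] += 1
print(counts[0] > counts[1] > counts[2])  # higher-Q actions sampled more often: True
```

Lowering the temperature makes the policy approach greedy selection; raising it approaches uniform exploration.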
Walking Stability Control Method for Biped Robot on Uneven Ground Based on Deep Q-Network
5
Authors: Baoling Han, Yuting Zhao, Qingsheng Luo. Journal of Beijing Institute of Technology, EI CAS, 2019, Issue 3, pp. 598-605 (8 pages)
A gait control method for a biped robot based on the deep Q-network (DQN) algorithm is proposed to enhance the stability of walking on uneven ground. This control strategy is an intelligent learning method of posture adjustment. A robot is taken as an agent and trained to walk steadily on an uneven surface with obstacles, using a simple reward function based on forward progress. The reward-punishment (RP) mechanism of the DQN algorithm is established after obtaining the offline gait, which was generated in advance by foot trajectory planning. Instead of implementing a complex dynamic model, the proposed method enables the biped robot to learn to adjust its posture on uneven ground and ensures walking stability. The performance and effectiveness of the proposed algorithm were validated in the V-REP simulation environment. The results demonstrate that the biped robot's lateral tilt angle is less than 3° after implementing the proposed method and that walking stability is clearly improved.
Keywords: deep Q-network (DQN); biped robot; uneven ground; walking stability; gait control
Multi-Agent Deep Q-Networks for Efficient Edge Federated Learning Communications in Software-Defined IoT
6
Authors: Prohim Tam, Sa Math, Ahyoung Lee, Seokhoon Kim. Computers, Materials & Continua, SCIE EI, 2022, Issue 5, pp. 3319-3335 (17 pages)
Federated learning (FL) activates distributed on-device computation techniques to model better algorithm performance through the interaction of local model updates and global model distributions in aggregation averaging processes. However, in large-scale heterogeneous Internet of Things (IoT) cellular networks, massive multi-dimensional model update iterations and resource-constrained computation are challenging aspects to be tackled significantly. This paper introduces a system model that converges software-defined networking (SDN) and network functions virtualization (NFV) to enable device/resource abstractions and provide NFV-enabled edge FL (eFL) aggregation servers for advancing automation and controllability. Multi-agent deep Q-networks (MADQNs) are targeted to enforce self-learning softwarization, optimize resource allocation policies, and advocate computation offloading decisions. With gathered network conditions and resource states, the proposed agent aims to explore various actions for estimating expected long-term rewards in a particular state observation. In the exploration phase, optimal actions for joint resource allocation and offloading decisions in different possible states are obtained by maximum Q-value selection. An action-based virtual network function (VNF) forwarding graph (VNFFG) is orchestrated to map VNFs onto an eFL aggregation server with sufficient communication and computation resources in the NFV infrastructure (NFVI). The proposed scheme indicates deficient allocation actions, modifies the VNF backup instances, and reallocates the virtual resources for the exploitation phase. A deep neural network (DNN) is used as a value function approximator, and an epsilon-greedy algorithm balances exploration and exploitation. The scheme primarily considers the criticalities of FL model services and congestion states to optimize the long-term policy. Simulation results show that the proposed scheme outperforms reference schemes in terms of Quality of Service (QoS) performance metrics, including packet drop ratio, packet drop counts, packet delivery ratio, delay, and throughput.
Keywords: deep Q-networks; federated learning; network functions virtualization; quality of service; software-defined networking
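The epsilon-greedy balance between exploration and exploitation described above is typically driven by a decaying epsilon; a hedged sketch with assumed decay constants (not the paper's values):

```python
import random

def epsilon_greedy(q_values, epsilon, rng=random):
    """Explore uniformly with probability epsilon, else exploit argmax Q."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

def decayed_epsilon(step, eps_start=1.0, eps_end=0.05, decay_steps=10000):
    """Linear decay from eps_start to eps_end over decay_steps, then flat."""
    frac = min(step / decay_steps, 1.0)
    return eps_end + (eps_start - eps_end) * (1.0 - frac)

print(round(decayed_epsilon(0), 3))      # 1.0  (all exploration at the start)
print(round(decayed_epsilon(5000), 3))   # 0.525
print(round(decayed_epsilon(20000), 3))  # 0.05 (mostly exploitation late)

random.seed(1)
a = epsilon_greedy([0.1, 0.9, 0.2], epsilon=0.0)
print(a)  # 1 (pure exploitation picks the argmax)
```

Early training is dominated by random offloading/allocation actions; as epsilon decays, the learned Q-values take over.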
Transformer-Aided Deep Double Dueling Spatial-Temporal Q-Network for Spatial Crowdsourcing Analysis
7
Authors: Yu Li, Mingxiao Li, Dongyang Ou, Junjie Guo, Fangyuan Pan. Computer Modeling in Engineering & Sciences, SCIE EI, 2024, Issue 4, pp. 893-909 (17 pages)
With the rapid development of mobile Internet, spatial crowdsourcing has become more and more popular. Spatial crowdsourcing consists of many different types of applications, such as spatial crowd-sensing services. In terms of spatial crowd-sensing, it collects and analyzes traffic sensing data from clients like vehicles and traffic lights to construct intelligent traffic prediction models. Besides collecting sensing data, spatial crowdsourcing also includes spatial delivery services like DiDi and Uber. Appropriate task assignment and worker selection dominate the service quality of spatial crowdsourcing applications. Previous research conducted task assignment via traditional matching approaches or simple network models. However, advanced mining methods are lacking to explore the relationships between workers, task publishers, and the spatio-temporal attributes of tasks. Therefore, in this paper, we propose a Deep Double Dueling Spatial-temporal Q-Network (D3SQN) to adaptively learn the spatial-temporal relationships between tasks, task publishers, and workers in a dynamic environment to achieve optimal allocation. Specifically, D3SQN is revised through reinforcement learning by adding a spatial-temporal transformer that can estimate the expected state values and action advantages so as to improve the accuracy of task assignments. Extensive experiments are conducted over real data collected from DiDi and ELM, and the simulation results verify the effectiveness of our proposed models.
Keywords: historical behavior analysis; spatial crowdsourcing; deep double dueling Q-networks
Reinforcement Learning with an Ensemble of Binary Action Deep Q-Networks
8
Authors: A. M. Hafiz, M. Hassaballah, Abdullah Alqahtani, Shtwai Alsubai, Mohamed Abdel Hameed. Computer Systems Science & Engineering, SCIE EI, 2023, Issue 9, pp. 2651-2666 (16 pages)
With the advent of Reinforcement Learning (RL) and its continuous progress, state-of-the-art RL systems have come up for many challenging and real-world tasks. Given the scope of this area, various techniques are found in the literature. One such notable technique, Multiple Deep Q-Network (DQN) based RL systems, uses multiple DQN-based entities which learn together and communicate with each other. The learning has to be distributed wisely among all entities in such a scheme, and the inter-entity communication protocol has to be carefully designed. As more complex DQNs come to the fore, the overall complexity of these multi-entity systems has increased manyfold, leading to issues like difficulty in training, need for high resources, more training time, and difficulty in fine-tuning leading to performance issues. Taking a cue from the parallel processing found in nature and its efficacy, we propose a lightweight ensemble-based approach for solving core RL tasks. It uses multiple binary-action DQNs having shared state and reward. The benefits of the proposed approach are overall simplicity, faster convergence, and better performance compared to conventional DQN-based approaches. The approach can potentially be extended to any type of DQN by forming its ensemble. Conducting extensive experimentation, promising results are obtained using the proposed ensemble approach on OpenAI Gym tasks and Atari 2600 games as compared to recent techniques. The proposed approach gives a state-of-the-art score of 500 on the CartPole-v1 task, 259.2 on the LunarLander-v2 task, and state-of-the-art results on four out of five Atari 2600 games.
Keywords: deep Q-networks; ensemble learning; reinforcement learning; OpenAI Gym environments
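One way to picture the ensemble above: each member is a binary-action DQN scoring "act" vs. "don't act" on the shared state, and the joint decision favors the member with the strongest preference to act. A numpy sketch under that assumed selection rule (the paper's exact combination protocol may differ):

```python
import numpy as np

def ensemble_action(binary_qs):
    """binary_qs[i] = (Q_no, Q_yes) from the i-th binary-action DQN.
    Pick the member whose 'yes' advantage Q_yes - Q_no is largest."""
    qs = np.asarray(binary_qs, dtype=float)
    advantages = qs[:, 1] - qs[:, 0]
    return int(np.argmax(advantages))

# Three binary DQNs sharing one state: member 2 most prefers acting.
action = ensemble_action([(0.5, 0.4), (0.2, 0.6), (0.1, 0.9)])
print(action)  # 2
```

Because every member sees the same state and reward, each network stays small (two outputs), which is the source of the claimed simplicity and faster convergence.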
UAV Autonomous Navigation for Wireless Powered Data Collection with Onboard Deep Q-Network
9
Authors: LI Yuting, DING Yi, GAO Jiangchuan, LIU Yusha, HU Jie, YANG Kun. ZTE Communications, 2023, Issue 2, pp. 80-87 (8 pages)
In a rechargeable wireless sensor network, utilizing an unmanned aerial vehicle (UAV) as a mobile base station (BS) to charge sensors and collect data effectively prolongs the network's lifetime. In this paper, we jointly optimize the UAV's flight trajectory and the sensor selection and operation modes to maximize the average data traffic of all sensors within a wireless sensor network (WSN) during a finite UAV flight time, while ensuring the energy required by each sensor through wireless power transfer (WPT). We consider a practical scenario where the UAV has no prior knowledge of sensor locations. The UAV performs autonomous navigation based on the status information obtained within the coverage area, which is modeled as a Markov decision process (MDP). The deep Q-network (DQN) is employed to execute the navigation based on the UAV position, the battery level state, channel conditions, and current data traffic of sensors within the UAV's coverage area. Our simulation results demonstrate that the DQN algorithm significantly improves the network performance in terms of average data traffic and trajectory design.
Keywords: unmanned aerial vehicle; wireless power transfer; deep Q-network; autonomous navigation
Intelligent and efficient fiber allocation strategy based on the dueling-double-deep Q-network
10
Authors: Yong ZHANG, Zhipeng YUAN, Jia DING, Feng GUO, Junyang JIN. Frontiers of Engineering Management, 2025, Issue 4, pp. 721-735 (15 pages)
Fiber allocation in optical cable production is critical for optimizing production efficiency, product quality, and inventory management. However, factors like fiber length and storage time complicate this process, making heuristic optimization algorithms inadequate. To tackle these challenges, this paper proposes a new framework: the dueling-double-deep Q-network with twin state-value and action-advantage functions (D3QNTF). First, dual action-advantage and state-value functions are used to prevent overestimation of action values. Second, a method for random initialization of feasible solutions improves sample quality early in the optimization. Finally, a strict penalty for errors is added to the reward mechanism, making the agent more sensitive to and better at avoiding illegal actions, which reduces decision errors. Experimental results show that the proposed method outperforms state-of-the-art algorithms, including greedy algorithms, genetic algorithms, deep Q-networks, double deep Q-networks, and standard dueling-double-deep Q-networks. The findings highlight the potential of the D3QNTF framework for fiber allocation in optical cable production.
Keywords: optical fiber allocation; deep reinforcement learning; dueling-double-deep Q-network; dual action-advantage and state-value functions; feasible solutions
Fast Task Allocation for Heterogeneous UAVs Based on an Improved Deep Q-Network
11
Authors: 王月海, 邱国帅, 邢娜, 赵欣怡, 王婕, 韩曦. 工程科学学报 (Chinese Journal of Engineering), 2026, Issue 1, pp. 142-151 (10 pages)
With the rapid development of UAV technology, multi-UAV systems have shown great potential in executing complex missions, and efficient task allocation strategies are crucial to the overall performance of such systems. However, traditional methods such as centralized optimization, auction algorithms, and pigeon-inspired optimization often struggle to generate effective allocation strategies under complex environmental disturbances. This paper therefore accounts for environmental uncertainties such as varying wind speed and rainfall, and focuses on applying an improved reinforcement learning algorithm to UAV task allocation, enabling multi-UAV systems to respond rapidly and use resources efficiently. First, the UAV task allocation problem is modeled as a Markov decision process, and a neural network is used for policy approximation so that the high-dimensional, complex state space in task allocation can be handled efficiently; a prioritized experience replay mechanism is also introduced, effectively reducing the online computation burden. Simulation results show that, compared with other reinforcement learning methods, the algorithm converges strongly, and its robustness is particularly notable in complex environments. Moreover, it completes a suitable UAV assignment for a given task in only 0.24 s and can rapidly generate task allocation schemes for large-scale UAV swarms.
Keywords: UAV swarm; task allocation; reinforcement learning; deep Q-network; Markov decision process
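The prioritized experience replay mechanism mentioned in the abstract samples transitions in proportion to their TD error; a minimal proportional-priority sketch (the alpha exponent and toy transitions are assumptions, not the paper's settings):

```python
import random

class PrioritizedReplay:
    """Proportional prioritized replay: P(i) ~ priority_i ** alpha."""
    def __init__(self, alpha=0.6):
        self.alpha = alpha
        self.buffer, self.priorities = [], []

    def add(self, transition, td_error):
        # Small epsilon keeps zero-error transitions sampleable.
        self.buffer.append(transition)
        self.priorities.append((abs(td_error) + 1e-6) ** self.alpha)

    def sample(self, rng=random.random):
        total = sum(self.priorities)
        r, acc = rng() * total, 0.0
        for i, p in enumerate(self.priorities):
            acc += p
            if r <= acc:
                return self.buffer[i]
        return self.buffer[-1]

random.seed(0)
buf = PrioritizedReplay()
buf.add("low-error transition", td_error=0.1)
buf.add("high-error transition", td_error=5.0)
hits = sum(buf.sample() == "high-error transition" for _ in range(1000))
print(hits > 700)  # high-TD-error transitions dominate sampling: True
```

Production implementations use a sum-tree for O(log N) sampling; the linear scan above is kept for clarity.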
End-to-End Autonomous Driving Through Dueling Double Deep Q-Network (cited 15 times)
12
Authors: Baiyu Peng, Qi Sun, Shengbo Eben Li, Dongsuk Kum, Yuming Yin, Junqing Wei, Tianyu Gu. Automotive Innovation, EI CSCD, 2021, Issue 3, pp. 328-337 (10 pages)
Recent years have seen the rapid development of autonomous driving systems, which are typically designed in a hierarchical architecture or an end-to-end architecture. The hierarchical architecture is always complicated and hard to design, while the end-to-end architecture is more promising due to its simple structure. This paper puts forward an end-to-end autonomous driving method through the deep reinforcement learning algorithm Dueling Double Deep Q-Network, making it possible for the vehicle to learn end-to-end driving by itself. The paper first proposes an architecture for the end-to-end lane-keeping task. Unlike the traditional image-only state space, the presented state space is composed of both camera images and vehicle motion information. Then the corresponding dueling neural network structure is introduced, which reduces the variance and improves sampling efficiency. Thirdly, the proposed method is applied to The Open Racing Car Simulator (TORCS) to demonstrate its performance, where it surpasses human drivers. Finally, the saliency map of the neural network is visualized, which indicates that the trained network drives by observing the lane lines. A video of the presented work is available online at https://youtu.be/76ciJmIHMD8 or https://v.youku.com/v_show/id_XNDM4ODc0MTM4NA==.html.
Keywords: end-to-end autonomous driving; reinforcement learning; deep Q-network; neural network
Deep Q-Network Based Dynamic Trajectory Design for UAV-Aided Emergency Communications (cited 3 times)
13
Authors: Liang Wang, Kezhi Wang, Cunhua Pan, Xiaomin Chen, Nauman Aslam. Journal of Communications and Information Networks, CSCD, 2020, Issue 4, pp. 393-402 (10 pages)
In this paper, an unmanned aerial vehicle (UAV)-aided wireless emergency communication system is studied, where a UAV is deployed to support ground user equipments (UEs) for emergency communications. We aim to maximize the number of UEs served, the fairness, and the overall uplink data rate by optimizing the trajectory of the UAV and the transmission power of the UEs. We propose a deep Q-network (DQN) based algorithm, which involves the well-known deep neural network (DNN) and Q-learning, to solve the UAV trajectory problem. Then, based on the optimized UAV trajectory, we further propose a successive convex approximation (SCA) based algorithm to tackle the power control problem for each UE. Numerical simulations demonstrate that the proposed DQN-based algorithm achieves considerable performance gains over existing benchmark algorithms in terms of fairness, the number of UEs served, and overall uplink data rate via UAV trajectory and power optimization.
Keywords: deep reinforcement learning; deep Q-network (DQN); successive convex approximation (SCA); UAV; power control
A traffic-aware Q-network enhanced routing protocol based on GPSR for unmanned aerial vehicle ad-hoc networks (cited 1 time)
14
Authors: Yi-ning CHEN, Ni-qi LV, Guang-hua SONG, Bo-wei YANG, Xiao-hong JIANG. Frontiers of Information Technology & Electronic Engineering, SCIE EI CSCD, 2020, Issue 9, pp. 1308-1320 (13 pages)
In dense-traffic unmanned aerial vehicle (UAV) ad-hoc networks, traffic congestion can cause increased delay and packet loss, which limit the performance of the networks; therefore, a traffic balancing strategy is required to control the traffic. In this study, we propose TQNGPSR, a traffic-aware Q-network enhanced geographic routing protocol based on greedy perimeter stateless routing (GPSR), for UAV ad-hoc networks. The protocol enforces a traffic balancing strategy using the congestion information of neighbors, and evaluates the quality of a wireless link by the Q-network algorithm, which is a reinforcement learning algorithm. Based on the evaluation of each wireless link, the protocol makes routing decisions among multiple available choices to reduce delay and decrease packet loss. We simulate the performance of TQNGPSR and compare it with AODV, OLSR, GPSR, and QNGPSR. Simulation results show that TQNGPSR obtains higher packet delivery ratios and lower end-to-end delays than GPSR and QNGPSR. In high node density scenarios, it also outperforms AODV and OLSR in terms of packet delivery ratio, end-to-end delay, and throughput.
Keywords: traffic balancing; reinforcement learning; geographic routing; Q-network
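The Q-network link evaluation above is a function-approximated version of the classic Q-learning update; the tabular form of that update is easy to sketch (state/action labels and constants are illustrative, not the protocol's actual parameters):

```python
def q_update(q, state, action, reward, next_q_values, alpha=0.1, gamma=0.9):
    """One Q-learning step: Q(s,a) += alpha * (r + gamma*max_a' Q(s',a') - Q(s,a))."""
    old = q.get((state, action), 0.0)
    td_target = reward + gamma * max(next_q_values)
    q[(state, action)] = old + alpha * (td_target - old)
    return q[(state, action)]

# Evaluate a candidate next-hop link: reward for low congestion/delay,
# bootstrapping from the next node's best outgoing-link values.
q = {}
v = q_update(q, state="node_A", action="link_to_B", reward=1.0,
             next_q_values=[0.0, 0.5])
print(round(v, 3))  # 0.1 * (1.0 + 0.9*0.5 - 0) = 0.145
```

Repeated updates of this form are what let the protocol rank multiple available next hops by learned link quality.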
Intelligent Voltage Control Method in Active Distribution Networks Based on Averaged Weighted Double Deep Q-network Algorithm (cited 1 time)
15
Authors: Yangyang Wang, Meiqin Mao, Liuchen Chang, Nikos D. Hatziargyriou. Journal of Modern Power Systems and Clean Energy, SCIE EI CSCD, 2023, Issue 1, pp. 132-143 (12 pages)
High penetration of distributed renewable energy sources and electric vehicles (EVs) makes the future active distribution network (ADN) highly variable. These characteristics pose great challenges to traditional voltage control methods. Voltage control based on the deep Q-network (DQN) algorithm offers a potential solution to this problem because it possesses human-level control performance. However, traditional DQN methods may overestimate action reward values, resulting in degradation of the obtained solutions. In this paper, an intelligent voltage control method based on the averaged weighted double deep Q-network (AWDDQN) algorithm is proposed to overcome the overestimation of action reward values in the DQN algorithm and the underestimation of action reward values in the double deep Q-network (DDQN) algorithm. Using the proposed method, the voltage control objective is incorporated into the designed action reward values and normalized to form a Markov decision process (MDP) model, which is solved by the AWDDQN algorithm. The designed AWDDQN-based intelligent voltage control agent is trained offline and used as an online intelligent dynamic voltage regulator for the ADN. The proposed voltage control method is validated using the IEEE 33-bus and 123-bus systems containing renewable energy sources and EVs, and compared with DQN- and DDQN-based methods and traditional mixed-integer nonlinear programming based methods. The simulation results show that the proposed method has better convergence and less voltage volatility than the others.
Keywords: averaged weighted double deep Q-network (AWDDQN); deep Q-learning; active distribution network (ADN); voltage control; electric vehicle (EV)
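The AWDDQN idea of steering between DQN's overestimation and DDQN's underestimation can be pictured as blending the two bootstrap targets; a hedged numpy sketch using a fixed blending weight rather than the paper's averaging scheme:

```python
import numpy as np

def blended_target(q_online_next, q_target_next, reward, gamma, weight=0.5):
    """Blend DQN-style and double-DQN-style bootstrap values.
    weight=1 recovers DQN (tends to overestimate);
    weight=0 recovers DDQN (tends to underestimate)."""
    dqn_boot = np.max(q_target_next)                          # max over target net
    ddqn_boot = q_target_next[int(np.argmax(q_online_next))]  # online selects, target evaluates
    return reward + gamma * (weight * dqn_boot + (1 - weight) * ddqn_boot)

q_online = np.array([0.5, 1.2, 0.1])
q_target = np.array([1.4, 0.9, 0.2])
y = blended_target(q_online, q_target, reward=0.0, gamma=1.0, weight=0.5)
print(round(y, 2))  # 0.5*1.4 + 0.5*0.9 = 1.15
```

With the nets disagreeing on the best action, the blended target sits between the two biased estimates, which is the intuition behind the averaged weighted variant.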
Prediction-Free Day-Ahead Topology Optimization of Wind Farms Based on an Improved Deep Q-Network (cited 2 times)
16
Authors: 黄晟, 潘丽君, 屈尹鹏, 周歧林, 徐箭, 柯德平. 电力系统自动化 (Automation of Electric Power Systems), 2025, Issue 2, pp. 122-132 (11 pages)
Affected by factors such as wind speed variation, wind farm output fluctuates widely, causing voltage fluctuations and increased network losses that compromise the safe and efficient operation of the wind farm. Most existing day-ahead wind farm dispatch schemes are based on traditional mathematical optimization models and require day-ahead output forecasts for the turbines, so the unavoidable forecast errors reduce the effectiveness of day-ahead optimal dispatch and make intra-day turbine control harder. This paper therefore exploits the decision-making capability of reinforcement learning and proposes a prediction-free wind farm topology reconfiguration decision scheme based on an improved deep Q-network (DQN), built on the DQN framework. First, a state space is constructed from historical data. Then, a spanning-tree-based action space optimization method that decouples action-value pairs is proposed, and an evaluation system targeting minimum voltage deviation and network losses is established, mapping historical actual output data to decisions and achieving day-ahead optimal dispatch of the wind farm without introducing forecast errors. Finally, an experience replay strategy guided by multi-level experience is designed to improve training performance and ensure the algorithm's applicability. Simulations on actual wind power operation data, comparing the effect of the improvements on the DQN algorithm and the wind farm's operating state before and after optimized dispatch, verify the novelty and effectiveness of the proposed method.
Keywords: wind farm; prediction; deep Q-network; topology reconfiguration; voltage control; optimization; reinforcement learning
Path Planning Algorithm for Mobile Robots Based on Angle Searching and Deep Q-Network (cited 3 times)
17
Authors: 李宗刚, 韩森, 陈引娟, 宁小刚. 兵工学报 (Acta Armamentarii), 2025, Issue 2, pp. 30-44 (15 pages)
To overcome the long training time and slow convergence of the deep Q-network (DQN) algorithm in path planning, an algorithm combining angle searching (AS) with DQN, named AS-DQN, is proposed. By planning a search domain, it controls the mobile robot's search direction, reduces the traversal of grid nodes, and improves path planning efficiency. To strengthen cooperation among mobile robots, an Internet-of-Things information fusion technology (IIFT) model is proposed, which integrates multiple scattered pieces of local environment information into global information to guide the robots' path planning. Simulation results show that, compared with the standard DQN algorithm, AS-DQN shortens the time a mobile robot needs to find the optimal path to the goal, and combining the IIFT model with AS-DQN improves path planning efficiency even more markedly. Physical experiments show that AS-DQN can be deployed on a Turtlebot3 unmanned vehicle and successfully finds the optimal path from the start to the goal.
Keywords: mobile robot; path planning; deep Q-network; angle searching strategy; IoT information fusion technology
Secure Location Routing for the Internet of Vehicles Based on Double Deep Q-Network (cited 2 times)
18
Authors: 米洪, 郑莹. 无线电通信技术 (Radio Communications Technology), 2025, Issue 1, pp. 96-105 (10 pages)
As a supporting technology of intelligent transportation systems, the Internet of Vehicles (IoV) has attracted wide attention. Owing to the dynamically changing IoV topology and gray-hole attacks, building stable, secure location routing is challenging. To this end, Double DQN-based Secure Location Routing (DSLR) is proposed. DSLR improves the message delivery ratio (MDR) by defending against gray-hole attacks and reduces message transmission delay. An optimization problem constrained by packet loss rate and link connectivity time is constructed and solved with a double deep Q-network algorithm. To improve the convergence of DSLR, a reward function built from connectivity time, packet loss rate, and transmission delay guides the agent to select forwarding nodes that meet the requirements. A dynamic exploration factor mechanism balances exploration and exploitation, further accelerating convergence. Simulation results show that, compared with similar algorithms, DSLR increases the MDR and reduces transmission delay.
Keywords: Internet of Vehicles; location routing; gray-hole attack; double deep Q-network; dynamic exploration factor
Robust Optimal Dispatch of Virtual Power Plants Considering Uncertainties in the Coupled Power-Communication-Transportation Network (cited 1 time)
19
Authors: 潘超, 李梓铭, 龚榆淋, 叶宇鸿, 孙中伟, 周振宇. 电工技术学报 (Transactions of China Electrotechnical Society), 2025, Issue 15, pp. 4755-4769 (15 pages)
In coupled power-communication-transportation networks, a virtual power plant (VPP) aggregates and controls distributed resources at scale through advanced control, communication, and information acquisition technologies and responds actively to grid demand, improving the stability of grid operation. However, existing VPP optimal dispatch methods ignore the impact of uncertainties in the coupled power-communication-transportation network on VPP demand-response dispatch, leading to high dispatch costs and poor robustness. To address this, a coupled power-communication-transportation network model is first constructed, and an optimization problem is formulated to minimize the weighted sum of network losses, node voltage deviation, and VPP economic cost. Second, the uncertainties arising from the three networks are analyzed, and a robust VPP dispatch problem accounting for them is constructed. Then, a solution algorithm for robust VPP dispatch based on a federated adversarial deep Q-network (DQN) is proposed, in which two agents iterate against each other to obtain the robust optimal strategy. Finally, simulations verify that the proposed algorithm effectively reduces the impact of uncertainties on VPP optimal dispatch and improves the reliability and stability of grid operation.
Keywords: power-communication-transportation; virtual power plant; uncertainty; robust optimal dispatch; federated adversarial deep Q-network (DQN)
Active Disturbance Rejection Control Strategy Based on the DQN Algorithm for Load Interface Converters in DC Microgrids (cited 2 times)
20
Authors: 周雪松, 韩静, 马幼捷, 陶珑, 问虎龙, 赵明. 电力系统保护与控制 (Power System Protection and Control), 2025, Issue 1, pp. 95-103 (9 pages)
In a DC microgrid, to keep the energy flow between the DC bus and the loads stable and to handle disturbances caused by uncertain factors in that flow, an active disturbance rejection control (ADRC) strategy for the DC-DC converter based on deep reinforcement learning is designed on top of the converter's mathematical model. The ADRC structure is simplified by exploiting the linear extended state observer's estimation and compensation of the total disturbance together with the linear error-feedback control law, and deep reinforcement learning is combined with it to optimize the controller parameters online. Using the load-side voltage waveforms under different operating conditions, the stability, disturbance rejection, and robustness of the DC-DC converter under the proposed strategy, linear ADRC, and proportional-integral control are analyzed, verifying the correctness and effectiveness of the proposed strategy. Finally, Monte Carlo experiments under parameter perturbation are conducted, and the simulation results show that the proposed strategy has good robustness.
Keywords: DC microgrid; deep reinforcement learning; DQN algorithm; DC-DC converter; linear active disturbance rejection control
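The simplified linear ADRC structure described above pairs a linear extended state observer (LESO), which estimates the total disturbance, with linear error feedback that cancels it; a hedged Euler-discretized sketch on a toy first-order plant (gains, bandwidths, and the plant itself are illustrative, not the paper's converter model):

```python
def simulate_ladrc(steps=5000, dt=1e-3, r=1.0, wo=50.0, wc=10.0, b0=1.0):
    """Plant: y' = -y + u + d, with an unknown constant disturbance d.
    LESO: z1 tracks y; z2 tracks the total disturbance f = -y + d.
    Control: u = (wc*(r - z1) - z2) / b0 cancels the estimated disturbance."""
    y, z1, z2, d = 0.0, 0.0, 0.0, 0.5
    for _ in range(steps):
        u = (wc * (r - z1) - z2) / b0          # linear error feedback + disturbance rejection
        e = z1 - y                              # observer output error
        z1 += dt * (z2 - 2.0 * wo * e + b0 * u) # observer gains placed at -wo (double pole)
        z2 += dt * (-wo ** 2 * e)
        y += dt * (-y + u + d)                  # true plant step (Euler)
    return y, z2

y, f_hat = simulate_ladrc()
print(abs(y - 1.0) < 0.02)              # output settles at the reference: True
print(abs(f_hat - (-y + 0.5)) < 0.05)   # LESO estimate matches the total disturbance: True
```

The observer bandwidth `wo` and controller bandwidth `wc` are the two tuning knobs that the abstract's DQN agent would adjust online; here they are fixed constants for the sketch.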