To address the path-planning problem of multiple UAVs pursuing a dynamic target in an unknown 3D obstacle environment, this work combines the artificial potential field (APF) method with deep reinforcement learning and proposes a multi-UAV path-planning algorithm based on an improved dueling deep Q network (Dueling-DQN) for the cooperative capture of a dynamic target. First, the idea of the APF method is incorporated into the training reward function for cooperative capture, which not only resolves the poor performance and local-optimum traps of the traditional APF method in complex environments, but also addresses multi-UAV cooperation and obstacle avoidance in complex environments. In addition, a pursuit-evasion strategy between the UAVs and the dynamic target is designed so that the UAVs can cooperate more effectively in capturing it. Simulation results show that, compared with the baseline Dueling-DQN algorithm, the proposed APF-Dueling-DQN algorithm effectively reduces the probability of collisions during the trajectory-planning task and shortens the planned path length required to capture the dynamic target.
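The abstract does not give the exact reward formulation, but a common way to fold APF into a reward function is to reward the *decrease* in potential at each step: descending the attractive field toward the target is rewarded, and climbing a repulsive field near an obstacle is penalized. A minimal sketch, with all gains (`k_att`, `k_rep`, `d0`) chosen as hypothetical values rather than taken from the paper:

```python
import numpy as np

def apf_potential(pos, target, obstacles, k_att=1.0, k_rep=100.0, d0=2.0):
    """Total artificial-potential value at a UAV position (illustrative gains)."""
    # Attractive potential grows quadratically with distance to the target.
    u_att = 0.5 * k_att * np.linalg.norm(pos - target) ** 2
    # Repulsive potential is active only within influence radius d0 of an obstacle.
    u_rep = 0.0
    for obs in obstacles:
        d = np.linalg.norm(pos - obs)
        if d < d0:
            u_rep += 0.5 * k_rep * (1.0 / d - 1.0 / d0) ** 2
    return u_att + u_rep

def shaped_reward(prev_pos, new_pos, target, obstacles):
    """Reward the potential decrease, so moving down the field is encouraged."""
    return (apf_potential(prev_pos, target, obstacles)
            - apf_potential(new_pos, target, obstacles))
```

Because the reward is a potential difference rather than the raw potential, the agent is not trapped where the field has a local minimum: a step that temporarily increases distance to the target simply earns a small negative reward, which the learned Q-function can trade off against future gains.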
To address the low level of intelligence in current radar interference-suppression decision-making, a radar interference-suppression method is proposed based on a double dueling deep Q network (D3QN) improved with dual-depth prioritized experience replay and a variable greedy algorithm. First, features are extracted from the mixed signal of the radar target echo and the interference. Then, according to the signal features, the variable greedy algorithm selects an action to apply to the interference; the signal features before and after the action are stored in the dual-depth prioritized experience replay pool, and the optimal interference-suppression strategy is learned through training. Finally, the interference is suppressed with this strategy and the result is output. Experimental results show that the method effectively improves the pulse-compression result and significantly raises the signal-to-interference-plus-noise ratio; compared with the traditional D3QN-based interference-suppression method, strategy accuracy and convergence speed improve by 7.3% and 8.7%, respectively.
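The "variable greedy algorithm" above is a schedule for the exploration rate rather than a fixed ε. The abstract does not specify the schedule, so the sketch below assumes a simple linear anneal from `eps_start` to `eps_end` over `decay_steps` steps (all three values hypothetical):

```python
import random

def variable_epsilon(step, eps_start=1.0, eps_end=0.05, decay_steps=10_000):
    """Linearly anneal the exploration rate from eps_start down to eps_end."""
    frac = min(step / decay_steps, 1.0)
    return eps_start + frac * (eps_end - eps_start)

def select_action(q_values, step, rng=random):
    """Epsilon-greedy choice: explore with probability eps, else exploit argmax Q."""
    eps = variable_epsilon(step)
    if rng.random() < eps:
        return rng.randrange(len(q_values))      # random exploratory action
    return max(range(len(q_values)), key=lambda a: q_values[a])
```

Early in training almost every action is exploratory, which samples diverse interference/action pairs for the replay pool; late in training the policy is nearly greedy with respect to the learned Q-values.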
Optimizing metro timetables can effectively reduce traction energy consumption. To mitigate the impact of passenger-flow fluctuations and train delays on the achieved energy-saving rate, an energy-saving timetable optimization method is proposed that combines a real-time power-flow analysis model of the train traction and power-supply systems with the Dueling Deep Q Network (Dueling DQN) deep reinforcement learning algorithm. A timetable iterative optimization model based on probabilistic statistics of dynamic inter-station passenger flow is established to reduce the influence of passenger-flow variation on the energy-saving rate. Adaptive moment estimation and root-mean-square propagation are applied to the prediction Q network and the target Q network, respectively, to speed up model convergence; with the total running time kept unchanged before and after optimization and passenger transfer and waiting times minimized as optimization objectives, the energy-saving timetable can be switched in without passengers noticing. The method is validated on Suzhou Rail Transit Line 4. Comparative energy-saving experiments show that, with arrival-time deviations at transfer stations of no more than 2 s and the full round-trip running time unchanged, the traction energy-saving rate reaches 5.27% and energy consumption per car-kilometre falls by 4.99%.
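This abstract and the two before it all build on the dueling architecture, whose defining step is how the two heads are recombined: the network estimates a scalar state value V(s) and per-action advantages A(s,a), and the Q-values are assembled as Q(s,a) = V(s) + A(s,a) − mean(A). A minimal sketch of just that aggregation step, independent of any particular network backbone:

```python
import numpy as np

def dueling_q(value, advantages):
    """Combine state value V(s) and advantages A(s,a) into Q(s,a).

    Subtracting the mean advantage makes the V/A decomposition identifiable:
    adding a constant to all advantages and subtracting it from V would
    otherwise yield the same Q-values.
    """
    advantages = np.asarray(advantages, dtype=float)
    return value + advantages - advantages.mean()
```

In a full Dueling DQN the `value` and `advantages` inputs come from two separate fully connected heads on a shared feature extractor; this function is only the final layer that merges them.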
At present, energy consumption is one of the main bottlenecks in autonomous mobile robot development. To address the challenge of high energy consumption in path planning for autonomous mobile robots navigating unknown and complex environments, this paper proposes an Attention-Enhanced Dueling Deep Q-Network (AD-Dueling DQN), which integrates a multi-head attention mechanism and a prioritized experience replay strategy into a Dueling-DQN reinforcement learning framework. A multi-objective reward function, centered on energy efficiency, is designed to comprehensively consider path length, terrain slope, motion smoothness, and obstacle avoidance, enabling optimal low-energy trajectory generation in 3D space from the source. The incorporation of a multi-head attention mechanism allows the model to dynamically focus on energy-critical state features, such as slope gradients and obstacle density, thereby significantly improving its ability to recognize and avoid energy-intensive paths. Additionally, the prioritized experience replay mechanism accelerates learning from key decision-making experiences, suppressing inefficient exploration and guiding the policy toward low-energy solutions more rapidly. The effectiveness of the proposed path planning algorithm is validated through simulation experiments conducted in multiple off-road scenarios. Results demonstrate that AD-Dueling DQN consistently achieves the lowest average energy consumption across all tested environments. Moreover, the proposed method exhibits faster convergence and greater training stability compared to baseline algorithms, highlighting its global optimization capability under energy-aware objectives in complex terrains. This study offers an efficient and scalable intelligent control strategy for the development of energy-conscious autonomous navigation systems.
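Prioritized experience replay, used in this abstract and the radar one above, replays transitions with probability proportional to a power of their TD error instead of uniformly. The sketch below is a minimal proportional variant with a plain list instead of the usual sum-tree (so sampling is O(n), fine for illustration but not for production); `alpha` and the priority floor `1e-6` are standard but hypothetical choices here:

```python
import numpy as np

class PrioritizedReplay:
    """Minimal proportional prioritized replay buffer (sketch, no sum-tree)."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.buffer, self.priorities = [], []

    def add(self, transition, td_error=1.0):
        # Priority comes from |TD error|, so surprising transitions recur sooner;
        # the small floor keeps every transition sampleable.
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(0)
            self.priorities.pop(0)
        self.buffer.append(transition)
        self.priorities.append((abs(td_error) + 1e-6) ** self.alpha)

    def sample(self, batch_size, rng=None):
        if rng is None:
            rng = np.random.default_rng()
        p = np.asarray(self.priorities)
        p = p / p.sum()
        idx = rng.choice(len(self.buffer), size=batch_size, p=p)
        return [self.buffer[i] for i in idx], idx
```

After each training step the sampled transitions' priorities would be refreshed with their new TD errors; a full implementation also applies importance-sampling weights to correct the bias that non-uniform sampling introduces.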
With the rapid development of the mobile Internet, spatial crowdsourcing has become more and more popular. Spatial crowdsourcing consists of many different types of applications, such as spatial crowd-sensing services. In terms of spatial crowd-sensing, it collects and analyzes traffic sensing data from clients like vehicles and traffic lights to construct intelligent traffic prediction models. Besides collecting sensing data, spatial crowdsourcing also includes spatial delivery services like DiDi and Uber. Appropriate task assignment and worker selection dominate the service quality of spatial crowdsourcing applications. Previous research conducted task assignment via traditional matching approaches or simple network models. However, advanced mining methods are lacking to explore the relationships between workers, task publishers, and the spatio-temporal attributes of tasks. Therefore, in this paper, we propose a Deep Double Dueling Spatial-temporal Q Network (D3SQN) to adaptively learn the spatial-temporal relationship between tasks, task publishers, and workers in a dynamic environment to achieve optimal allocation. Specifically, D3SQN is revised through reinforcement learning by adding a spatial-temporal transformer that can estimate the expected state values and action advantages so as to improve the accuracy of task assignments. Extensive experiments are conducted over real data collected from DiDi and ELM, and the simulation results verify the effectiveness of our proposed models.
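The "Double" in both D3SQN and the earlier D3QN refers to the double-DQN target: the online network chooses the next action, while the separate target network evaluates it, which curbs the overestimation bias of vanilla Q-learning. A minimal sketch of that target computation for a single transition, with `gamma` as an illustrative discount factor:

```python
import numpy as np

def double_dqn_target(reward, done, q_online_next, q_target_next, gamma=0.99):
    """Double-DQN bootstrap target for one transition.

    The online network's Q-values over the next state pick the action;
    the target network's Q-value for that same action evaluates it.
    """
    a_star = int(np.argmax(q_online_next))          # selection: online net
    bootstrap = 0.0 if done else gamma * q_target_next[a_star]  # evaluation: target net
    return reward + bootstrap
```

Note that if the same network both selected and evaluated the action, any upward noise in a Q-estimate would be chosen *and* propagated; decoupling the two roles is what makes the estimate less biased.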
Funding: supported in part by the Pioneer and Leading Goose R&D Program of Zhejiang Province under Grants 2022C01083 and 2023C01217 (Dr. Yu Li, https://zjnsf.kjt.zj.gov.cn/).