By integrating deep neural networks with reinforcement learning, the Double Deep Q Network (DDQN) algorithm overcomes the limitations of Q-learning in handling continuous spaces and is widely applied in the path planning of mobile robots. However, the traditional DDQN algorithm suffers from sparse rewards and inefficient utilization of high-quality data. Targeting these problems, an improved DDQN algorithm based on average Q-value estimation and reward redistribution is proposed. First, to enhance the precision of the target Q-value, the average of multiple previously learned Q-values from the target Q network replaces the single Q-value from the current target Q network. Next, a reward redistribution mechanism is designed to overcome the sparse-reward problem by adjusting the final reward of each action using the round reward from trajectory information. Additionally, a reward-prioritized experience selection method is introduced, which ranks experience samples according to reward values so that high-quality data are used frequently. Finally, simulation experiments verify the effectiveness of the proposed algorithm in a fixed-position scenario and in random environments. The experimental results show that, compared to the traditional DDQN algorithm, the proposed algorithm achieves shorter average running time, higher average return, and fewer average steps. Its performance improves by 11.43% in the fixed scenario and 8.33% in random environments. It not only plans economical and safe paths but also significantly improves efficiency and generalization in path planning, making it suitable for widespread application in autonomous navigation and industrial automation.
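As a rough illustration of the averaged target estimate described above, the sketch below (PyTorch assumed) keeps the last K frozen copies of the Q network and forms the DDQN target from the mean of their estimates instead of a single target-network output; the network sizes, K, and the discount factor are illustrative assumptions rather than the paper's settings.

```python
import collections
import copy

import torch
import torch.nn as nn

GAMMA, K = 0.99, 5  # discount factor and snapshot-window size (illustrative assumptions)

# Online network; the 4-dimensional state and 2 actions are placeholder sizes.
q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
# Window of the last K frozen copies of the network, playing the role of the target network.
target_snapshots = collections.deque([copy.deepcopy(q_net)], maxlen=K)


def sync_target():
    """Periodically push a frozen copy of the online net into the snapshot window."""
    target_snapshots.append(copy.deepcopy(q_net))


def averaged_ddqn_target(r, s_next, done):
    """DDQN target in which the single target-network estimate is replaced by the
    mean over the last K stored target networks."""
    with torch.no_grad():
        a_next = q_net(s_next).argmax(dim=1, keepdim=True)  # online net selects the action
        q_next = torch.stack(
            [t(s_next).gather(1, a_next).squeeze(1) for t in target_snapshots]  # each snapshot evaluates it
        ).mean(dim=0)                                        # average over the window
        return r + GAMMA * (1.0 - done) * q_next
```

Averaging over several frozen snapshots smooths out fluctuations in individual target estimates, which is the intuition behind replacing the single target Q-value with an average.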
During automatic berthing, a ship is affected by wind, waves, currents, and the bank-wall effect, so an accurate path planning method is needed to prevent berthing failure. For the berthing process of a fully actuated ship, an automatic berthing path planning method based on the Double Deep Q Network (DDQN) algorithm is designed. First, a three-degree-of-freedom ship model is established. The reward function is then improved by treating distance, heading, thrust, time, and collision as rewards or penalties. DDQN is subsequently introduced to learn the action-reward model, and the learned results are used to steer the ship. By pursuing higher reward values, the ship can find the optimal berthing path on its own. Experimental results show that, under different current speeds, the ship can complete berthing while reducing both time and thrust. At the same current speed, compared with Q-learning, SARSA (state action reward state action), and the Deep Q Network (DQN), the DDQN algorithm reduces the berthing thrust by 241.940 N, 234.614 N, and 80.202 N, respectively, with a berthing time of only 252.485 s.
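A minimal sketch of a berthing reward of the kind described above, combining distance, heading, thrust, time, and collision terms; the weights, penalty values, and function signature are illustrative assumptions, not the paper's reward function.

```python
def berthing_reward(dist_to_berth, heading_err, thrust, collided,
                    w_d=1.0, w_h=0.5, w_t=0.001, w_time=0.1):
    """Per-step reward combining distance, heading, thrust, time, and collision terms
    (weights are illustrative)."""
    if collided:
        return -100.0                  # large penalty; the episode terminates on collision
    reward = -w_d * dist_to_berth      # closer to the berth is better
    reward -= w_h * abs(heading_err)   # penalize heading deviation from the berthing heading
    reward -= w_t * abs(thrust)        # penalize thrust (energy) use
    reward -= w_time                   # constant per-step cost to encourage short berthing time
    return reward
```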
Low Earth Orbit (LEO) mega-constellation networks, exemplified by Starlink, are poised to play a pivotal role in future mobile communication networks due to their low latency and high capacity. With massively deployed satellites, ground users can now be covered by multiple visible satellites, but they also face complex handover issues caused by such large numbers of high-mobility satellites across multiple layers. End-to-end routing is likewise affected by handover behavior. In this paper, we propose an intelligent handover strategy dedicated to multi-layer LEO mega-constellation networks. First, an analytic model is used to rapidly estimate the end-to-end propagation latency as a key handover factor and to construct a multi-objective optimization model. Subsequently, an intelligent handover strategy based on the Dueling Double Deep Q Network (D3QN) deep reinforcement learning algorithm is proposed for single-layer constellations. Moreover, an optimal cross-layer handover scheme is proposed by predicting latency jitter and minimizing the cross-layer overhead. Simulation results demonstrate the superior performance of the proposed method in the multi-layer LEO mega-constellation, showing reductions of up to 8.2% and 59.5% in end-to-end latency and jitter, respectively, compared with existing handover strategies.
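The dueling architecture at the core of D3QN can be sketched as below (PyTorch assumed); the state is imagined to encode per-candidate handover factors such as the estimated end-to-end propagation latency, and the feature names and layer sizes are assumptions for illustration.

```python
import torch
import torch.nn as nn


class DuelingQNet(nn.Module):
    """Dueling Q-network: a shared feature layer feeds a state-value head and an
    advantage head, one advantage per candidate satellite (sizes are illustrative)."""

    def __init__(self, state_dim=8, n_candidates=4):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU())
        self.value = nn.Linear(64, 1)                  # state value V(s)
        self.advantage = nn.Linear(64, n_candidates)   # advantage A(s, a) per candidate satellite

    def forward(self, s):
        h = self.feature(s)
        v, a = self.value(h), self.advantage(h)
        # Dueling aggregation: Q = V + (A - mean(A)) keeps the decomposition identifiable.
        return v + a - a.mean(dim=1, keepdim=True)


# Choosing the handover target as the greedy action over candidate satellites.
q = DuelingQNet()
state = torch.randn(1, 8)                  # placeholder handover-factor vector
handover_choice = q(state).argmax(dim=1)   # index of the selected candidate satellite
```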
Edge computing nodes undertake an increasing number of tasks as business density rises. Therefore, how to efficiently allocate large-scale, dynamic workloads to edge computing resources has become a critical challenge. This study proposes an edge task scheduling approach based on an improved Double Deep Q Network (DQN), which separates the calculation of target Q values and the selection of actions into two networks. A new reward function is designed, and a control unit is added to the agent's experience replay unit. The management of experience data is also modified to fully utilize its value and improve learning efficiency. Reinforcement learning agents usually learn from an ignorant state, which is inefficient. Therefore, this study proposes a novel particle swarm optimization algorithm with an improved fitness function, which can generate optimal solutions for task scheduling. These optimized solutions are provided to the agent to pre-train the network parameters and obtain a better cognition level. The proposed algorithm is compared with six other methods in simulation experiments. Results show that the proposed algorithm outperforms the benchmark methods in terms of makespan.
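The PSO-seeded pre-training idea can be sketched as follows: a small particle swarm searches task-to-node assignments that minimize makespan, and the resulting schedules could then be replayed to the agent as initial experience. The problem sizes, fitness definition, and swarm update rule are illustrative assumptions, not the paper's improved fitness function.

```python
import random

N_TASKS, N_NODES = 20, 4
task_len = [random.uniform(1, 10) for _ in range(N_TASKS)]   # task workloads (assumed units)
node_speed = [1.0, 1.5, 2.0, 2.5]                            # node processing speeds (assumed)


def makespan(assignment):
    """Fitness: completion time of the busiest node for a task-to-node assignment."""
    load = [0.0] * N_NODES
    for t, n in enumerate(assignment):
        load[n] += task_len[t] / node_speed[n]
    return max(load)


def pso_schedule(n_particles=30, iters=200):
    """Small discrete PSO-style search: particles are assignments, and each position is
    resampled toward the particle's own best and the global best."""
    particles = [[random.randrange(N_NODES) for _ in range(N_TASKS)] for _ in range(n_particles)]
    pbest = [p[:] for p in particles]
    gbest = min(particles, key=makespan)[:]
    for _ in range(iters):
        for i, p in enumerate(particles):
            for t in range(N_TASKS):
                r = random.random()
                if r < 0.4:
                    p[t] = pbest[i][t]                    # move toward the particle's own best
                elif r < 0.8:
                    p[t] = gbest[t]                       # move toward the global best
                else:
                    p[t] = random.randrange(N_NODES)      # random exploration
            if makespan(p) < makespan(pbest[i]):
                pbest[i] = p[:]
            if makespan(p) < makespan(gbest):
                gbest = p[:]
    return gbest


best = pso_schedule()
print("PSO makespan used to seed the agent's pre-training:", round(makespan(best), 2))
```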
Funding (improved DDQN path-planning paper): National Natural Science Foundation of China (No. 62063006); Guangxi Science and Technology Major Program (No. 2022AA05002); Key Laboratory of AI and Information Processing (Hechi University), Education Department of Guangxi Zhuang Autonomous Region (No. 2022GXZDSY003); Central Leading Local Science and Technology Development Fund Project of Wuzhou (No. 202201001).
Funding (LEO mega-constellation handover paper): National Natural Science Foundation of China (No. 62401597); Natural Science Foundation of Hunan Province, China (No. 2024JJ6469); Research Project of National University of Defense Technology, China (No. ZK22-02).
Funding (edge task scheduling paper): National Key Research and Development Program of China (No. 2021YFE0116900); National Natural Science Foundation of China (Nos. 42275157, 62002276, and 41975142); Major Program of the National Social Science Fund of China (No. 17ZDA092).