Funding: Co-supported by the National Natural Science Foundation of China (Nos. 62003267 and 61573285), the Natural Science Basic Research Plan in Shaanxi Province of China (No. 2020JQ-220), the Open Project of Science and Technology on Electronic Information Control Laboratory, China (No. JS20201100339), and the Open Project of Science and Technology on Electromagnetic Space Operations and Applications Laboratory, China (No. JS20210586512).
Abstract: As an advanced combat weapon, Unmanned Aerial Vehicles (UAVs) have been widely used in military operations. In this paper, we formulate the Autonomous Navigation Control (ANC) problem of UAVs as a Markov Decision Process (MDP) and propose a novel Deep Reinforcement Learning (DRL) method that allows UAVs to perform dynamic target-tracking tasks in large-scale unknown environments. To address the problem of limited training experience, the proposed Imaginary Filtered Hindsight Experience Replay (IFHER) generates successful episodes by plausibly imagining the target trajectory in each failed episode, thereby augmenting the experiences. The well-designed goal, episode, and quality filtering strategies ensure that only high-quality augmented experiences are stored, while the sampling filtering strategy of IFHER ensures that these stored experiences are fully learned according to their high priorities. By training in a complex environment constructed from the parameters of a real UAV, the proposed IFHER algorithm improves the convergence speed by 28.99% and the convergence result by 11.57% compared to the state-of-the-art Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm. Testing experiments carried out in environments of different complexities demonstrate the strong robustness and generalization ability of the IFHER agent. Moreover, the flight trajectory of the IFHER agent shows the superiority of the learned policy and the practical application value of the algorithm.
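The core mechanism behind HER-style augmentation as used above is goal relabeling: a failed episode is replayed as if a goal that was actually achieved had been the intended one. The following is a minimal generic sketch of "future"-strategy relabeling; the helper name and episode layout are assumptions for illustration, and IFHER's goal, episode, quality, and sampling filters are not reproduced here.

```python
import random

def relabel_failed_episode(episode, reward_fn, k=4):
    """Generic HER-style 'future' goal relabeling (hypothetical helper).

    episode:   list of (state, action, achieved_goal, desired_goal) tuples.
    reward_fn: reward_fn(achieved_goal, goal) -> reward under a relabeled goal.
    k:         number of future goals sampled per transition.
    """
    augmented = []
    for t, (s, a, ag, _dg) in enumerate(episode):
        # Sample up to k achieved goals from later steps of the same episode
        # and pretend each one was the desired goal all along.
        future_steps = random.sample(range(t, len(episode)),
                                     min(k, len(episode) - t))
        for f in future_steps:
            new_goal = episode[f][2]        # achieved goal at a later step
            r = reward_fn(ag, new_goal)     # recompute reward w.r.t. new goal
            augmented.append((s, a, new_goal, r))
    return augmented
```

In IFHER, an additional filtering stage would sit between this relabeling and the replay buffer, discarding low-quality imagined trajectories before storage.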
Abstract: It is suggested that hindsight becomes an obstacle to the objective investigation of an accident, and that proper countermeasures for preventing such an accident are impossible if we view the accident with hindsight. It is therefore important for organizational managers to prevent hindsight from occurring, so that it does not hinder objective and proper measures from being taken and thus lead to a serious accident. In this study, hindsight, a basic phenomenon potentially related to accidents, was examined in an attempt to gain basic insights into the prevention of accidents caused by this cognitive bias.
Funding: Supported by the Natural Science Foundation of Shaanxi Province, China (No. 2022JQ-661), the Project of the Science and Technology Development Plan in Hangzhou, China (No. 202202B38), and the Xidian-FIAS International Joint Research Center, China.
Abstract: Sparse rewards pose significant challenges in deep reinforcement learning, as agents struggle to learn from experiences with limited reward signals. Hindsight experience replay (HER) addresses this problem by creating "small goals" within a hierarchical decision model. However, HER does not consider the value of different episodes for agent learning. In this paper, we propose SPAHER, a framework for prioritizing hindsight experiences based on spatial position attention. SPAHER allows the agent to prioritize more valuable experiences in a manipulation task. It achieves this by calculating transition and trajectory spatial position functions to determine the value of each episode for experience replay. We evaluate SPAHER on eight robot manipulation tasks in the Fetch and Hand environments provided by OpenAI Gym. Simulation results show that our method improves the final mean success rate by an average of 3.63% compared to HER, especially in the challenging Hand environments. Notably, these improvements are achieved without any increase in computation time.
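Prioritizing replay by an episode-value score, as described above, can be sketched with generic proportional prioritized sampling. This is an illustrative assumption, not SPAHER's implementation: SPAHER derives its priorities from spatial-position attention functions, which are not reproduced here; only the sampling step is shown.

```python
import numpy as np

def sample_prioritized(priorities, batch_size, alpha=0.6, rng=None):
    """Sample indices with probability proportional to priority**alpha.

    Generic prioritized-replay sketch: alpha interpolates between uniform
    sampling (alpha=0) and fully greedy prioritization (alpha=1).
    """
    if rng is None:
        rng = np.random.default_rng()
    p = np.asarray(priorities, dtype=np.float64) ** alpha
    p /= p.sum()  # normalize into a probability distribution
    return rng.choice(len(p), size=batch_size, p=p)
```

In a full prioritized-replay setup, sampling this way is typically paired with importance-sampling weights to correct the bias it introduces into gradient estimates.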
Abstract: For USV path planning in special environments with many obstacles, large obstacles, or narrow passages, the rapidly-exploring random trees (RRT) algorithm suffers from a large sampling base, a low planning success rate, and tortuous planned paths. Based on the twin delayed deep deterministic policy gradient (TD3), a global path-planning algorithm (TD3-RRT) is proposed. The RRT algorithm is combined with deep reinforcement learning to build a USV path-search model: forward-looking detection senses the environment to adaptively adjust the extension step length, and the policy network outputs the path-search direction, solving the problem of blind extension in RRT. An improved hindsight experience replay strategy, using virtual-goal reselection and dual-replay-buffer sampling, strengthens the path-search capability in complex environments, and the reward function improves the quality of the planned path and accelerates the path search. Experimental results show that, in different environments, TD3-RRT effectively improves the planning success rate and optimizes the steering angle, path length, and planning time compared with current mainstream algorithms, demonstrating that the improved algorithm effectively accelerates path search, improves path quality, and adapts well to different environments.
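The adaptive-step extension idea above can be illustrated with a small sketch: the policy supplies an extension direction, and the step length grows with the forward-sensed clearance. The function name, the 2D geometry, and the clearance-to-step mapping are all assumptions for illustration, not the paper's actual formulation.

```python
import math

def adaptive_extend(node, direction, obstacle_dist,
                    base_step=1.0, max_step=3.0):
    """Extend an RRT node along a policy-chosen direction (hypothetical sketch).

    node:          (x, y) position of the tree node being extended.
    direction:     heading angle in radians (e.g. a TD3 policy output).
    obstacle_dist: forward-sensed clearance to the nearest obstacle.
    The step length scales with clearance, clipped to [base_step, max_step],
    so the tree takes long strides in open water and short ones near obstacles.
    """
    step = min(max_step, max(base_step, 0.5 * obstacle_dist))
    x, y = node
    return (x + step * math.cos(direction), y + step * math.sin(direction))
```

For example, with 10 units of clearance the step saturates at `max_step`, while with 0.5 units it falls back to `base_step`, mimicking cautious extension in narrow passages.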