Abstract: This study addresses the maneuver evasion problem for medium-to-long-range air-to-air missiles by proposing a KAN-λ-PPO-based evasion algorithm. The algorithm introduces Kolmogorov-Arnold Networks (KAN) to mitigate the catastrophic forgetting issue of Multilayer Perceptrons (MLP) in continual learning, while incorporating λ-return to resolve sparse-reward challenges in evasion scenarios. First, we model the evasion problem with λ-return and present the KAN-λ-PPO algorithm. Subsequently, we establish game environments based on the segmented ballistic characteristics of medium- and long-range missiles. During training, a joint reward function combining the miss distance and positional advantage is designed to train the agent. Experiments evaluate four dimensions: (1) performance comparison between KAN and MLP in value-function approximation; (2) catastrophic-forgetting mitigation of KAN-λ-PPO in dual-task scenarios; (3) continual-learning capability across multiple evasion scenarios; (4) quantitative analysis of agent strategy evolution and positional advantage. Empirical results demonstrate that KAN improves value-function approximation accuracy by an order of magnitude compared with traditional MLP architectures. In continual-learning tasks, the KAN-λ-PPO scheme exhibits significant knowledge retention, achieving performance improvements of 32.7% and 8.6% over MLP baselines in the Task 1→2 and Task 2→3 transitions, respectively. Furthermore, the learned maneuver strategies outperform High-G Barrel Roll (HGB) and S-maneuver tactics in securing positional advantage while accomplishing evasion.
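The λ-return that the abstract uses to densify sparse evasion rewards can be sketched with the standard backward recursion G_t = r_t + γ[(1−λ)V(s_{t+1}) + λG_{t+1}]; the function name and trajectory layout below are illustrative, not taken from the paper:

```python
def lambda_returns(rewards, next_values, gamma=0.99, lam=0.95):
    """Backward computation of lambda-returns over one finite trajectory.

    rewards:     r_0 .. r_{T-1}
    next_values: V(s_1) .. V(s_T), i.e. next_values[t] = V(s_{t+1});
                 the final entry is the bootstrap value V(s_T).
    lam=0 recovers the one-step TD target; lam=1 recovers the
    bootstrapped Monte Carlo return.
    """
    G = [0.0] * len(rewards)
    next_return = next_values[-1]  # G_T := V(s_T)
    for t in reversed(range(len(rewards))):
        # G_t = r_t + gamma * ((1 - lam) * V(s_{t+1}) + lam * G_{t+1})
        next_return = rewards[t] + gamma * (
            (1.0 - lam) * next_values[t] + lam * next_return
        )
        G[t] = next_return
    return G
```

In a PPO critic update, these returns would replace the sparse terminal miss-distance signal as the regression targets for the value network.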
Funding: funded by Hung Yen University of Technology and Education under grant number UTEHY.L.2026.05.
Abstract: Nowadays, Unmanned Aerial Vehicles (UAVs) are making increasingly important contributions to numerous applications that enhance human quality of life, such as sensing and data collection, computing, and communication. However, communication between UAVs still faces challenges due to highly dynamic topology, volatile wireless links, and strict energy budgets. In this work, we introduce an improved communication scheme based on Proximal Policy Optimization (PPO). Our solution casts hop-by-hop relay selection as a Markov decision process and develops a decentralized PPO framework in actor–critic form. A key novelty is the design of the reward function, which jointly considers the delivery ratio, end-to-end delay, and energy efficiency, enabling flexible prioritization in dynamic environments. Simulation results across swarms of 20–70 UAVs show that the proposed framework improves the delivery ratio by 5% over a Deep Q-Network baseline (reaching ≈80% at 70 nodes), reduces latency by about 2–3 ms in medium-to-dense settings (from ∼43 to 35–36 ms), and attains comparable or slightly lower total energy consumption (typically 0.5%–2% lower). The results indicate that the proposed communication scheme, being adaptive and scalable in learning-based UAV scenarios, paves the way for real-world UAV deployments.
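A joint reward over delivery ratio, delay, and energy, as described in the abstract, can be sketched as a weighted sum; the weights, normalization scales, and function name are assumptions for illustration, not the paper's actual parameters:

```python
def relay_reward(delivered, delay_ms, energy_j,
                 w_d=1.0, w_l=0.5, w_e=0.5,
                 delay_scale=50.0, energy_scale=2.0):
    """Composite per-hop reward for relay selection.

    +w_d for a successful delivery (-w_d on failure), minus delay and
    energy penalties normalized into [0, 1] and capped. Raising w_l
    prioritizes low-latency routes; raising w_e favors energy saving --
    this is the 'flexible prioritization' knob.
    """
    delivery_term = w_d if delivered else -w_d
    delay_term = -w_l * min(delay_ms / delay_scale, 1.0)
    energy_term = -w_e * min(energy_j / energy_scale, 1.0)
    return delivery_term + delay_term + energy_term
```

In the actor–critic loop, each agent would receive this scalar after forwarding a packet one hop, so the policy gradient trades the three objectives off directly.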
Abstract: The low level of automation and intelligence of coal-mine roadway support equipment constrains roadway formation efficiency and is a key cause of the "mining–excavation imbalance." To address the low automation and poor support efficiency of such equipment, this work proposes a deep-reinforcement-learning-based path-planning method for the manipulator of a drilling-and-bolting robot that integrates a boom-type roadheader with a multi-degree-of-freedom manipulator. A coal-mine roadway environment is constructed virtually; collision-detection models are built between the manipulator and the robot body, the coal wall, and the support steel belt; and hierarchical bounding boxes are used for collision detection in the virtual environment, yielding an obstacle-avoidance strategy under the confined roadway boundary. Improvements to the Proximal Policy Optimization (PPO) algorithm are proposed from several angles. Considering that the state-space input of the multi-degree-of-freedom manipulator has variable length, a Long Short-Term Memory (LSTM) network is introduced to process the environment-state input, improving the algorithm's adaptability to the environment. Under sparse rewards, an Intrinsic Curiosity Module (ICM) is introduced, granting intrinsic rewards that encourage the agent to explore the environment more thoroughly. An agent is built on this reward-and-penalty mechanism, with its state and action spaces defined according to the motion characteristics of the drilling-and-bolting robot. The two algorithms are used to train the agent in the same scenario, and metrics including cumulative reward, episode steps, actor-network loss, and critic-network loss are compared, followed by simulation ablation experiments. The results show that, while the original PPO algorithm cannot complete the task, the improved algorithm shortens the path length by 3.98% and the elapsed time by 25.6% compared with the PPO-ICM algorithm, which can also complete the task. To further verify robustness, multiple groups of experiments were designed; the improved PPO algorithm completed the path-planning task in all of them, with the distance error between the path endpoint and the target position within 3.88 cm and the angle error between the bolt and the vertical direction within 3°, effectively completing path planning and raising the automation level of the coal-mine roadway support system. These results verify the feasibility and effectiveness of the proposed method for path planning of the robot's multi-degree-of-freedom manipulator under the varying bolt-hole positions encountered in underground roadway support.
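The ICM idea in the abstract above — rewarding the agent in proportion to how badly a forward model predicts the next state — can be sketched minimally as follows. The per-dimension linear forward model here is a deliberately simple stand-in for the neural forward model of a full ICM, and all names and hyperparameters are illustrative:

```python
class IntrinsicCuriosity:
    """Minimal ICM-style intrinsic reward.

    A forward model predicts the next state features from the current
    ones; the squared prediction error becomes the curiosity bonus, so
    poorly modelled (novel) transitions are rewarded. The model is then
    updated toward the observed transition by one SGD step, so repeated
    transitions become predictable and their bonus decays.
    """

    def __init__(self, dim, lr=0.1, eta=0.5):
        self.w = [0.0] * dim  # weights of the linear forward model
        self.lr = lr          # SGD step size
        self.eta = eta        # intrinsic-reward scale

    def reward(self, state, next_state):
        pred = [w * s for w, s in zip(self.w, state)]
        err = [n - p for n, p in zip(next_state, pred)]
        # one SGD step of the forward model toward the observed transition
        self.w = [w + self.lr * e * s for w, e, s in zip(self.w, err, state)]
        return self.eta * sum(e * e for e in err)  # curiosity bonus
```

During training, this bonus would be added to the sparse extrinsic reward at every step, which is how the ICM keeps the gradient signal alive between rare task-completion rewards.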