To ensure on-time delivery of machine tools produced in a mixed-model assembly shop, a scheduling optimization method for machine-tool mixed-model assembly lines based on improved deep multi-agent reinforcement learning is proposed, addressing the low solution quality and slow training of minimum-tardiness production scheduling models. A mixed-model assembly line scheduling optimization model with minimum tardiness as its objective is constructed, and agents based on a decentralized-execution double deep Q-network (DDQN) are applied to learn the relationship between production information and the scheduling objective. The framework adopts a centralized-training, decentralized-execution strategy with parameter sharing, which handles the non-stationarity problem in multi-agent reinforcement learning. On this basis, a recurrent neural network manages variable-length state and action representations, enabling the agents to handle problems of arbitrary scale. A global/local reward function is also introduced to address reward sparsity during training. The optimal parameter combination is determined through ablation experiments. Numerical results show that, compared with standard benchmark schemes, the proposed algorithm improves the average total number of tardy jobs by 24.1% to 32.3% and speeds up training by 8.3%.
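The two mechanisms named above, the double-DQN bootstrap and parameter sharing across agents, can be illustrated compactly. Below is a minimal PyTorch sketch, not the paper's implementation; the state dimension, action count, and layer sizes are invented for the example:

```python
import torch
import torch.nn as nn

# One shared Q-network: under parameter sharing, every agent queries the
# same weights, a common way to ease non-stationarity under centralized
# training with decentralized execution.
STATE_DIM, N_ACTIONS = 8, 4  # hypothetical sizes
online = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                       nn.Linear(64, N_ACTIONS))
target = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                       nn.Linear(64, N_ACTIONS))
target.load_state_dict(online.state_dict())

def ddqn_target(reward, next_state, done, gamma=0.99):
    """Double-DQN target: the online net selects the greedy action,
    the target net evaluates it, reducing Q-value overestimation."""
    with torch.no_grad():
        a_star = online(next_state).argmax(dim=1, keepdim=True)   # select
        q_next = target(next_state).gather(1, a_star).squeeze(1)  # evaluate
        return reward + gamma * (1.0 - done) * q_next

# A batch of transitions pooled from several agents trains the shared net.
print(ddqn_target(torch.zeros(5), torch.randn(5, STATE_DIM), torch.zeros(5)))
```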
During automatic berthing, a ship is affected by wind, waves, currents, and the bank-wall effect, so an accurate path-planning method is needed to prevent berthing failure. An automatic berthing path-planning method based on the double deep Q-network (DDQN) algorithm is designed for the berthing process of a fully actuated ship. First, a three-degree-of-freedom ship model is established; the reward function is then improved by treating distance, heading, thrust, time, and collision as rewards or penalties. DDQN is introduced to learn the action-reward model, and the learned policy is used to steer the ship. By pursuing higher reward values, the ship finds the optimal berthing path on its own. Experimental results show that the ship completes berthing with reduced time and thrust at different current speeds, and that at the same current speed the DDQN algorithm reduces berthing thrust by 241.940 N, 234.614 N, and 80.202 N compared with Q-learning, SARSA (state action reward state action), and the deep Q-network (DQN), respectively, with a berthing time of only 252.485 s.
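The reward structure described, with distance, heading, thrust, time, and collision terms, is easy to write down. The sketch below is illustrative only; the weights and the collision penalty are hypothetical, since the abstract does not give the paper's coefficients:

```python
# Hypothetical shaping weights; the paper's actual values are not given.
W_DIST, W_HEAD, W_THRUST, W_TIME = 1.0, 0.5, 0.01, 0.1
COLLISION_PENALTY = -100.0

def berthing_reward(dist_to_berth, prev_dist, heading_err, thrust, collided):
    """Composite berthing reward: progress toward the berth is rewarded;
    heading deviation, thrust (energy), elapsed time, and collisions are
    penalized."""
    if collided:
        return COLLISION_PENALTY
    r = W_DIST * (prev_dist - dist_to_berth)  # reward progress to the berth
    r -= W_HEAD * abs(heading_err)            # penalize heading deviation
    r -= W_THRUST * abs(thrust)               # penalize thrust/energy use
    r -= W_TIME                               # constant per-step time cost
    return r

print(berthing_reward(48.0, 50.0, heading_err=0.1, thrust=200.0, collided=False))
```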
To address the poor detection stability of active sonar with a traditional fixed transmission strategy, caused by insufficient adaptation to the underwater acoustic channel, this paper proposes a joint optimization method for the active-sonar transmit waveform and source level based on multi-agent reinforcement learning. A multi-agent cooperative learning approach decouples waveform optimization and source-level optimization into separate agent tasks. A reward-shaping method is introduced to suppress reward-signal noise caused by the multi-peaked channel spectrum, improving the agents' search capability and avoiding sub-pulse frequency conflicts. In addition, a double deep Q-network (DDQN) is used to reduce the agents' Q-value estimation bias and improve decision stability. Numerical validation was performed in a typical deep-sea channel scenario reconstructed from sound-speed gradients measured in the South China Sea. The results show that the channel adaptation and the accuracy of echo signal-to-noise-ratio regulation achieved by the proposed algorithm are superior to those of the baseline algorithms, offering a feasible technical route toward environment-adaptive intelligent active sonar systems.
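As an illustration of the two shaping ideas mentioned, smoothing a noisy reward derived from a multi-peaked spectrum and penalizing sub-pulse frequency conflicts, here is a small NumPy sketch; the moving-average filter and the penalty weight are stand-ins, since the abstract does not specify the actual shaping scheme:

```python
import numpy as np

def shaped_reward(channel_gain_db, chosen_bins, window=5):
    """Shaped reward: smooth the per-bin channel gain to suppress the
    reward noise a multi-peaked channel spectrum induces, then penalize
    frequency-bin conflicts between sub-pulse agents."""
    kernel = np.ones(window) / window
    smooth = np.convolve(channel_gain_db, kernel, mode="same")
    base = smooth[np.asarray(chosen_bins)].mean()
    conflicts = len(chosen_bins) - len(set(chosen_bins))
    return base - 5.0 * conflicts  # hypothetical conflict penalty weight

# Example: 64 frequency bins, three sub-pulse agents picking bins.
gain = np.random.default_rng(0).normal(-60.0, 3.0, 64)
print(shaped_reward(gain, [10, 25, 25]))  # one conflict -> penalized
```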
The air combat situation in modern warfare is complex and changeable, so exploring a fast and effective decision-making method is important. This paper studies the cooperative engagement problem of multiple UAVs and proposes a multi-UAV cooperative beyond-visual-range (BVR) air combat decision-making algorithm based on long short-term memory (LSTM) and multi-agent deep deterministic policy gradient (MADDPG). First, UAV motion, radar detection zone, and missile attack zone models are established. Then the cooperative BVR air combat decision-making algorithm is proposed: a centralized-training, distributed-execution LSTM-MADDPG architecture and a state space for the cooperative air combat system are designed to handle synchronous decision-making among multiple UAVs; a learning-rate decay mechanism is designed to improve convergence speed and stability; the network structure is improved with an LSTM to strengthen the extraction of tactical features; and a decay-factor-based reward function mechanism enhances the UAVs' cooperative engagement capability. Simulation results show that the proposed algorithm gives the UAVs cooperative offensive and defensive capability, with good stability and convergence.
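Two of the design points, the LSTM feature extractor in the policy network and the learning-rate decay mechanism, can be sketched in PyTorch as follows. Observation and action dimensions and the decay rate are invented for the example; this is not the paper's architecture:

```python
import torch
import torch.nn as nn

class LSTMActor(nn.Module):
    """Per-UAV actor: an LSTM extracts temporal (tactical) features from
    the observation sequence before the policy head outputs actions."""
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, act_dim), nn.Tanh())

    def forward(self, obs_seq, hc=None):
        out, hc = self.lstm(obs_seq, hc)
        return self.head(out[:, -1]), hc  # act on the latest hidden state

actor = LSTMActor(obs_dim=12, act_dim=3)  # hypothetical dimensions
opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
# Learning-rate decay: call sched.step() once per training episode.
sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.995)

act, _ = actor(torch.randn(2, 10, 12))  # batch of 2, sequence length 10
print(act.shape)  # torch.Size([2, 3])
```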
A network selection optimization algorithm based on the Markov decision process (MDP) is proposed so that mobile terminals can always connect to the best wireless network in a heterogeneous network environment. Considering the different types of service requirements, the MDP model and its reward function are constructed based on the quality of service (QoS) attribute parameters of mobile users, and the network attribute weights are calculated with the analytic hierarchy process (AHP). The network handoff decision condition is designed according to the different types of user services and the time-varying characteristics of the network, and the MDP model is solved with a genetic algorithm combined with simulated annealing (GA-SA); users can thus seamlessly switch to the network with the best long-term expected reward value. Simulation results show that the proposed algorithm has good convergence performance and can guarantee that users with different service types obtain satisfactory expected total reward values with a low number of network handoffs.
Funding: partially supported by the National Natural Science Foundation of China (61661025, 61661026) and the Hundred Youth Talents Training Program of Lanzhou Jiaotong University (152022).
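The AHP step referred to above reduces to an eigenvector computation. Here is a minimal worked example with a hypothetical 3x3 pairwise-comparison matrix over three QoS attributes; the paper's actual attributes and judgments are not given in the abstract:

```python
import numpy as np

# Hypothetical pairwise comparisons over three QoS attributes
# (say bandwidth, delay, cost) on Saaty's 1-9 scale.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# AHP weights = normalized principal eigenvector of A.
vals, vecs = np.linalg.eig(A)
k = np.argmax(np.real(vals))
w = np.abs(np.real(vecs[:, k]))
w /= w.sum()
print("attribute weights:", np.round(w, 3))

# Consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI.
n = A.shape[0]
CI = (np.real(vals[k]) - n) / (n - 1)
CR = CI / 0.58  # Saaty's random index RI for n = 3
print("consistency ratio:", round(CR, 3))  # judgments acceptable if CR < 0.1
```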
This paper investigates a networked evolutionary model based on the snowdrift game with a reward-and-penalty strategy. Firstly, using the semi-tensor product of matrices, the mathematical model of the networked evolutionary game is built. Secondly, combined with the matrix expression of logic, the mathematical model is expressed as a dynamic logical system and then converted into its algebraic form of evolutionary dynamics. Thirdly, the dynamic evolution process is analyzed and the final level of cooperation is discussed. Finally, the effects of changes in the reward and penalty factors on the level of cooperation are studied separately, and the conclusions are verified by examples.
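The semi-tensor product (STP) that underlies this algebraic formulation is simple to implement. A sketch follows; the 2x4 structure matrix below is a made-up two-player logical update rule for illustration, not the paper's snowdrift payoff:

```python
import numpy as np
from math import lcm  # Python 3.9+

def stp(A, B):
    """Semi-tensor product A ⋉ B: with t = lcm(cols(A), rows(B)),
    A ⋉ B = (A ⊗ I_{t/cols(A)}) @ (B ⊗ I_{t/rows(B)}); it reduces to the
    ordinary matrix product when the dimensions already match."""
    t = lcm(A.shape[1], B.shape[0])
    return np.kron(A, np.eye(t // A.shape[1])) @ np.kron(B, np.eye(t // B.shape[0]))

# Logical values as canonical vectors: cooperate = [1,0]^T, defect = [0,1]^T.
C = np.array([[1.0], [0.0]])
D = np.array([[0.0], [1.0]])
joint = stp(C, D)  # 4x1 joint-strategy vector (Kronecker product here)
# A 2x4 structure matrix encoding a hypothetical two-player logical rule.
M = np.array([[1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
print(stp(M, joint).ravel())  # next logical state as a canonical vector
```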