Funding: Supported by the National Natural Science Foundation of China (Nos. 61841107 and 62061024), the Gansu Natural Science Foundation (Nos. 22JR5RA274 and 23YFGA0062), and the Gansu Innovation Foundation (No. 2022A-215).
Abstract: In this study, a solution based on a deep Q-network (DQN) is proposed to address the relay selection problem in cooperative non-orthogonal multiple access (NOMA) systems. DQN is particularly effective for problems set in dynamic and complex communication environments. By formulating the relay selection problem as a Markov decision process (MDP), the DQN algorithm employs deep neural networks (DNNs) to learn and make decisions through real-time interactions with the communication environment, aiming to minimize the system's outage probability. During the learning process, the DQN algorithm progressively acquires channel state information (CSI) between node pairs, reducing the system's outage probability until it reaches a stable level. Simulation results show that the proposed method reduces the outage probability by 82% compared with the two-way relay selection scheme (Two-Way) when the signal-to-noise ratio (SNR) is 30 dB. This study demonstrates the applicability and advantages of the DQN algorithm in cooperative NOMA systems, providing a novel approach to real-time relay selection in dynamic communication environments.
Funding: Supported by the National Ministries and Research Funds (No. 3020020221111).
Abstract: A gait control method for a biped robot based on the deep Q-network (DQN) algorithm is proposed to enhance the stability of walking on uneven ground. This control strategy is an intelligent learning method of posture adjustment. The robot is taken as an agent and trained to walk steadily on an uneven surface with obstacles, using a simple reward function based on forward progress. The reward-punishment (RP) mechanism of the DQN algorithm is established after obtaining the offline gait generated in advance by foot trajectory planning. Instead of implementing a complex dynamic model, the proposed method enables the biped robot to learn to adjust its posture on uneven ground and ensures walking stability. The performance and effectiveness of the proposed algorithm were validated in the V-REP simulation environment. The results demonstrate that the biped robot's lateral tilt angle stays below 3° after implementing the proposed method and that walking stability is markedly improved.
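As a rough illustration of the reward-punishment (RP) idea described above (reward forward progress, punish lateral tilt, terminate on a fall), one might shape the per-step reward as below. The thresholds and weights are invented for illustration and are not taken from the paper; only the 3° stability bound echoes the reported result.

```python
TILT_LIMIT_DEG = 3.0   # stability bound reported in the abstract
FALL_LIMIT_DEG = 15.0  # hypothetical tilt beyond which the robot is deemed fallen

def gait_reward(forward_progress_m, lateral_tilt_deg):
    """Hypothetical RP shaping for the gait agent.

    Returns (reward, episode_done): forward progress is rewarded,
    lateral tilt is punished quadratically relative to the 3-degree
    bound, and a fall ends the episode with a large punishment.
    """
    if abs(lateral_tilt_deg) > FALL_LIMIT_DEG:
        return -100.0, True                      # punishment: robot fell
    r = 10.0 * forward_progress_m                # reward forward motion
    r -= (abs(lateral_tilt_deg) / TILT_LIMIT_DEG) ** 2
    return r, False
```

With this shaping, a step that advances 5 cm with zero tilt earns a positive reward, while tilting exactly to the 3° bound costs one reward unit, so the agent is steered toward upright forward walking.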
Abstract: To address the poor convergence stability of the deep Q-network (DQN) algorithm caused by over-estimation, the concept of the N-th-order temporal-difference (TD) error is proposed on the basis of the traditional TD method, and a double-network DQN algorithm based on the second-order TD error is designed. A value-function update rule based on the second-order TD error is constructed, and a double-network model is built on top of the DQN algorithm: two structurally identical value-function networks represent the value functions of two successive rounds and update their parameters cooperatively, improving the stability of value-function estimation in DQN. Experimental results on the OpenAI Gym platform show that, on the Mountain Car and Cart Pole tasks, the proposed algorithm achieves better convergence stability than the classical DQN algorithm.
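The paper's exact second-order update is not reproduced here. The tabular sketch below shows one plausible reading: compute the first-order TD error on each of the two successive value functions, take their difference as a second-order TD error, and use it to damp the current update. The blending coefficient of 0.5 is a hypothetical choice, not the paper's formula.

```python
import numpy as np

GAMMA, ALPHA = 0.9, 0.1  # illustrative discount and learning rate

def td1(Q, s, a, r, s2):
    # first-order TD error: delta_1 = r + gamma * max_a' Q(s', a') - Q(s, a)
    return r + GAMMA * np.max(Q[s2]) - Q[s, a]

def second_order_update(Q_prev, Q_curr, s, a, r, s2):
    """Hypothetical second-order TD update over two successive value
    functions (tables standing in for the two homomorphic networks).

    The second-order TD error is the difference between the TD errors
    of the current and previous rounds; subtracting half of it blends
    the two errors, damping over-estimation in the current update.
    """
    d_curr = td1(Q_curr, s, a, r, s2)
    d_prev = td1(Q_prev, s, a, r, s2)
    d2 = d_curr - d_prev                          # second-order TD error
    Q_curr[s, a] += ALPHA * (d_curr - 0.5 * d2)   # hypothetical blend
    return d2
```

Note that `d_curr - 0.5 * d2` equals the average of the two rounds' TD errors, so when the two networks agree the update reduces to ordinary Q-learning.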
Abstract: In industrial applications, the dynamic and variable nature of streaming data makes it difficult for reinforcement learning algorithms to balance model convergence against knowledge forgetting during training. Considering that content requests on the industrial floor are highly correlated with the current production task, an adaptive caching strategy based on an integrated deep Q-network (IDQN) algorithm is proposed. In the offline phase, the algorithm trains and saves multiple historical task models using data from different historical tasks. In the online phase, whenever a change in the task features of the real-time data stream is detected, the network model is retrained: if the features of the real-time stream belong to a historical task, the corresponding historical task model is loaded into the deep Q-network (DQN) for training; otherwise, a model is trained directly on the real-time stream and saved as a new task model. Simulation results show that, compared with reference algorithms, IDQN effectively reduces model convergence time and improves caching efficiency when content-request popularity changes dynamically.
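As a minimal sketch of the model-library idea above (one saved model per historical task, reloaded when the detected task features match), the class below keys stored models by a content-popularity signature and matches by cosine similarity. The signature, threshold, and matching rule are illustrative assumptions, not the paper's design.

```python
import numpy as np

class ModelLibrary:
    """Hypothetical sketch of the IDQN model store: keep one trained
    model per historical task; on a detected task change, reload a
    matching historical model instead of training from scratch."""

    def __init__(self, match_threshold=0.9):
        self.models = {}                  # task signature -> model parameters
        self.threshold = match_threshold  # illustrative similarity cut-off

    @staticmethod
    def signature(request_counts):
        # normalised content-popularity vector used as the task feature
        v = np.asarray(request_counts, dtype=float)
        return v / v.sum()

    def lookup(self, sig):
        # cosine similarity against every stored task signature
        best, best_sim = None, 0.0
        for key, params in self.models.items():
            k = np.asarray(key)
            sim = float(sig @ k / (np.linalg.norm(sig) * np.linalg.norm(k)))
            if sim > best_sim:
                best, best_sim = params, sim
        # below the threshold the stream is treated as a new task
        return best if best_sim >= self.threshold else None

    def store(self, sig, params):
        self.models[tuple(sig)] = params
```

In the online phase this maps directly onto the strategy in the abstract: a `lookup` hit warm-starts the DQN from a historical model, while a miss (`None`) triggers training on the live stream followed by `store` of the new task model.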