In response to the complex and multidimensional nature of converged traffic on heterogeneous links in tactical communication networks, which makes it difficult to guarantee the quality of service (QoS) requirements of critical services, a frame generation algorithm for differentiated services (DS-FG) is proposed. DS-FG deploys an adaptive frame generation algorithm based on deep reinforcement learning (DRL-FG) for time-sensitive service, while deploying a highly efficient frame generation (HEFG) algorithm for non-time-sensitive service. DRL-FG constructs a reward function by combining the queue status information of time-sensitive service and uses deep deterministic policy gradient (DDPG) to train a decision model for adaptive frame generation (AFG) algorithm thresholds. Furthermore, Gaussian noise sampling and prioritized experience replay strategies are employed to enhance model training efficiency and performance, achieving optimal matching between time-sensitive service QoS requirements and frame generation thresholds. Experimental results demonstrate that DS-FG outperforms traditional algorithms, achieving up to a 13% improvement in throughput and over a 19.7% reduction in average queueing delay for time-sensitive service.
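The abstract credits prioritized experience replay with improving DDPG training efficiency for the AFG threshold model. As a minimal sketch of that component only (the class name, parameters, and proportional-sampling scheme below are illustrative assumptions, not the paper's implementation), a priority-weighted replay buffer might look like:

```python
import random

class PrioritizedReplayBuffer:
    """Hypothetical proportional prioritized experience replay sketch.

    Transitions with larger TD error receive higher priority and are
    sampled more often, the mechanism the paper cites for improving
    DDPG training efficiency and performance.
    """
    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha            # how strongly priority skews sampling
        self.data = []                # stored transitions
        self.priorities = []          # one priority per transition
        self.pos = 0                  # next overwrite position when full

    def add(self, transition, td_error=1.0):
        # Small epsilon keeps every transition sampleable.
        p = (abs(td_error) + 1e-6) ** self.alpha
        if len(self.data) < self.capacity:
            self.data.append(transition)
            self.priorities.append(p)
        else:
            # Ring-buffer overwrite of the oldest transition.
            self.data[self.pos] = transition
            self.priorities[self.pos] = p
            self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        # Proportional sampling: P(i) ~ priority_i.
        idxs = random.choices(range(len(self.data)),
                              weights=self.priorities, k=batch_size)
        return [self.data[i] for i in idxs], idxs

    def update_priorities(self, idxs, td_errors):
        # Refresh priorities after the critic recomputes TD errors.
        for i, e in zip(idxs, td_errors):
            self.priorities[i] = (abs(e) + 1e-6) ** self.alpha
```

In a DDPG loop, `add` would store (state, threshold action, reward, next state) tuples built from the time-sensitive queue status, and `update_priorities` would be called after each critic update.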
Funding: supported by the National Natural Science Foundation of China under Grant 61931004, and the Key Laboratory of Intelligent Support Technology for Complex Environments, Ministry of Education, under Grant B2202401.