Funding: Supported by the National Natural Science Foundation of China (Nos. U23A20300 and 62207033) and the Fundamental Research Funds for the Central Universities of South-Central Minzu University (No. CSZ23013).
Abstract: Reinforcement Learning (RL) is a fundamental learning paradigm in artificial intelligence that learns decision-making policies through interaction with an environment. However, traditional RL methods struggle with large-scale or continuous state spaces because of the curse of dimensionality. Although Deep Reinforcement Learning (DRL) can handle complex environments, its black-box nature limits transparency and interpretability, which hinders its applicability. Moreover, centralized data collection and processing pose privacy and security risks. Federated learning offers a distributed approach that preserves privacy while jointly training models, but existing federated reinforcement learning approaches have not adequately addressed communication and computation overhead. To address these challenges, this study proposes a tensor train decomposition-based federated reinforcement learning method that improves efficiency and provides interpretability. By modeling state-action values as a tensor and applying tensor decomposition for dimensionality reduction, the method substantially reduces model parameters and communication overhead while maintaining strong interpretability and accelerating convergence. Experimental results validate the advantages of the proposed algorithm in terms of efficiency and reliability.
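To illustrate the core idea described in the abstract, the sketch below shows how a tabular Q-function over several discretized state variables can be treated as a multi-dimensional tensor and compressed with a tensor-train (TT-SVD style) decomposition. This is a minimal illustration, not the paper's implementation: the tensor shape, the TT rank, and the function names (`tt_decompose`, `tt_reconstruct`) are assumptions chosen for the example. In a federated setting, clients would exchange only the small TT cores rather than the full Q-table, which is the kind of communication saving the abstract refers to.

```python
# Minimal sketch (assumed, not the paper's code): compress a Q-tensor with a
# TT-SVD style tensor-train decomposition and compare parameter counts.
import numpy as np

def tt_decompose(tensor, max_rank):
    """Decompose `tensor` into TT cores of shape (r_prev, n_k, r_k)."""
    dims = tensor.shape
    cores, r_prev = [], 1
    unfolding = tensor.reshape(dims[0], -1)
    for k in range(len(dims) - 1):
        unfolding = unfolding.reshape(r_prev * dims[k], -1)
        U, S, Vt = np.linalg.svd(unfolding, full_matrices=False)
        r_k = min(max_rank, S.size)              # truncate to the TT rank
        cores.append(U[:, :r_k].reshape(r_prev, dims[k], r_k))
        unfolding = S[:r_k, None] * Vt[:r_k]     # carry the remainder forward
        r_prev = r_k
    cores.append(unfolding.reshape(r_prev, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into the full tensor (to check the error)."""
    full = cores[0]
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=([-1], [0]))
    return full.squeeze(axis=(0, -1))

# Illustrative shape: 4 discretized state variables x 4 actions.
# Random data is used only to exercise the mechanics; structured Q-tensors
# typically compress with much lower error at the same rank.
q_table = np.random.rand(10, 10, 10, 10, 4)
cores = tt_decompose(q_table, max_rank=8)
n_full = q_table.size
n_tt = sum(c.size for c in cores)
err = np.linalg.norm(q_table - tt_reconstruct(cores)) / np.linalg.norm(q_table)
print(f"parameters: {n_full} -> {n_tt}, relative error {err:.3f}")
```

Under these assumptions, each client would fit or update the TT cores locally and transmit only those cores to the server for aggregation, so the communication cost scales with the (small) TT ranks rather than with the full size of the state-action space.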