Journal Articles
3 articles found.
1. RDHNet: addressing rotational and permutational symmetries in continuous multi-agent systems
Authors: Dongzi WANG, Lilan HUANG, Muning WEN, Yuanxi PENG, Minglong LI, Teng LI. Frontiers of Computer Science, 2025, Issue 11, pp. 39-50 (12 pages).
Symmetry is prevalent in multi-agent systems. The presence of symmetry, coupled with the misuse of absolute coordinate systems, often leads to a large amount of redundant representation space, significantly enlarging the search space for learning policies and reducing learning efficiency. Effectively exploiting symmetry and extracting symmetry-invariant representations can substantially enhance the learning efficiency and overall performance of multi-agent systems by compressing the model's hypothesis space and improving sample efficiency. Rotational symmetry in multi-agent reinforcement learning has received little attention in previous research and is the primary focus of this paper. To address it, we propose a rotation-invariant network architecture for continuous action space tasks. This architecture uses relative coordinates between agents, eliminating dependence on absolute coordinate systems, and employs a hypernetwork to enhance the model's fitting capability, enabling it to model MDPs with more complex dynamics. It can be used both for predicting actions and for evaluating action values/utilities. Experimental results on benchmark tasks validate the impact of rotational symmetry on multi-agent decision systems and demonstrate the effectiveness of our method. The code of RDHNet is available at github.com/wang88256187/RDHNet.
Keywords: multi-agent reinforcement learning, symmetry
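The abstract's two main ingredients, ego-centric relative coordinates and a hypernetwork-generated action head, can be illustrated in a few lines. The PyTorch sketch below is not the authors' RDHNet implementation; the function names, tensor shapes, and the choice of a 2-D planar setting with yaw headings are assumptions made purely for illustration.

```python
# Illustrative sketch only: rotation-invariant relative features plus a hypernetwork
# head, under assumed 2-D positions and yaw headings (not the released RDHNet code).
import torch
import torch.nn as nn

def rotation_invariant_features(pos, heading):
    """pos: (n_agents, 2) absolute xy; heading: (n_agents,) absolute yaw.
    Returns (n_agents, n_agents, 2): each neighbour's offset expressed in the ego
    agent's local frame, so rotating the whole scene leaves the features unchanged."""
    rel = pos.unsqueeze(0) - pos.unsqueeze(1)              # rel[i, m] = pos[m] - pos[i]
    cos, sin = torch.cos(-heading), torch.sin(-heading)    # rotate by -yaw of the ego agent
    rot = torch.stack([torch.stack([cos, -sin], dim=-1),
                       torch.stack([sin,  cos], dim=-1)], dim=-2)   # (n_agents, 2, 2)
    return torch.einsum('nij,nmj->nmi', rot, rel)

class HyperPolicyHead(nn.Module):
    """A hypernetwork generates the weights of a per-agent linear action head."""
    def __init__(self, ctx_dim, feat_dim, act_dim, hidden=64):
        super().__init__()
        self.feat_dim, self.act_dim = feat_dim, act_dim
        self.hyper = nn.Sequential(
            nn.Linear(ctx_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, feat_dim * act_dim + act_dim))

    def forward(self, context, feat):
        # context: (n_agents, ctx_dim), e.g. pooled team features; feat: (n_agents, feat_dim)
        params = self.hyper(context)
        W = params[:, :self.feat_dim * self.act_dim].view(-1, self.act_dim, self.feat_dim)
        b = params[:, self.feat_dim * self.act_dim:]
        return torch.einsum('naf,nf->na', W, feat) + b     # continuous action outputs
```

In this sketch the ego-frame offsets would be encoded (e.g., flattened or pooled) into per-agent features before being passed to the hypernetwork head; because the features depend only on relative geometry, a global rotation of all agents leaves the policy output unchanged.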
2. Offline Pre-trained Multi-agent Decision Transformer (Cited by: 4)
Authors: Linghui Meng, Muning Wen, Chenyang Le, Xiyun Li, Dengpeng Xing, Weinan Zhang, Ying Wen, Haifeng Zhang, Jun Wang, Yaodong Yang, Bo Xu. Machine Intelligence Research (EI, CSCD), 2023, Issue 2, pp. 233-248 (16 pages).
Offline reinforcement learning leverages previously collected offline datasets to learn optimal policies without the need to access the real environment. Such a paradigm is also desirable for multi-agent reinforcement learning (MARL) tasks, given the combinatorially increasing interactions among agents and with the environment. However, in MARL, the paradigm of offline pre-training with online fine-tuning has not been studied, nor are datasets or benchmarks for offline MARL research available. In this paper, we facilitate such research by providing large-scale datasets and using them to examine the use of the decision transformer in the context of MARL. We investigate the generalization of MARL offline pre-training in three aspects: 1) between single agents and multiple agents, 2) from offline pre-training to online fine-tuning, and 3) to multiple downstream tasks with few-shot and zero-shot capabilities. We start by introducing the first offline MARL dataset with diverse quality levels based on the StarCraft II environment, and then propose the novel architecture of the multi-agent decision transformer (MADT) for effective offline learning. MADT leverages the Transformer's sequence-modelling ability and integrates it seamlessly with both offline and online MARL tasks. A significant benefit of MADT is that it learns generalizable policies that can transfer between different types of agents under different task scenarios. On the StarCraft II offline dataset, MADT outperforms state-of-the-art offline reinforcement learning (RL) baselines, including BCQ and CQL. When applied to online tasks, the pre-trained MADT significantly improves sample efficiency and achieves strong performance in both few-shot and zero-shot cases. To the best of our knowledge, this is the first work that studies and demonstrates the effectiveness of offline pre-trained models in terms of sample efficiency and generalizability enhancements for MARL.
Keywords: pre-training model, multi-agent reinforcement learning (MARL), decision making, Transformer, offline reinforcement learning
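To make the offline sequence-modelling recipe concrete, here is a minimal PyTorch sketch of a decision-transformer-style model trained to predict actions from (return-to-go, observation, previous action) tokens. It is not the released MADT code; the layer sizes, the discrete-action head, and the token layout are assumptions for illustration only.

```python
# Illustrative decision-transformer-style model for offline action prediction
# (assumed token layout and sizes; not the authors' MADT implementation).
import torch
import torch.nn as nn

class TinyMADT(nn.Module):
    """Causal Transformer over (return-to-go, observation, previous action) tokens."""
    def __init__(self, obs_dim, act_dim, d_model=128, n_layers=2, n_heads=4, max_len=64):
        super().__init__()
        self.embed = nn.Linear(1 + obs_dim + act_dim, d_model)   # rtg + obs + previous action
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, act_dim)                  # logits over discrete actions

    def forward(self, rtg, obs, prev_act):
        # rtg: (B, T, 1), obs: (B, T, obs_dim), prev_act: (B, T, act_dim) one-hot
        T = obs.shape[1]
        x = self.embed(torch.cat([rtg, obs, prev_act], dim=-1))
        x = x + self.pos(torch.arange(T, device=obs.device))
        causal = torch.triu(torch.full((T, T), float('-inf'), device=obs.device), diagonal=1)
        return self.head(self.encoder(x, mask=causal))           # (B, T, act_dim) logits

# Offline training step on a dataset batch (behaviour-cloning-style supervised loss):
#   logits = model(rtg, obs, prev_act)
#   loss = nn.functional.cross_entropy(logits.flatten(0, 1), actions.flatten())
```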
3. Large sequence models for sequential decision-making: a survey (Cited by: 1)
Authors: Muning WEN, Runji LIN, Hanjing WANG, Yaodong YANG, Ying WEN, Luo MAI, Jun WANG, Haifeng ZHANG, Weinan ZHANG. Frontiers of Computer Science (SCIE, EI, CSCD), 2023, Issue 6, pp. 25-42 (18 pages).
Transformer architectures have facilitated the development of large-scale and general-purpose sequence models for prediction tasks in natural language processing and computer vision, e.g., GPT-3 and the Swin Transformer. Although originally designed for prediction problems, it is natural to ask whether they are suitable for sequential decision-making and reinforcement learning problems, which are typically beset by long-standing issues involving sample efficiency, credit assignment, and partial observability. In recent years, sequence models, especially the Transformer, have attracted increasing interest in the RL community, spawning numerous approaches with notable effectiveness and generalizability. This survey presents a comprehensive overview of recent works aimed at solving sequential decision-making tasks with sequence models such as the Transformer, by discussing the connection between sequential decision-making and sequence modeling and categorizing these works by how they utilize the Transformer. Moreover, this paper puts forward various potential avenues for future research intended to improve the effectiveness of large sequence models for sequential decision-making, encompassing theoretical foundations, network architectures, algorithms, and efficient training systems.
Keywords: sequential decision-making, sequence modeling, Transformer, training system
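The "reinforcement learning as sequence modelling" recipe the survey discusses is typically evaluated by conditioning a trained model on a target return and rolling it out autoregressively. The sketch below assumes a Gymnasium-style environment API and a model with the same interface as the TinyMADT sketch above; both are illustrative assumptions rather than a procedure prescribed by the paper.

```python
# Illustrative greedy rollout conditioned on a target return-to-go
# (Gymnasium-style `env` and a TinyMADT-like `model` interface are assumed).
import torch
import torch.nn.functional as F

@torch.no_grad()
def rollout(model, env, target_return, act_dim, max_steps=100, ctx=64):
    obs, _ = env.reset()
    rtg = [float(target_return)]                           # return-to-go sequence
    obs_hist = [torch.as_tensor(obs, dtype=torch.float32)]
    act_hist = [torch.zeros(act_dim)]                      # placeholder "previous action"
    total = 0.0
    for _ in range(max_steps):
        k = min(len(rtg), ctx)                             # keep only the last `ctx` steps
        logits = model(torch.tensor(rtg[-k:]).view(1, -1, 1),
                       torch.stack(obs_hist[-k:]).unsqueeze(0),
                       torch.stack(act_hist[-k:]).unsqueeze(0))
        action = logits[0, -1].argmax().item()             # act greedily at the latest step
        obs, reward, terminated, truncated, _ = env.step(action)
        total += float(reward)
        rtg.append(rtg[-1] - float(reward))                # shrink the return-to-go
        obs_hist.append(torch.as_tensor(obs, dtype=torch.float32))
        act_hist.append(F.one_hot(torch.tensor(action), act_dim).float())
        if terminated or truncated:
            break
    return total
```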