Multi-agent technology has been used in many complex distributed and concurrent systems. A railway system is such a safety-critical system, and careful investigation of its functional components is very important. Study of the various functional components in a communication-based train control (CBTC) system necessitates a good structural design followed by its validation and verification through a formal modelling technique. The work presented here is a follow-up of our multi-agent-based CBTC system for Indian railways, designed using the methodology for engineering systems of software agents. Behavioural analysis of the designed system involves several operating scenarios that arise during a train run and helps in understanding the reaction of the system to such situations. This validation and verification are very important, as they allow the system designer to critically evaluate the desired function of the system and to correct design errors, if any, before actual implementation. Modelling, validation and verification of the structural design through Coloured Petri nets (CPN) are central to this paper. Analysis of simulation results validates the efficacy of the design.
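As a hedged illustration of the Petri-net machinery behind the CPN modelling described above (all place and transition names below are invented for illustration, and colour sets are omitted for brevity), a minimal place/transition net can capture one moving-block rule: a train may enter a block only while the block holds a "free" token.

```python
# Minimal place/transition net sketch (hypothetical names; ordinary
# Petri net, ignoring CPN colour sets for brevity).
def enabled(marking, transition):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in transition["in"].items())

def fire(marking, transition):
    """Consume input tokens, produce output tokens; returns a new marking."""
    if not enabled(marking, transition):
        raise ValueError("transition not enabled")
    m = dict(marking)
    for p, n in transition["in"].items():
        m[p] -= n
    for p, n in transition["out"].items():
        m[p] = m.get(p, 0) + n
    return m

# A train waits before block A; block A is free.
marking = {"train_waiting": 1, "blockA_free": 1, "train_in_blockA": 0}
enter_block = {"in": {"train_waiting": 1, "blockA_free": 1},
               "out": {"train_in_blockA": 1}}

marking = fire(marking, enter_block)
# After firing, the train occupies block A and the block is no longer free,
# so a second train cannot fire the same transition.
```

State-space analysis of a CPN-style model amounts to exploring all reachable markings of such a net and checking safety properties (e.g. two trains never occupy one block).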
As an important mechanism in multi-agent interaction, communication can make agents form complex team relationships rather than constitute a simple set of multiple independent agents. However, existing communication schemes can introduce much timing redundancy and many irrelevant messages, which seriously affects their practical application. To solve this problem, this paper proposes a targeted multi-agent communication algorithm based on state control (SCTC). The SCTC uses a state-control-based gating mechanism to reduce the timing redundancy of communication between agents, and determines the interaction relationship between agents and the importance weight of a communication message through a series connection of hard- and self-attention mechanisms, realizing targeted communication message processing. In addition, by minimizing the difference between the fusion message generated from the real communication messages of each agent and the fusion message generated from the buffered messages, the correctness of the agent's final action choice is ensured. Our evaluation on a challenging set of StarCraft II benchmarks indicates that the SCTC can significantly improve learning performance and reduce the communication overhead between agents, thus ensuring better cooperation between agents.
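A minimal sketch of the gate-then-attend idea described above, assuming nothing about SCTC's actual networks: a hard gate drops messages whose relevance score falls below a threshold, and a softmax weighting (standing in for the self-attention stage) fuses the rest. The relevance score here is just a dot product with the agent's own state, a deliberate simplification.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def fuse_messages(own_state, messages, gate_threshold=0.5):
    """Hard gate drops low-relevance messages; soft attention weights the rest.
    Relevance is a dot product with the agent's own state (illustrative only)."""
    scores = [sum(a * b for a, b in zip(own_state, m)) for m in messages]
    kept = [(s, m) for s, m in zip(scores, messages)
            if s >= gate_threshold]             # hard-attention stage
    if not kept:
        return [0.0] * len(own_state)           # gate closed: no communication
    weights = softmax([s for s, _ in kept])     # self-attention-style weighting
    fused = [0.0] * len(own_state)
    for w, (_, m) in zip(weights, kept):
        fused = [f + w * x for f, x in zip(fused, m)]
    return fused
```

The gate reduces *when* agents talk; the weighting decides *how much* each surviving message contributes to the fused input.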
The wireless cloud robotic system (WCRS), which fully integrates sensing, communication, computing, and control capabilities as an intelligent agent, is a promising way to achieve intelligent manufacturing due to its easy deployment and flexible expansion. However, the high-precision control of a WCRS requires deterministic wireless communication, which is always challenging in a complex and dynamic radio space. This paper employs the reconfigurable intelligent surface (RIS) to establish a novel RIS-assisted WCRS architecture, where the radio channel is controlled to achieve ultra-reliable, low-delay, and low-jitter communication for high-precision closed-loop motion control. However, control and communication are strongly coupled and should be co-optimized. Fully considering the constraints of control input threshold, control delay deadline, beam phase, antenna power, and information distortion, we establish a stability maximization problem to jointly optimize control input compensation, RIS phase shift, and beamforming. Herein, a new jitter-oriented system stability objective with respect to control error and communication jitter is defined, and the closed-form expression of the control delay deadline is derived based on the Jensen inequality and a Lyapunov-Krasovskii functional. Due to the time-varying nature and partial observability of the channel and robot states, we model the problem as a partially observable Markov decision process (POMDP). To solve this complex problem, we propose a multi-agent transfer reinforcement learning algorithm named LSTM-PPO-MATRL, where LSTM-enhanced proximal policy optimization (PPO) is designed to approximate an optimal solution and option-guided policy transfer learning is proposed to facilitate the learning process. With centralized training and decentralized execution, LSTM-PPO-MATRL is validated by extensive experiments on MuJoCo tasks for both low-mobility and high-mobility robotic control scenarios. The results demonstrate that LSTM-PPO-MATRL not only realizes high learning efficiency, but also supports low-delay, low-jitter communication for low-error control, achieving a 71.9% control accuracy improvement and a 68.7% delay jitter reduction compared to the PPO-MADRL baseline.
Due to the characteristics of line-of-sight (LoS) communication in unmanned aerial vehicle (UAV) networks, these systems are highly susceptible to eavesdropping and surveillance. To effectively address the security concerns in UAV communication, covert communication methods have been adopted. This paper explores the joint optimization problem of trajectory and transmission power in a multi-hop UAV relay covert communication system. Considering communication covertness, power constraints, and trajectory limitations, an algorithm based on multi-agent proximal policy optimization (MAPPO), named covert-MAPPO (C-MAPPO), is proposed. The proposed method leverages the strengths of both optimization algorithms and reinforcement learning to analyze and make joint decisions on the transmission power and flight trajectory strategies of the UAVs to achieve cooperation. Simulation results demonstrate that the proposed method can maximize the system throughput while satisfying covertness constraints, and that it outperforms benchmark algorithms in terms of system throughput and reward convergence speed.
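C-MAPPO builds on PPO, whose core is the clipped surrogate objective. A minimal per-sample sketch (generic PPO clipping, not the paper's full multi-agent loss or covertness constraints):

```python
def clipped_surrogate(ratio, advantage, eps=0.2):
    """PPO clipped objective for one sample:
    min(r * A, clip(r, 1 - eps, 1 + eps) * A),
    where r is the new/old policy probability ratio and A the advantage."""
    clipped = max(1.0 - eps, min(ratio, 1.0 + eps))
    return min(ratio * advantage, clipped * advantage)
```

The clip prevents any single update from moving the policy ratio far outside [1 − eps, 1 + eps], which is what stabilizes on-policy training in MAPPO-style algorithms.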
This paper studies the problem of jamming decision-making for dynamic multiple communication links in wireless communication networks (WCNs). We propose a novel jamming channel allocation and power decision-making (JCAPD) approach based on multi-agent deep reinforcement learning (MADRL). In highly dynamic, multi-target aviation communication environments, rapid channel changes make it difficult for sensors to accurately capture instantaneous channel state information. This poses a challenge for making centralized jamming decisions with single-agent deep reinforcement learning (DRL) approaches. In response, we design a distributed multi-agent decision architecture (DMADA). We formulate multi-jammer resource allocation as a multi-agent Markov decision process (MDP) and propose a fingerprint-based double deep Q-network (FBDDQN) algorithm for solving it. In this framework, each jammer functions as an agent that interacts with the environment. Through the design of a reasonable reward and training mechanism, our approach enables jammers to achieve distributed cooperation, significantly improving the jamming success rate while accounting for jamming power cost, and reducing the transmission rate of the links. Our experimental results show that the FBDDQN algorithm is superior to the baseline methods.
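The double-DQN part of FBDDQN can be sketched generically: the online network selects the next action and the target network evaluates it, which curbs the overestimation bias of vanilla DQN. (This is the standard double-DQN target, not the paper's fingerprint or reward design.)

```python
def double_dqn_target(reward, gamma, q_online_next, q_target_next, done=False):
    """Double DQN bootstrap target for one transition:
    y = r + gamma * Q_target(s', argmax_a Q_online(s', a)).
    Action selection (online net) and evaluation (target net) are decoupled."""
    if done:
        return reward
    a_star = max(range(len(q_online_next)), key=lambda a: q_online_next[a])
    return reward + gamma * q_target_next[a_star]
```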
Dear Editor, this letter is concerned with the problem of time-varying formation tracking for heterogeneous multi-agent systems (MASs) under directed switching networks. For this purpose, our first step is to present some sufficient conditions for the exponential stability of a particular category of switched systems.
The cooperative control and stability analysis problems for multi-agent systems with sampled communication are investigated. Distributed state feedback controllers are adopted for the cooperation of networked agents. A theorem in the form of linear matrix inequalities (LMIs) is derived to analyze system stability. Another theorem, in the form of an optimization problem subject to LMI constraints, is proposed to design the controller, and the corresponding algorithm is presented. Simulation results verify the validity and effectiveness of the proposed approach.
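For intuition about cooperation under sampled communication, a first-order discrete-time consensus update with sampling period h can be simulated directly. This simple protocol stands in for the paper's LMI-designed controllers; the graph Laplacian and step size below are illustrative.

```python
def consensus_step(x, laplacian, h):
    """One sampled-data consensus update: x(k+1) = x(k) - h * L x(k)."""
    n = len(x)
    return [x[i] - h * sum(laplacian[i][j] * x[j] for j in range(n))
            for i in range(n)]

# Path graph on three agents (Laplacian rows sum to zero).
L = [[1, -1, 0],
     [-1, 2, -1],
     [0, -1, 1]]
x = [0.0, 3.0, 9.0]
for _ in range(200):
    x = consensus_step(x, L, h=0.2)
# All states converge to the initial average, 4.0.
```

With Laplacian eigenvalues {0, 1, 3}, stability of this toy update requires h < 2/3; the LMI conditions in the paper play the analogous role for the full sampled-data design.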
To address the issues of poor adaptability in resource allocation and low multi-agent cooperation efficiency in Joint Radar and Communication (JRC) systems under dynamic environments, an intelligent optimization framework integrating Deep Reinforcement Learning (DRL) and Graph Neural Networks (GNNs) is proposed. The framework models resource allocation as a Partially Observable Markov Game (POMG), designs a weighted reward function to balance radar and communication efficiencies, adopts the Multi-Agent Proximal Policy Optimization (MAPPO) framework, and integrates Graph Convolutional Networks (GCN) and Graph Sample and Aggregate (GraphSAGE) to optimize information interaction. Simulations show that, compared with traditional methods and pure DRL methods, the proposed framework improves performance metrics such as communication success rate, Average Age of Information (AoI), and policy convergence speed, effectively enabling resource management for multi-agent JRC systems in complex environments.
At this historic juncture of deepening technological revolution and industrial transformation, China's communication sector stands on the eve of another great leap forward. Reflecting on the development of communications over the past two decades, China has forged an innovative path from catching up, to keeping pace, to leading the way. Today, at the new starting point of 6G development and facing the paradigm shift brought about by “AI + communications,” China's scientific research community, with the courage to venture into uncharted territory, is advancing original theories, such as a new communication paradigm based on a unified theoretical framework of information theory, to the global forefront.
Efficient energy utilization in covert communication sustains covertness while assuring communication quality and efficiency. This paper investigates the covert communication energy efficiency (EE) of direct uplink satellite-ground communications, focusing on enhancing system EE via optimized transmit beamforming and satellite orbit altitude selection. The paper first establishes an optimization problem to maximize system EE in a direct uplink satellite-ground covert communication scenario. To solve this non-convex problem, it is decomposed into two subproblems, each solved using the successive convex approximation (SCA) method; on this basis, an overall iterative optimization algorithm is proposed. Simulation results demonstrate that the proposed algorithm surpasses conventional baseline algorithms in terms of system EE. Furthermore, they elucidate the correlation between the amount of information received by the receiver and variations in the satellite's orbital altitude.
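Energy-efficiency objectives of the rate/power form are often handled by fractional-programming iterations. As a hedged stand-in for the paper's SCA decomposition, the sketch below applies a Dinkelbach-style iteration to a toy uplink EE model over a power grid; the rate and power functions and all constants are invented for illustration.

```python
import math

def dinkelbach_max_ratio(rate, power, p_grid, tol=1e-9, max_iter=100):
    """Maximize rate(p)/power(p) over a finite grid: repeatedly solve the
    parametric problem max_p [rate(p) - q * power(p)], then set q to the
    achieved ratio, until the parametric optimum reaches ~0."""
    q = 0.0
    p_best = p_grid[0]
    for _ in range(max_iter):
        p_best = max(p_grid, key=lambda p: rate(p) - q * power(p))
        f = rate(p_best) - q * power(p_best)
        q = rate(p_best) / power(p_best)
        if abs(f) < tol:
            break
    return p_best, q

# Toy uplink EE: log-rate over transmit-plus-circuit power (illustrative).
rate = lambda p: math.log2(1.0 + 10.0 * p)
power = lambda p: p + 0.1
p_grid = [0.01 * k for k in range(1, 500)]
p_opt, ee = dinkelbach_max_ratio(rate, power, p_grid)
```

At the fixed point, q equals the maximum achievable EE on the grid, mirroring how the paper's iterative algorithm alternates between its two convexified subproblems.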
Multi-agent reinforcement learning (MARL) has proven its effectiveness in cooperative multi-agent systems (MASs) but still faces issues with the curse of dimensionality and learning efficiency. The main difficulty is caused by the strong inter-agent coupling embedded in an MARL problem, which is yet to be fully exploited by existing algorithms. In this work, we recognize a learning graph characterizing the dependence between individual rewards and individual policies. We then propose a graph-based reward aggregation (GRA) method, which utilizes the inherent coupling relationships among agents to eliminate redundant information. Specifically, GRA passes information among cooperating agents through graph attention networks to obtain aggregated rewards that contribute to the fitting of the value function, enabling each agent to learn a decentralized, executable cooperation policy. In addition, we propose a variant of GRA, named GRA-decen, which achieves decentralized training and decentralized execution (DTDE) when each agent only has access to the information of a subset of agents during learning. We conduct experiments in different environments and demonstrate the practicality and scalability of our algorithms.
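A minimal sketch of attention-weighted reward aggregation, assuming none of GRA's actual architecture: neighbours' rewards are mixed with softmax weights derived from feature similarity, a crude stand-in for a graph attention layer. The features, mixing coefficient, and dot-product scoring are all illustrative choices.

```python
import math

def aggregated_reward(own_feat, own_reward, neighbors, temp=1.0):
    """Aggregate neighbours' rewards with attention weights from feature
    similarity (dot product). `neighbors` is a list of
    (feature_vector, reward) pairs; the 0.5/0.5 mix is illustrative."""
    if not neighbors:
        return own_reward
    scores = [sum(a * b for a, b in zip(own_feat, f)) / temp
              for f, _ in neighbors]
    m = max(scores)
    ws = [math.exp(s - m) for s in scores]   # stable softmax numerators
    z = sum(ws)
    neigh = sum(w / z * r for w, (_, r) in zip(ws, neighbors))
    return 0.5 * own_reward + 0.5 * neigh
```

An agent whose neighbours are similar (high attention weight) and well-rewarded sees its aggregated reward pulled upward, which is the coupling signal a value function can then fit.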
This paper presents an adaptive multi-agent coordination (AMAC) strategy suitable for complex scenarios, which only requires information exchange between neighbouring robots. Unlike traditional multi-agent coordination methods solved by neural dynamics, the proposed strategy displays greater flexibility, adaptability and scalability. Furthermore, the proposed AMAC strategy is reconstructed as a time-varying complex-valued matrix equation. By introducing a dynamic error function, a fixed-time convergent zeroing neural network (FTCZNN) model is designed for the online solution of the AMAC strategy, with an upper bound on its convergence time derived theoretically. Finally, the effectiveness and applicability of the coordination control method are demonstrated by numerical simulations and physical experiments. Numerical results indicate that this method can reduce the formation error to the order of 10^(-6) within 1.8 s.
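The basic zeroing-neural-network recipe behind models like the FTCZNN can be shown on a scalar time-varying equation a(t)x = b(t): define the error e(t) = a(t)x − b(t), impose ė = −γe, and integrate. The sketch below uses the classical exponentially convergent design, not the paper's fixed-time activation or complex-valued matrix setting; all coefficients are illustrative.

```python
import math

def znn_solve(a, b, da, db, x0, gamma=50.0, dt=1e-3, steps=2000):
    """Zeroing neural network for a(t) x = b(t):
    from d/dt [a x - b] = -gamma * (a x - b) we get
    x' = (db - da * x - gamma * (a * x - b)) / a, Euler-integrated."""
    x, t = x0, 0.0
    for _ in range(steps):
        e = a(t) * x - b(t)
        x += dt * (db(t) - da(t) * x - gamma * e) / a(t)
        t += dt
    return x, t

a  = lambda t: 2.0 + math.sin(t)     # time-varying coefficient (a > 0 always)
da = lambda t: math.cos(t)
b  = lambda t: math.cos(t)
db = lambda t: -math.sin(t)
x_end, t_end = znn_solve(a, b, da, db, x0=0.0)
# x_end tracks the time-varying solution b(t)/a(t).
```

The derivative terms db and da are what let the network *track* a moving solution instead of chasing it with a lag; the fixed-time variant replaces the linear −γe law with a nonlinear activation to bound convergence time.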
This paper addresses the synchronization of the follower agents' state vectors with that of a leader in high-order nonlinear multi-agent systems. The proposed low-complexity control scheme employs high-gain observers to estimate higher-order synchronization errors, enabling the controller to rely solely on relative output measurements. This approach significantly reduces the dependence on full-state information, which is often infeasible or costly to obtain in practical engineering applications. An output feedback control strategy is developed to overcome these limitations while ensuring robust and effective synchronization. Simulation results are provided to demonstrate the effectiveness of the proposed approach and validate the theoretical findings.
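A high-gain observer can be sketched for the simplest relevant case: a double integrator measured only through its position output. The gains, input signal, and parameters below are illustrative, not the paper's.

```python
import math

def simulate_hgo(eps=0.05, dt=1e-4, t_end=2.0):
    """High-gain observer for a double integrator y'' = u, measuring only y.
    Estimates z1 ~ y and z2 ~ y' from the output alone. Gains a1=2, a2=1
    place the (continuous-time) error poles at -1/eps, twice."""
    x1, x2 = 1.0, 0.0          # true state: position y and velocity y'
    z1, z2 = 0.0, 0.0          # observer state (deliberately wrong start)
    a1, a2 = 2.0, 1.0
    t = 0.0
    while t < t_end:
        u = -math.sin(t)       # arbitrary known input
        y = x1                 # only the output is measured
        # true plant (Euler step)
        x1, x2 = x1 + dt * x2, x2 + dt * u
        # observer: copy of the plant plus high-gain output injection
        e = y - z1
        z1 = z1 + dt * (z2 + (a1 / eps) * e)
        z2 = z2 + dt * (u + (a2 / eps ** 2) * e)
        t += dt
    return (x1, x2), (z1, z2)

true_state, est_state = simulate_hgo()
```

Smaller eps makes the estimate converge faster but amplifies measurement noise and transient "peaking", which is the practical trade-off behind high-gain observer design.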
The modern world remains vulnerable to natural disasters, including floods, earthquakes, wildfires, and others. These events remain unpredictable and inevitable, and recovering from them quickly and effectively requires significant effort and expense. Monitoring is becoming more efficient thanks to technologies such as Unmanned Aerial Vehicles (UAVs), which can access hard-to-reach areas and provide real-time data. However, in disaster-affected areas, these monitoring systems may encounter many obstacles when communicating with servers or transmitting monitored data. This paper proposes an adaptive communication model to overcome the challenges faced in disaster-affected areas. A base station is responsible for collecting the data (such as images and videos) captured by UAVs performing surveillance within its communication range. This station is typically a tower providing fixed cellular network service; in the absence of such a tower, a selected UAV may serve as the station, depending on the situation. If surveillance needs to be performed outside the coverage area, communication can continue via nearby UAVs through cooperative communication. UAVs with internet support, known as the Internet of Flying Things (IoFT), are also utilized to enhance communication capacity and efficiency. The proposed communication model is validated through experiments, showing superior data transmission performance and higher throughput. The analysis indicates that it outperforms traditional systems, even in rural areas, with or without internet access.
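The adaptive station choice described above can be sketched as a simple fallback rule (function and field names are invented for illustration): use the fixed tower when the UAV is within its range, otherwise relay through the nearest internet-capable UAV.

```python
import math

def pick_station(uav_pos, tower_pos, tower_range, relay_uavs):
    """Adaptive station choice: prefer the fixed cellular tower when in
    range; otherwise fall back to the nearest relay UAV (IoFT node)."""
    if tower_pos is not None and math.dist(uav_pos, tower_pos) <= tower_range:
        return ("tower", tower_pos)
    if not relay_uavs:
        return ("none", None)          # isolated: buffer data locally
    best = min(relay_uavs, key=lambda r: math.dist(uav_pos, r))
    return ("relay_uav", best)
```

A real deployment would also weigh link quality, battery, and hop count, not just geometric distance; distance keeps the fallback logic visible.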
Multi-Agent Systems (MAS), which consist of multiple interacting agents, are crucial in Cyber-Physical Systems (CPS) because they improve system adaptability, efficiency, and robustness through parallel processing and collaboration. However, most existing unsupervised meta-learning methods are centralized and not suitable for multi-agent systems in which data are stored in a distributed manner and inaccessible to all agents. Meta-GMVAE, based on the Variational Autoencoder (VAE) and set-level variational inference, is a sophisticated unsupervised meta-learning model that improves generative performance by efficiently learning data representations across various tasks, increasing adaptability and reducing sample requirements. Inspired by these advancements, we propose a novel Distributed Unsupervised Meta-Learning (DUML) framework based on Meta-GMVAE and a fusion strategy. Furthermore, we present a DUML algorithm based on the Gaussian Mixture Model (DUMLGMM), where the parameters of the Gaussian mixture are solved by an Expectation-Maximization algorithm. Simulations on the Omniglot and Mini-ImageNet datasets show that DUMLGMM can match the performance of the corresponding centralized algorithm and outperform the non-cooperative algorithm.
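The EM machinery used to fit the Gaussian mixture in DUMLGMM can be illustrated in its plain centralized, one-dimensional, two-component form; the paper's distributed fusion strategy is not reproduced here, and the data below are invented.

```python
import math

def em_step(data, means, variances, weights):
    """One EM iteration for a two-component 1-D Gaussian mixture."""
    def pdf(x, mu, var):
        return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

    # E-step: responsibility of each component for each point.
    resp = []
    for x in data:
        p = [w * pdf(x, m, v) for w, m, v in zip(weights, means, variances)]
        s = sum(p)
        resp.append([pi / s for pi in p])

    # M-step: re-estimate parameters from responsibilities.
    nk = [sum(r[k] for r in resp) for k in range(2)]
    means = [sum(r[k] * x for r, x in zip(resp, data)) / nk[k] for k in range(2)]
    variances = [max(1e-6,  # floor prevents variance collapse
                     sum(r[k] * (x - means[k]) ** 2 for r, x in zip(resp, data)) / nk[k])
                 for k in range(2)]
    weights = [n / len(data) for n in nk]
    return means, variances, weights

data = [-2.2, -1.9, -2.1, 1.8, 2.0, 2.2]          # two well-separated clusters
means, variances, weights = [-1.0, 1.0], [1.0, 1.0], [0.5, 0.5]
for _ in range(30):
    means, variances, weights = em_step(data, means, variances, weights)
```

In the distributed setting each agent would run such updates on its local shard and then fuse the resulting mixture parameters, which is the role of the fusion strategy in the proposed framework.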
This paper focuses on the leader-following positive consensus problem of heterogeneous switched multi-agent systems. First, a state-feedback controller with dynamic compensation is introduced to achieve positive consensus under average dwell time switching. Sufficient conditions are then derived to guarantee positive consensus. The gain matrices of the control protocol are obtained using a matrix decomposition approach, and the corresponding computational complexity is reduced by resorting to linear programming and co-positive Lyapunov functions. Finally, two numerical examples are provided to illustrate the results obtained.
In recent years, researchers have leveraged single-agent reinforcement learning to boost educational outcomes and deliver personalized interventions; yet this paradigm provides no capacity for inter-agent interaction. Multi-agent reinforcement learning (MARL) overcomes this limitation by allowing several agents to learn simultaneously within a shared environment, each choosing actions that maximize its own or the group's rewards. By explicitly modeling and exploiting agent-to-agent dynamics, MARL can align those interactions with pedagogical goals such as peer tutoring, collaborative problem-solving, or gamified competition, opening richer avenues for adaptive and socially informed learning experiences. This survey investigates the impact of MARL on educational outcomes by examining evidence of its effectiveness in enhancing learner performance, engagement, and equity, and in reducing teacher workload, compared with single-agent or traditional approaches. It explores the educational domains and pedagogical problems addressed by MARL, identifies the algorithmic families used, and analyzes their influence on learning. The review also assesses experimental settings and evaluation metrics to determine ecological validity, and outlines current challenges and future research directions in applying MARL to education.
With the advent of sixth-generation mobile communications (6G), space-air-ground integrated networks have become mainstream. This paper focuses on collaborative scheduling for mobile edge computing (MEC) under a three-tier heterogeneous architecture composed of mobile devices, unmanned aerial vehicles (UAVs), and macro base stations (BSs). This scenario typically faces fast channel fading, dynamic computational loads, and energy constraints, whereas classical queuing-theoretic or convex-optimization approaches struggle to yield robust solutions in highly dynamic settings. To address this issue, we formulate a multi-agent Markov decision process (MDP) for an air-ground-fused MEC system, unify link selection, bandwidth/power allocation, and task offloading into a continuous action space, and propose a joint scheduling strategy based on an improved MATD3 algorithm. The improvements include Alternating Layer Normalization (ALN) in the actor to suppress gradient variance, Residual Orthogonalization (RO) in the critic to reduce the correlation between the twin Q-value estimates, and a dynamic-temperature reward to enable adaptive trade-offs during training. On a multi-user, dual-link simulation platform, we conduct ablation and baseline comparisons. The results show that the proposed method has better convergence and stability; compared with MADDPG, TD3, and DSAC, it achieves more robust performance across key metrics.
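Alternating Layer Normalization is the paper's own variant; as a baseline for comparison, plain layer normalization over one feature vector looks like this (scalar gain/bias used for brevity instead of per-feature parameters):

```python
import math

def layer_norm(x, gain=1.0, bias=0.0, eps=1e-5):
    """Plain layer normalization of one feature vector: subtract the mean,
    divide by the standard deviation, then apply an affine transform."""
    mu = sum(x) / len(x)
    var = sum((v - mu) ** 2 for v in x) / len(x)
    return [gain * (v - mu) / math.sqrt(var + eps) + bias for v in x]
```

Normalizing actor activations this way bounds their scale across training, which is the variance-suppression effect the ALN modification targets.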
To maximize the profits of power grid operators (GOs), load aggregators (LAs) and electricity customers (ECs), this paper proposes a hierarchical demand response (HDR) framework that considers competing interaction based on the multi-agent deep deterministic policy gradient (MaDDPG). The ECs are divided into conventional ECs and electric vehicles (EVs), which are managed by an ECs agent (ECA) and an EV agent (EVA) to exploit the flexibility of the HDR framework. Thus, the HDR is a tri-layer model determined by five types of agents engaging in competing interaction to maximize their own profits. To address the limitations of mathematical expression and participation scale in the Stackelberg game within the HDR model, a dynamic interaction mechanism is adopted. Moreover, to tackle the HDR involving various entities, MaDDPG deploys multiple agents to simulate the dynamic competing interactions between the subjects and to solve the problem of continuous action control. Furthermore, MaDDPG adopts a soft target update and a priority experience replay method to ensure stable and effective training, and makes the exploration strategy comprehensive by using exploration noise. Simulation studies are conducted to verify the performance of MaDDPG with the dynamic interaction mechanism in dealing with multi-layer, multi-agent continuous action control, compared with the double deep Q-network (DDQN), deep Q-network (DQN) and dueling DQN. Additionally, comparisons of the proposed HDR with price-based DR (PBDR) and incentive-based DR (IBDR) are analyzed to investigate the flexibility of the HDR.
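The soft target update and priority-proportional sampling mentioned above have compact generic forms (standard formulations, not the paper's exact hyperparameters):

```python
import random

def soft_update(target, online, tau=0.005):
    """Polyak averaging per parameter: target <- tau*online + (1-tau)*target.
    The target network slowly tracks the online network, stabilizing the
    bootstrap targets used in DDPG-family training."""
    return [tau * o + (1 - tau) * t for t, o in zip(target, online)]

def prioritized_sample(buffer, priorities, k, alpha=0.6, rng=random):
    """Sample k transitions with probability proportional to priority**alpha
    (alpha=0 recovers uniform replay; importance weights omitted for brevity)."""
    ws = [p ** alpha for p in priorities]
    return rng.choices(buffer, weights=ws, k=k)
```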
Funding: The work is part of the project "Multi-Agent based Train Operation in Moving Block Setup", funded by the Department of Information Technology (DIT), Ministry of Communications and Information Technology, Government of India, vide Grant Number 2(6)/2010-EC dated 21/03/2011.
Funding: supported in part by the National Natural Science Foundation of China (62522320, 92267108, 62173322), the Liaoning Revitalization Talents Program (XLYC2403062), and the Science and Technology Program of Liaoning Province (2023JH3/10200004, 2022JH25/10100005).
Funding: supported by the Natural Science Foundation of Jiangsu Province, China (No. BK20240200); in part by the National Natural Science Foundation of China (Nos. 62271501, 62071488, 62471489 and U22B2002); in part by the Key Technologies R&D Program of Jiangsu, China (Prospective and Key Technologies for Industry) (Nos. BE2023022 and BE2023022-4); and in part by the Postdoctoral Fellowship Program of CPSF, China (No. GZB20240996).
Funding: Supported in part by the National Natural Science Foundation of China (No. 61906156).
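As a brief illustration of the clipped surrogate objective at the core of PPO-family methods such as MAPPO (this is the generic mechanism, not the paper's C-MAPPO implementation; `ppo_clip_objective` and its inputs are illustrative names):

```python
def ppo_clip_objective(ratio, advantage, eps=0.2):
    """PPO clipped surrogate for one sampled action.

    ratio     -- pi_new(a|s) / pi_old(a|s), the importance ratio
    advantage -- advantage estimate for that action
    eps       -- clipping range; 0.2 is the commonly used default
    """
    clipped_ratio = max(min(ratio, 1 + eps), 1 - eps)
    # Take the pessimistic (minimum) of the clipped and unclipped terms,
    # so large policy updates outside [1-eps, 1+eps] gain no extra credit.
    return min(ratio * advantage, clipped_ratio * advantage)

obj_high = ppo_clip_objective(1.5, 1.0)    # ratio clipped down to 1.2
obj_low = ppo_clip_objective(0.5, -1.0)    # ratio clipped up to 0.8
```

Clipping keeps each update close to the behavior policy, which is what makes the on-policy MAPPO training loop stable enough for multi-agent settings.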
Abstract: This paper studies jamming decision-making for dynamic multiple communication links in wireless communication networks (WCNs). We propose a novel jamming channel allocation and power decision-making (JCAPD) approach based on multi-agent deep reinforcement learning (MADRL). In highly dynamic, multi-target aviation communication environments, rapid channel changes make it difficult for sensors to accurately capture instantaneous channel state information, which challenges centralized jamming decisions made with single-agent deep reinforcement learning (DRL) approaches. In response, we design a distributed multi-agent decision architecture (DMADA). We formulate multi-jammer resource allocation as a multi-agent Markov decision process (MDP) and propose a fingerprint-based double deep Q-network (FBDDQN) algorithm to solve it. In this framework, each jammer functions as an agent that interacts with the environment. Through the design of a suitable reward and training mechanism, our approach enables jammers to achieve distributed cooperation, significantly improving the jamming success rate while accounting for jamming power cost and reducing the transmission rate of the links. Experimental results show that the FBDDQN algorithm is superior to the baseline methods.
Funding: Supported in part by the National Natural Science Foundation of China (62273255, 62350003, 62088101); the Shanghai Science and Technology Cooperation Project (22510712000, 21550760900); the Shanghai Municipal Science and Technology Major Project (2021SHZDZX0100); and the Fundamental Research Funds for the Central Universities.
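The "double" part of a double deep Q-network can be sketched in tabular form: one value table selects the greedy next action and the other evaluates it, which curbs the overestimation bias of plain Q-learning. A minimal sketch (the state keys, actions, and numbers are hypothetical; the paper's fingerprint idea of tagging observations with training-progress information is only noted in a comment):

```python
def double_q_update(Q1, Q2, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One tabular double Q-learning update: Q1 picks the greedy
    next action, Q2 supplies its value (the idea behind double DQN)."""
    a_star = max(Q1[s_next], key=Q1[s_next].get)  # argmax under Q1
    target = r + gamma * Q2[s_next][a_star]       # evaluated under Q2
    Q1[s][a] += alpha * (target - Q1[s][a])

# In a fingerprint-based variant, each state key would also carry a
# low-dimensional "fingerprint" (e.g. iteration count, exploration rate)
# so replayed experience remains consistent as other agents' policies change.
Q1 = {"s0": {"jam": 0.0, "idle": 0.0}, "s1": {"jam": 1.0, "idle": 0.0}}
Q2 = {"s0": {"jam": 0.0, "idle": 0.0}, "s1": {"jam": 0.5, "idle": 0.0}}
double_q_update(Q1, Q2, "s0", "jam", r=1.0, s_next="s1")
```

After the update, `Q1["s0"]["jam"]` has moved a step toward the target 1 + 0.9 × 0.5 = 1.45, i.e. to 0.145 with the learning rate 0.1.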
Abstract: Dear Editor, This letter is concerned with the problem of time-varying formation tracking for heterogeneous multi-agent systems (MASs) under directed switching networks. For this purpose, our first step is to present some sufficient conditions for the exponential stability of a particular category of switched systems.
Funding: Supported by the National Natural Science Foundation of China (91016017) and the National Aviation Fund of China (20115868009).
Abstract: The cooperative control and stability analysis problems for multi-agent systems with sampled communication are investigated. Distributed state-feedback controllers are adopted for the cooperation of networked agents. A theorem in the form of linear matrix inequalities (LMIs) is derived to analyze system stability. Another theorem, in the form of an optimization problem subject to LMI constraints, is proposed to design the controller, and the corresponding algorithm is presented. Simulation results verify the validity and effectiveness of the proposed approach.
Funding: Funded by the Shandong Provincial Natural Science Foundation, grant number ZR2023MF111.
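A distributed state-feedback consensus law with sampled communication can be illustrated with a toy discrete-time simulation: at each sampling instant every agent moves toward its neighbours' states, and for a small enough gain the states converge while the average is preserved. This is a generic sketch under an assumed all-to-all graph, not the paper's LMI-designed controller:

```python
def consensus_step(x, neighbours, gain=0.3):
    """One sampled update of a distributed consensus protocol:
    x_i <- x_i + gain * sum_j (x_j - x_i) over neighbours j."""
    return [xi + gain * sum(x[j] - xi for j in neighbours[i])
            for i, xi in enumerate(x)]

x = [0.0, 2.0, 5.0]                       # initial agent states
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1]} # all-to-all, 3 agents
for _ in range(30):
    x = consensus_step(x, graph)
spread = max(x) - min(x)                  # disagreement after 30 samples
```

Stability hinges on the gain: the update is x ← (I − gain·L)x with graph Laplacian L, so the gain must satisfy gain < 2/λ_max(L) (here λ_max = 3), which is exactly the kind of condition an LMI analysis certifies for more general networks.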
Abstract: To address poor adaptability in resource allocation and low multi-agent cooperation efficiency in Joint Radar and Communication (JRC) systems under dynamic environments, an intelligent optimization framework integrating Deep Reinforcement Learning (DRL) and Graph Neural Networks (GNNs) is proposed. The framework models resource allocation as a Partially Observable Markov Game (POMG), designs a weighted reward function to balance radar and communication efficiency, adopts the Multi-Agent Proximal Policy Optimization (MAPPO) framework, and integrates Graph Convolutional Networks (GCN) and Graph Sample and Aggregate (GraphSAGE) to optimize information interaction. Simulations show that, compared with traditional methods and pure DRL methods, the proposed GNN-DRL framework improves performance metrics such as communication success rate, Average Age of Information (AoI), and policy convergence speed, effectively enabling resource management for multi-agent JRC systems in complex environments.
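The GraphSAGE-style information interaction mentioned above boils down to each node combining its own feature with an aggregate of its neighbours' features. A minimal mean-aggregation sketch with scalar features (node names, weights, and values are hypothetical; real layers use learned weight matrices and vector features):

```python
def graphsage_mean_step(features, adj, w_self=0.5, w_neigh=0.5):
    """One GraphSAGE layer with mean aggregation: each node's new
    feature mixes its own feature with the mean of its neighbours'."""
    out = {}
    for v, fv in features.items():
        neigh = adj[v]
        mean_n = sum(features[u] for u in neigh) / len(neigh) if neigh else 0.0
        out[v] = w_self * fv + w_neigh * mean_n
    return out

feats = {"radar": 1.0, "comm1": 3.0, "comm2": 5.0}
adj = {"radar": ["comm1", "comm2"], "comm1": ["radar"], "comm2": ["radar"]}
out = graphsage_mean_step(feats, adj)   # radar mixes in mean(3, 5) = 4
```

Stacking such layers lets each agent's observation embedding incorporate multi-hop neighbourhood information before the MAPPO policy consumes it.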
Abstract: At this historic juncture of deepening technological revolution and industrial transformation, China's communication sector stands on the eve of another great leap forward. Reflecting on the development of communications over the past two decades, China has forged an innovative path from catching up, to keeping pace, to leading the way. Today, at the new starting point of 6G development and facing the paradigm shift brought about by "AI + communications," China's scientific research community, with the courage to venture into uncharted territory, is advancing original theories, such as a new communication paradigm based on a unified theoretical framework of information theory, to the global forefront.
Funding: Supported in part by the National Natural Science Foundation of China under Grants 62025110 and 62271093, and sponsored by the Natural Science Foundation of Chongqing, China, under Grant CSTB2023NSCQ-LZX0108.
Abstract: Efficient energy utilization in covert communication sustains covertness while assuring communication quality and efficiency. This paper investigates covert communication energy efficiency (EE) in direct uplink satellite-ground communications, focusing on enhancing system EE via optimized transmit beamforming and satellite orbit altitude selection. The paper first establishes an optimization problem to maximize system EE in a direct uplink satellite-ground covert communication scenario. To solve this non-convex problem, it is decomposed into two subproblems that are solved with the successive convex approximation (SCA) method, and an overall iterative optimization algorithm is proposed on this basis. Simulation results demonstrate that the proposed algorithm surpasses conventional baseline algorithms in terms of system EE; they also elucidate the correlation between the amount of information received by the receiver and variations in the satellite's orbital altitude.
Funding: Supported in part by the National Natural Science Foundation of China (grants 62203073 and 62573068) and the Natural Science Foundation of Chongqing, China (grant CSTB2022NSCQMSX0577).
Abstract: Multi-agent reinforcement learning (MARL) has proven its effectiveness in cooperative multi-agent systems (MASs) but still faces the curse of dimensionality and limited learning efficiency. The main difficulty stems from the strong inter-agent coupling embedded in an MARL problem, which existing algorithms have yet to fully exploit. In this work, we recognize a learning graph characterizing the dependence between individual rewards and individual policies. We then propose a graph-based reward aggregation (GRA) method that utilizes the inherent coupling relationships among agents to eliminate redundant information. Specifically, GRA passes information among cooperating agents through graph attention networks to obtain aggregated rewards that contribute to fitting the value function, so that each agent learns a decentralized, executable cooperation policy. In addition, we propose a variant of GRA, named GRA-decen, which achieves decentralized training and decentralized execution (DTDE) when each agent only has access to the information of a subset of agents during learning. We conduct experiments in different environments and demonstrate the practicality and scalability of our algorithms.
Funding: Supported by the National Natural Science Foundation of China under Grants 61962023, 61562029, and 62466019.
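The attention-weighted aggregation GRA relies on can be illustrated in miniature: softmax-normalized attention scores weight the contribution of each neighbour's reward. This is a generic sketch (the function names, scalar rewards, and hand-picked scores are illustrative; the paper learns the scores with graph attention networks):

```python
import math

def softmax(scores):
    m = max(scores)                      # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def aggregate_reward(own_reward, neigh_rewards, attn_scores):
    """Attention-weighted reward aggregation: neighbours with higher
    attention scores contribute more to the aggregated reward."""
    weights = softmax(attn_scores)
    return own_reward + sum(w * r for w, r in zip(weights, neigh_rewards))

r_equal = aggregate_reward(1.0, [2.0, 0.0], [0.0, 0.0])    # equal weights
r_focus = aggregate_reward(1.0, [2.0, 0.0], [10.0, 0.0])   # focus on neighbour 1
```

With equal scores the neighbours average to 1.0; with a dominant score almost all weight falls on the first neighbour, so irrelevant agents' rewards are effectively filtered out.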
Abstract: This paper presents an adaptive multi-agent coordination (AMAC) strategy suitable for complex scenarios, which only requires information exchange between neighbouring robots. Compared with traditional multi-agent coordination methods solved by neural dynamics, the proposed strategy displays greater flexibility, adaptability, and scalability. Furthermore, the AMAC strategy is reconstructed as a time-varying complex-valued matrix equation. By introducing a dynamic error function, a fixed-time convergent zeroing neural network (FTCZNN) model is designed for the online solution of the AMAC strategy, and an upper bound on its convergence time is derived theoretically. Finally, the effectiveness and applicability of the coordination control method are demonstrated by numerical simulations and physical experiments. Numerical results indicate that the method can reduce the formation error to the order of 10^(-6) within 1.8 s.
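The zeroing neural network (ZNN) idea behind such models can be shown on the simplest case: for a scalar time-varying equation a(t)x(t) = b(t), define the error e = a·x − b and impose the design formula de/dt = −γe, which yields an ODE whose solution tracks the time-varying root. This is the classical linear-activation ZNN, not the paper's fixed-time FTCZNN variant, and all functions below are hypothetical examples:

```python
import math

def znn_solve(a, da, b, db, x0=0.0, gamma=50.0, dt=1e-3, T=2.0):
    """Euler-discretized ZNN for a(t)*x(t) = b(t).  From
    de/dt = -gamma*e with e = a*x - b one gets
    dx/dt = (b' - a'*x - gamma*(a*x - b)) / a."""
    x, t = x0, 0.0
    while t < T:
        e = a(t) * x - b(t)
        dx = (db(t) - da(t) * x - gamma * e) / a(t)
        x += dt * dx
        t += dt
    return x

# Track the root x(t) = sin(t) implied by a(t) = 2 + cos(t), b = a * sin:
a  = lambda t: 2.0 + math.cos(t)
da = lambda t: -math.sin(t)
b  = lambda t: (2.0 + math.cos(t)) * math.sin(t)
db = lambda t: -math.sin(t) ** 2 + (2.0 + math.cos(t)) * math.cos(t)
x_end = znn_solve(a, da, b, db)   # should be close to sin(2.0)
```

A fixed-time variant replaces the linear term −γe with a nonlinear activation of e, which is what gives the FTCZNN a convergence-time bound independent of the initial error.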
Abstract: This paper addresses the synchronization of follower agents' state vectors with that of a leader in high-order nonlinear multi-agent systems. The proposed low-complexity control scheme employs high-gain observers to estimate higher-order synchronization errors, enabling the controller to rely solely on relative output measurements. This approach significantly reduces the dependence on full-state information, which is often infeasible or costly to obtain in practical engineering applications. An output-feedback control strategy is developed to overcome these limitations while ensuring robust and effective synchronization. Simulation results demonstrate the effectiveness of the proposed approach and validate the theoretical findings.
Abstract: The modern world remains vulnerable to natural disasters, including floods, earthquakes, and wildfires. These events remain unpredictable and inevitable, and recovering quickly and effectively requires significant effort and expense. Monitoring is becoming more efficient thanks to technologies such as Unmanned Aerial Vehicles (UAVs), which can access hard-to-reach areas and provide real-time data. However, in disaster-affected areas, these monitoring systems may encounter many obstacles when communicating with servers or transmitting monitored data. This paper proposes an adaptive communication model to overcome the challenges faced in disaster-affected areas. A base station is responsible for collecting data (such as images and videos) captured by UAVs performing surveillance within its communication range. This station is typically a tower providing fixed cellular network service; in the absence of such a tower, a selected UAV may serve as the station, depending on the situation. If surveillance needs to be performed outside the coverage area, communication can continue via nearby UAVs through cooperative communication. UAVs with internet support, known as the Internet of Flying Things (IoFT), are also utilized to enhance communication capacity and efficiency. The proposed communication model is validated through experiments, showing superior data transmission performance and higher throughput. Analysis indicates that it outperforms traditional systems, even in rural areas, with or without internet access.
Funding: Supported by the National Natural Science Foundation of China Youth Fund (No. 62101579).
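The station-selection logic described above — use the fixed tower when in coverage, otherwise fall back to the nearest relay UAV — can be sketched as a small decision function (positions, ranges, and names are hypothetical; a real system would also weigh link quality, battery, and load):

```python
def pick_station(uav_pos, tower_pos, tower_range, relay_uavs):
    """Adaptive station selection: prefer the fixed cellular tower when
    the surveying UAV is inside its coverage radius; otherwise hand the
    traffic to the nearest relay UAV (cooperative communication)."""
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    if dist(uav_pos, tower_pos) <= tower_range:
        return "tower"
    return min(relay_uavs, key=lambda name: dist(uav_pos, relay_uavs[name]))

relays = {"uav_A": (8.0, 0.0), "uav_B": (20.0, 0.0)}
s_near = pick_station((3.0, 4.0), (0.0, 0.0), 6.0, relays)   # within range
s_far = pick_station((10.0, 0.0), (0.0, 0.0), 6.0, relays)   # relay needed
```

Chaining this choice hop by hop is what extends surveillance beyond the tower's coverage area.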
Abstract: Multi-Agent Systems (MAS), which consist of multiple interacting agents, are crucial in Cyber-Physical Systems (CPS) because they improve system adaptability, efficiency, and robustness through parallel processing and collaboration. However, most existing unsupervised meta-learning methods are centralized and unsuitable for multi-agent systems in which data are stored in a distributed fashion and inaccessible to all agents. Meta-GMVAE, based on the Variational Autoencoder (VAE) and set-level variational inference, is a sophisticated unsupervised meta-learning model that improves generative performance by efficiently learning data representations across various tasks, increasing adaptability and reducing sample requirements. Inspired by these advancements, we propose a novel Distributed Unsupervised Meta-Learning (DUML) framework based on Meta-GMVAE and a fusion strategy. Furthermore, we present a DUML algorithm based on the Gaussian Mixture Model (DUMLGMM), in which the parameters of the Gaussian mixture are solved by an Expectation-Maximization algorithm. Simulations on the Omniglot and Mini-ImageNet datasets show that DUMLGMM can achieve the performance of the corresponding centralized algorithm and outperforms a non-cooperative algorithm.
Funding: Supported by the National Natural Science Foundation of China (62463007, 62463005); the Natural Science Foundation of Hainan Province (625RC710, 625MS047); the System Control and Information Processing Education Ministry Key Laboratory Open Funding, China (Scip20240119); and the Science Research Funding of Hainan University, China (KYQD(ZR)22180, KYQD(ZR)23180).
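The Expectation-Maximization step used to fit a Gaussian mixture can be shown in one dimension: the E-step computes each point's responsibilities under the current components, and the M-step re-estimates weights, means, and variances from them. A minimal sketch with made-up data (this illustrates plain EM for a GMM, not the distributed DUMLGMM fusion scheme):

```python
import math

def gaussian(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em_step(data, pis, mus, vars_):
    """One EM iteration for a 1-D Gaussian mixture."""
    K = len(pis)
    # E-step: responsibilities r[n][k] = P(component k | x_n)
    resp = []
    for x in data:
        num = [pis[k] * gaussian(x, mus[k], vars_[k]) for k in range(K)]
        z = sum(num)
        resp.append([n / z for n in num])
    # M-step: re-estimate mixture weights, means, variances
    Nk = [sum(r[k] for r in resp) for k in range(K)]
    pis = [Nk[k] / len(data) for k in range(K)]
    mus = [sum(r[k] * x for r, x in zip(resp, data)) / Nk[k] for k in range(K)]
    vars_ = [sum(r[k] * (x - mus[k]) ** 2 for r, x in zip(resp, data)) / Nk[k]
             for k in range(K)]
    return pis, mus, vars_

data = [-2.1, -1.9, -2.0, 2.0, 1.9, 2.1]          # two obvious clusters
pis, mus, vars_ = [0.5, 0.5], [-1.0, 1.0], [1.0, 1.0]
for _ in range(20):
    pis, mus, vars_ = em_step(data, pis, mus, vars_)
```

After a few iterations the means settle near −2 and +2 with equal weights; in a distributed variant, agents would fit such parameters locally and then fuse them.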
Abstract: This paper focuses on leader-following positive consensus problems for heterogeneous switched multi-agent systems. First, a state-feedback controller with dynamic compensation is introduced to achieve positive consensus under average dwell-time switching. Sufficient conditions are then derived to guarantee the positive consensus. The gain matrices of the control protocol are described using a matrix decomposition approach, and the corresponding computational complexity is reduced by resorting to linear programming and co-positive Lyapunov functions. Finally, two numerical examples are provided to illustrate the results obtained.
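The positivity requirement underlying such problems has a simple classical characterization: a continuous-time linear system dx/dt = Ax keeps the nonnegative orthant invariant iff A is a Metzler matrix (all off-diagonal entries nonnegative). A minimal check with hypothetical matrices (this is the textbook condition, not the paper's LP-based design conditions):

```python
def is_metzler(A):
    """True iff every off-diagonal entry of A is nonnegative, i.e.
    dx/dt = A x is a positive system."""
    n = len(A)
    return all(A[i][j] >= 0 for i in range(n) for j in range(n) if i != j)

A_positive = [[-2.0, 1.0], [0.5, -1.0]]    # Metzler: positive system
A_not = [[-2.0, -1.0], [0.5, -1.0]]        # negative off-diagonal entry
```

Controller synthesis for positive consensus must preserve this structure in the closed loop, which is why linear programming with co-positive Lyapunov functions fits more naturally than generic LMI machinery.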
Abstract: In recent years, researchers have leveraged single-agent reinforcement learning to boost educational outcomes and deliver personalized interventions; yet this paradigm provides no capacity for inter-agent interaction. Multi-agent reinforcement learning (MARL) overcomes this limitation by allowing several agents to learn simultaneously within a shared environment, each choosing actions that maximize its own or the group's rewards. By explicitly modeling and exploiting agent-to-agent dynamics, MARL can align those interactions with pedagogical goals such as peer tutoring, collaborative problem-solving, or gamified competition, thus opening richer avenues for adaptive and socially informed learning experiences. This survey investigates the impact of MARL on educational outcomes by examining evidence of its effectiveness in enhancing learner performance, engagement, and equity, and in reducing teacher workload, compared with single-agent or traditional approaches. It explores the educational domains and pedagogical problems addressed by MARL, identifies the algorithmic families used, and analyzes their influence on learning. The review also assesses experimental settings and evaluation metrics to determine ecological validity, and outlines current challenges and future research directions in applying MARL to education.
Abstract: With the advent of sixth-generation mobile communications (6G), space-air-ground integrated networks have become mainstream. This paper focuses on collaborative scheduling for mobile edge computing (MEC) under a three-tier heterogeneous architecture composed of mobile devices, unmanned aerial vehicles (UAVs), and macro base stations (BSs). This scenario typically faces fast channel fading, dynamic computational loads, and energy constraints, whereas classical queuing-theoretic or convex-optimization approaches struggle to yield robust solutions in highly dynamic settings. To address this issue, we formulate a multi-agent Markov decision process (MDP) for an air-ground-fused MEC system, unify link selection, bandwidth/power allocation, and task offloading into a continuous action space, and propose a joint scheduling strategy based on an improved MATD3 algorithm. The improvements include Alternating Layer Normalization (ALN) in the actor to suppress gradient variance, Residual Orthogonalization (RO) in the critic to reduce the correlation between the twin Q-value estimates, and a dynamic-temperature reward to enable adaptive trade-offs during training. On a multi-user, dual-link simulation platform, we conduct ablation and baseline comparisons. The results reveal that the proposed method has better convergence and stability, achieving more robust performance across key metrics than MADDPG, TD3, and DSAC.
Funding: Supported by the National Natural Science Foundation of China (No. 52477097); the Guangdong Basic and Applied Basic Research Foundation (2023A1515240014); and the State Key Laboratory of Advanced Electromagnetic Technology (Grant No. AET 2024KF005).
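The twin-critic mechanism that MATD3 inherits from TD3 is easy to show in isolation: the bootstrap target takes the minimum of the two critics' next-state values, which curbs value overestimation. A minimal sketch with hypothetical numbers (the paper's Residual Orthogonalization additionally decorrelates the twins, which is not modeled here):

```python
def td3_target(r, q1_next, q2_next, gamma=0.99, done=False):
    """TD3-style target value: bootstrap from the pessimistic
    (minimum) of the twin critics' estimates of the next state."""
    bootstrap = 0.0 if done else gamma * min(q1_next, q2_next)
    return r + bootstrap

y = td3_target(1.0, q1_next=5.0, q2_next=4.0)  # uses the smaller estimate
y_terminal = td3_target(1.0, 5.0, 4.0, done=True)
```

Taking the minimum only helps when the two estimates are not strongly correlated, which is precisely the motivation the abstract gives for decorrelating the twin Q-networks.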
Abstract: To maximize the profits of power grid operators (GOs), load aggregators (LAs), and electricity customers (ECs), this paper proposes a hierarchical demand response (HDR) framework that considers competing interaction based on the multi-agent deep deterministic policy gradient (MaDDPG). The ECs are divided into conventional ECs and electric vehicles (EVs), managed by an ECs agent (ECA) and an EV agent (EVA) respectively, to exploit the flexibility of the HDR framework. The HDR is thus a tri-layer model in which five types of agents engage in competing interaction to maximize their own profits. To address the limitations of mathematical expression and participation scale of the Stackelberg game within the HDR model, a dynamic interaction mechanism is adopted. Moreover, to handle the multiple entities involved in the HDR, MaDDPG develops multiple agents to simulate the dynamic competing interactions between the subjects and to solve the continuous action control problem. MaDDPG further adopts soft target updates and priority experience replay to ensure stable and effective training, and uses exploration noise to make the exploration strategy comprehensive. Simulation studies verify the performance of MaDDPG with the dynamic interaction mechanism in handling multi-layer, multi-agent continuous action control, compared with the double deep Q network (DDQN), deep Q network (DQN), and dueling DQN. Additionally, comparisons of the proposed HDR with price-based DR (PBDR) and incentive-based DR (IBDR) are analyzed to investigate the flexibility of the HDR.
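The two stabilization tricks named above, soft target updates and priority experience replay, can each be sketched in a couple of lines. This is a generic illustration with hypothetical parameter lists and priorities, not the paper's MaDDPG implementation:

```python
import random

def soft_update(target, main, tau=0.01):
    """Polyak (soft) target-network update, applied per parameter:
    target <- tau * main + (1 - tau) * target."""
    return [tau * m + (1 - tau) * t for m, t in zip(main, target)]

def sample_prioritized(buffer, priorities, k, alpha=0.6):
    """Priority experience replay sketch: transitions with larger
    priority (e.g. TD error) are sampled proportionally more often."""
    weights = [p ** alpha for p in priorities]
    return random.choices(buffer, weights=weights, k=k)

new_target = soft_update([0.0, 0.0], [1.0, 2.0], tau=0.1)
batch = sample_prioritized(["low", "high"], [0.0, 1.0], k=5)
```

The small tau makes the target network trail the main network slowly, which keeps the bootstrap targets consistent between gradient steps; prioritization focuses those steps on the most surprising transitions.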