Among the promising applications of autonomous surface vessels (ASVs) is the use of multiple autonomous tugs to manipulate a floating object such as an oil platform, a broken ship, or a ship in a port area. Considering the real conditions and operations of maritime practice, this paper proposes a multi-agent control algorithm that manipulates a ship to a desired position with a desired heading and velocity under environmental disturbances. The control architecture consists of a supervisory controller in the higher layer and tug controllers in the lower layer. The supervisory controller allocates the towing forces and angles between the tugs and the ship by minimizing the error in the position and velocity of the ship. The weight coefficients in the cost function are designed to be adaptive, both to guarantee that the towing system functions well under environmental disturbances and to enhance the efficiency of the towing system. Each tug controller provides the forces to tow the ship and tracks a reference trajectory that is computed online from the towing angles calculated by the supervisory controller. Simulation results show that the proposed algorithm enables two autonomous tugs to cooperatively tow a ship to a desired position with a desired heading and velocity under even harsh environmental disturbances.
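As a rough illustration of the allocation step, the sketch below solves a simplified two-tug version in closed form: the tugs are assumed to act at longitudinal offsets ±L from midship, and the surge demand is split equally. The geometry, L, and all numbers are illustrative assumptions, not the paper's adaptive weighted optimization.

```python
import math

def allocate_tugs(Fx, Fy, Nz, L=30.0):
    """Allocate a desired net surge force Fx, sway force Fy, and yaw
    moment Nz to two tugs at longitudinal offsets +L and -L (hypothetical
    geometry).  Returns (f1, a1, f2, a2): force magnitudes and angles."""
    # Sway components follow from the force/moment balance:
    #   f1*sin(a1) + f2*sin(a2) = Fy
    #   L*f1*sin(a1) - L*f2*sin(a2) = Nz
    s1 = (Fy + Nz / L) / 2.0
    s2 = (Fy - Nz / L) / 2.0
    # Split the surge demand equally between the tugs (an assumption).
    c1 = c2 = Fx / 2.0
    f1, a1 = math.hypot(c1, s1), math.atan2(s1, c1)
    f2, a2 = math.hypot(c2, s2), math.atan2(s2, c2)
    return f1, a1, f2, a2

def net_wrench(f1, a1, f2, a2, L=30.0):
    """Reconstruct the net surge/sway force and yaw moment from the tugs."""
    Fx = f1 * math.cos(a1) + f2 * math.cos(a2)
    Fy = f1 * math.sin(a1) + f2 * math.sin(a2)
    Nz = L * f1 * math.sin(a1) - L * f2 * math.sin(a2)
    return Fx, Fy, Nz
```

Round-tripping a demanded wrench through the allocation recovers it exactly, which is what makes the closed-form split a reasonable baseline against which the paper's adaptive cost weights can be compared.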
The high redundancy actuator (HRA) concept is a novel approach to fault-tolerant actuation that uses a large number of small actuation elements, assembled in series and parallel, to form a single actuator with intrinsic fault tolerance. Whilst this structure affords resilience under passive control methods alone, active control approaches are likely to provide higher levels of performance. A multiple-model control scheme for an HRA, applied through the framework of multi-agent control, is presented here. The application of this approach to a 10×10 HRA is discussed, and reconfiguration delays and fault detection errors are considered. The example shows that multi-agent control can provide tangible performance improvements and increased fault tolerance in comparison with a passive fault-tolerant approach. Reconfiguration delays are shown to be tolerable, and a strategy for handling false fault detections is detailed.
This paper investigates the consensus tracking control problem for high-order nonlinear multi-agent systems subject to non-affine faults, partially measurable states, uncertain control coefficients, and unknown external disturbances. Under directed topology conditions, an observer-based finite-time control strategy based on adaptive backstepping is proposed, in which a neural-network-based state observer is employed to approximate the unmeasurable system states. To address the explosion-of-complexity problem associated with the backstepping method, a finite-time command filter is incorporated, with error compensation signals designed to mitigate the filter-induced errors. Additionally, a Butterworth low-pass filter is introduced to avoid the algebraic loop problem in the controller design. The finite-time stability of the closed-loop system is rigorously analyzed with the finite-time Lyapunov stability criterion, showing that all closed-loop signals remain bounded within a finite time. Finally, the effectiveness of the proposed control strategy is verified through a simulation example.
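As a stand-in for the command filter and its error compensation signal, the first-order sketch below shows the mechanism: the filter smooths the virtual control, and a compensation state absorbs the filtering error. The filter form, gains, and step sizes are illustrative assumptions; the paper's filter converges in finite time, which this simple exponential version does not.

```python
def command_filter_step(xf, u, T, dt):
    """One Euler step of a first-order command filter T*xf_dot = u - xf
    (a simplified stand-in for the paper's finite-time filter)."""
    return xf + dt * (u - xf) / T

def compensation_step(xi, xf, u, k, dt):
    """Error compensation signal driven by the filtering error:
    xi_dot = -k*xi + (xf - u) (simplified form; gains are illustrative)."""
    return xi + dt * (-k * xi + (xf - u))

# Filter a unit step command; both the filter lag and the
# compensation signal should die out.
xf, xi = 0.0, 0.0
for _ in range(2000):
    xf = command_filter_step(xf, 1.0, T=0.05, dt=0.001)
    xi = compensation_step(xi, xf, 1.0, k=5.0, dt=0.001)
```

The point of the compensation signal is that the tracking error analyzed in the Lyapunov argument is taken relative to the compensated error, so the filter lag does not accumulate through the backstepping recursion.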
This paper is concerned with adaptive consensus tracking control of nonlinear multi-agent systems with actuator faults and unknown nonidentical control directions under double semi-Markovian switching topologies. Considering the complex working environment and the differences in the stability of communication links between leaders and followers, a double semi-Markov process is first introduced to describe the random switching of communication topologies in the leader-follower structure. To address the challenges posed by the unknown nonidentical control directions and partial-loss-of-effectiveness actuator faults, a completely independent parameter is introduced into the Nussbaum function to overcome the inherent obstacle of mutual cancellation and to avoid a rapid growth rate. Since only the state information of the agents is transmitted among them, an adaptive distributed fault-tolerant consensus tracking controller is proposed for the double semi-Markovian switching topologies using the designed Nussbaum function. Furthermore, the stability of the closed-loop nonlinear multi-agent system is analyzed using a contradiction argument and the Lyapunov theorem, from which asymptotic consensus tracking in the mean-square sense is obtained. A numerical simulation example is provided to verify the effectiveness of the proposed algorithm.
The bipartite containment control problem for heterogeneous nonlinear multi-agent systems (HNMASs) within multi-group networks under signed digraphs is investigated, where first-order and second-order nonlinear dynamic agents belong to distinct groups. Interactions are cooperative-antagonistic within each group and sign-in-degree balanced across groups. First, a state feedback control protocol is designed to ensure that the trajectories of followers in the different groups converge to the distinct convex hulls formed by their corresponding leaders. As an extension, the bipartite control problem with time-varying formation for the multi-agent system (MAS) is also considered, and a corresponding control protocol with formation compensation vectors is given. Finally, in view of Lyapunov stability theory and matrix inequalities, sufficient conditions for realizing bipartite containment control are obtained, and several simulations are provided to verify the validity of the above methods.
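Bipartite behaviors on signed graphs are usually tied to structural balance, which can be verified by a two-coloring: positive edges force agents onto the same side, negative edges onto opposite sides. The sketch below is a generic, undirected illustration of that idea, not the paper's sign-in-degree-balance condition, which is a different (weaker) assumption.

```python
from collections import deque

def structurally_balanced(n, signed_edges):
    """Check structural balance of a signed graph by 2-coloring.
    Balance is what lets a gauge (signature) transformation turn
    bipartite consensus into ordinary consensus.  Edges are treated
    as undirected triples (i, j, sign)."""
    adj = [[] for _ in range(n)]
    for i, j, s in signed_edges:
        adj[i].append((j, s))
        adj[j].append((i, s))
    side = [0] * n                       # 0 = unvisited, +1 / -1 = sides
    for root in range(n):
        if side[root]:
            continue
        side[root] = 1
        q = deque([root])
        while q:
            i = q.popleft()
            for j, s in adj[i]:
                want = side[i] if s > 0 else -side[i]
                if side[j] == 0:
                    side[j] = want
                    q.append(j)
                elif side[j] != want:    # an inconsistent (odd-negative) cycle
                    return False, None
    return True, side
```

A balanced graph splits into two antagonistic camps; an odd number of negative edges on any cycle breaks the split.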
The wireless cloud robotic system (WCRS), which fully integrates sensing, communication, computing, and control capabilities into an intelligent agent, is a promising way to achieve intelligent manufacturing thanks to easy deployment and flexible expansion. However, high-precision control of a WCRS requires deterministic wireless communication, which is always challenging in a complex and dynamic radio space. This paper employs a reconfigurable intelligent surface (RIS) to establish a novel RIS-assisted WCRS architecture, where the radio channel is controlled to achieve ultra-reliable, low-delay, and low-jitter communication for high-precision closed-loop motion control. Control and communication are strongly coupled, however, and should be co-optimized. Fully considering the constraints of the control input threshold, control delay deadline, beam phase, antenna power, and information distortion, we establish a stability maximization problem to jointly optimize control input compensation, the RIS phase shift, and beamforming. Herein, a new jitter-oriented system stability objective with respect to control error and communication jitter is defined, and the closed-form expression of the control delay deadline is derived based on Jensen's inequality and a Lyapunov-Krasovskii functional. Owing to the time-varying nature and partial observability of the channel and robot states, we model the problem as a partially observable Markov decision process (POMDP). To solve this complex problem, we propose a multi-agent transfer reinforcement learning algorithm named LSTM-PPO-MATRL, in which an LSTM-enhanced proximal policy optimization (PPO) is designed to approximate an optimal solution and option-guided policy transfer learning is proposed to facilitate the learning process. Using centralized training and decentralized execution, LSTM-PPO-MATRL is validated by extensive experiments on MuJoCo tasks for both low-mobility and high-mobility robotic control scenarios. The results demonstrate that LSTM-PPO-MATRL not only achieves high learning efficiency but also supports low-delay, low-jitter communication for low-error control, with a 71.9% improvement in control accuracy and a 68.7% reduction in delay jitter compared with the PPO-MADRL baseline.
To maximize the profits of power grid operators (GOs), load aggregators (LAs), and electricity customers (ECs), this paper proposes a hierarchical demand response (HDR) framework that captures competing interactions based on the multi-agent deep deterministic policy gradient (MaDDPG). The ECs are divided into conventional ECs and electric vehicles (EVs), which are managed by an EC agent (ECA) and an EV agent (EVA) to exploit the flexibility of the HDR framework. The HDR is thus a tri-layer model determined by five types of agents engaging in competing interactions to maximize their own profits. To address the limitations on mathematical expression and participation scale of the Stackelberg game within the HDR model, a dynamic interaction mechanism is adopted. Moreover, to handle the various entities involved in the HDR, MaDDPG develops multiple agents to simulate the dynamic competing interactions between the subjects and to solve the continuous action control problem. Furthermore, MaDDPG adopts a soft target update and a prioritized experience replay method to ensure stable and effective training, and makes the exploration strategy comprehensive by using exploration noise. Simulation studies verify the performance of MaDDPG with the dynamic interaction mechanism in dealing with multi-layer multi-agent continuous action control, compared with the double deep Q network (DDQN), deep Q network (DQN), and dueling DQN. Additionally, comparisons of the proposed HDR with price-based DR (PBDR) and incentive-based DR (IBDR) are analyzed to investigate the flexibility of the HDR.
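The soft target update mentioned above has a one-line form; the sketch below treats network parameters as plain lists (τ = 0.005 and the parameter values are illustrative):

```python
def soft_update(target, online, tau=0.005):
    """Polyak ('soft') target-network update used by DDPG-family methods:
    theta_target <- tau * theta_online + (1 - tau) * theta_target."""
    return [(1.0 - tau) * t + tau * o for t, o in zip(target, online)]

# Repeated soft updates drag the target parameters toward the online ones
# without the abrupt jumps of a periodic hard copy.
target = [0.0, 0.0]
online = [1.0, -2.0]
for _ in range(2000):
    target = soft_update(target, online)
```

The slowly moving target stabilizes the bootstrapped Q-value estimates, which is why DDPG-style methods prefer it over hard copies.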
Multi-agent reinforcement learning (MARL) has proven its effectiveness in cooperative multi-agent systems (MASs) but still faces issues with the curse of dimensionality and learning efficiency. The main difficulty is caused by the strong inter-agent coupling embedded in a MARL problem, which has yet to be fully exploited by existing algorithms. In this work, we recognize a learning graph characterizing the dependence between individual rewards and individual policies. We then propose a graph-based reward aggregation (GRA) method, which utilizes the inherent coupling relationship among agents to eliminate redundant information. Specifically, GRA passes information among cooperating agents through graph attention networks to obtain aggregated rewards that contribute to fitting the value function, allowing each agent to learn a decentralized, executable cooperation policy. In addition, we propose a variant of GRA, named GRA-decen, which achieves decentralized training and decentralized execution (DTDE) when each agent only has access to the information of a subset of agents during learning. We conduct experiments in different environments and demonstrate the practicality and scalability of our algorithms.
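A minimal scalar sketch of attention-weighted reward aggregation, assuming the attention scores are simply given rather than produced by a trained graph attention network as in GRA:

```python
import math

def aggregate_rewards(rewards, scores):
    """Aggregate neighbours' rewards with softmax attention weights.
    scores[i][j] is agent i's compatibility with neighbour j (here a
    given number; in GRA it would come from a learned attention head)."""
    out = []
    for row in scores:
        m = max(row)
        w = [math.exp(s - m) for s in row]       # numerically stable softmax
        z = sum(w)
        alpha = [x / z for x in w]               # weights sum to 1
        out.append(sum(a * r for a, r in zip(alpha, rewards)))
    return out
```

Because the weights form a convex combination, each aggregated reward stays inside the range of the neighbours' rewards; uniform scores reduce to a plain average.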
This paper presents an adaptive multi-agent coordination (AMAC) strategy suitable for complex scenarios, which only requires information exchange between neighbouring robots. Unlike traditional multi-agent coordination methods solved by neural dynamics, the proposed strategy displays greater flexibility, adaptability, and scalability. Furthermore, the AMAC strategy is reformulated as a time-varying complex-valued matrix equation. By introducing a dynamic error function, a fixed-time convergent zeroing neural network (FTCZNN) model is designed for the online solution of the AMAC strategy, and an upper bound on its convergence time is derived theoretically. Finally, the effectiveness and applicability of the coordination control method are demonstrated by numerical simulations and physical experiments. Numerical results indicate that this method can reduce the formation error to the order of 10^(-6) within 1.8 s.
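A zeroing neural network drives the error e(t) = a(t)x(t) − b(t) of a time-varying equation to zero by enforcing ė = −γφ(e). The scalar sketch below uses a linear activation and an Euler discretization; the paper's FTCZNN is complex-valued, matrix-valued, and fixed-time convergent, so the coefficients, gain, and activation here are all illustrative simplifications.

```python
import math

def znn_track(gamma=50.0, dt=1e-4, T=2.0):
    """Scalar ZNN for the time-varying equation a(t)*x(t) = b(t):
    from e_dot = -gamma*e one gets
        x_dot = (b_dot - a_dot*x - gamma*(a*x - b)) / a.
    Returns the state x and final time t after Euler integration."""
    x, t = 0.0, 0.0
    for _ in range(int(T / dt)):
        a = 2.0 + math.sin(t)            # illustrative time-varying coefficients
        b = math.cos(t)
        a_dot = math.cos(t)
        b_dot = -math.sin(t)
        e = a * x - b
        x += dt * (b_dot - a_dot * x - gamma * e) / a
        t += dt
    return x, t
```

The derivative feedforward terms (a_dot, b_dot) are what let the error vanish even though the target x(t) = b(t)/a(t) keeps moving; a plain gradient method would lag behind it.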
This paper addresses the synchronization of follower agents' state vectors with that of a leader in high-order nonlinear multi-agent systems. The proposed low-complexity control scheme employs high-gain observers to estimate higher-order synchronization errors, enabling the controller to rely solely on relative output measurements. This approach significantly reduces the dependence on full-state information, which is often infeasible or costly to obtain in practical engineering applications. An output feedback control strategy is developed to overcome these limitations while ensuring robust and effective synchronization. Simulation results demonstrate the effectiveness of the proposed approach and validate the theoretical findings.
Multi-Agent Systems (MAS), which consist of multiple interacting agents, are crucial in Cyber-Physical Systems (CPS) because they improve system adaptability, efficiency, and robustness through parallel processing and collaboration. However, most existing unsupervised meta-learning methods are centralized and thus unsuitable for multi-agent systems in which data are stored in a distributed manner and are inaccessible to all agents. Meta-GMVAE, based on the Variational Autoencoder (VAE) and set-level variational inference, is a sophisticated unsupervised meta-learning model that improves generative performance by efficiently learning data representations across various tasks, increasing adaptability and reducing sample requirements. Inspired by these advances, we propose a novel Distributed Unsupervised Meta-Learning (DUML) framework based on Meta-GMVAE and a fusion strategy. Furthermore, we present a DUML algorithm based on the Gaussian Mixture Model (DUMLGMM), in which the parameters of the Gaussian mixture are obtained with an Expectation-Maximization algorithm. Simulations on the Omniglot and MiniImageNet datasets show that DUMLGMM can match the performance of the corresponding centralized algorithm and outperform the non-cooperative algorithm.
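The E-step at the heart of an Expectation-Maximization solver for a Gaussian mixture computes per-component responsibilities. A minimal one-dimensional sketch (toy numbers, not DUMLGMM's distributed variant):

```python
import math

def gmm_responsibilities(xs, means, sigmas, weights):
    """E-step of EM for a 1-D Gaussian mixture: the posterior
    responsibility of each component for each sample, which the
    M-step would then use to re-estimate means, variances, weights."""
    def pdf(x, m, s):
        return math.exp(-((x - m) ** 2) / (2 * s * s)) / (s * math.sqrt(2 * math.pi))
    resp = []
    for x in xs:
        p = [w * pdf(x, m, s) for w, m, s in zip(weights, means, sigmas)]
        z = sum(p)
        resp.append([v / z for v in p])      # normalize: each row sums to 1
    return resp
```

Well-separated components claim their nearby samples almost exclusively, which is what makes the subsequent M-step averages meaningful.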
This paper focuses on the leader-following positive consensus problem of heterogeneous switched multi-agent systems. First, a state-feedback controller with dynamic compensation is introduced to achieve positive consensus under average dwell-time switching. Sufficient conditions are then derived to guarantee positive consensus. The gain matrices of the control protocol are described using a matrix decomposition approach, and the corresponding computational complexity is reduced by resorting to linear programming and co-positive Lyapunov functions. Finally, two numerical examples illustrate the results obtained.
In recent years, researchers have leveraged single-agent reinforcement learning to boost educational outcomes and deliver personalized interventions; yet this paradigm provides no capacity for inter-agent interaction. Multi-agent reinforcement learning (MARL) overcomes this limitation by allowing several agents to learn simultaneously within a shared environment, each choosing actions that maximize its own or the group's rewards. By explicitly modeling and exploiting agent-to-agent dynamics, MARL can align those interactions with pedagogical goals such as peer tutoring, collaborative problem solving, or gamified competition, opening richer avenues for adaptive and socially informed learning experiences. This survey investigates the impact of MARL on educational outcomes by examining evidence of its effectiveness in enhancing learner performance, engagement, and equity, and in reducing teacher workload, compared with single-agent or traditional approaches. It explores the educational domains and pedagogical problems addressed by MARL, identifies the algorithmic families used, and analyzes their influence on learning. The review also assesses experimental settings and evaluation metrics to determine ecological validity, and outlines current challenges and future research directions in applying MARL to education.
With the advent of sixth-generation mobile communications (6G), space-air-ground integrated networks have become mainstream. This paper focuses on collaborative scheduling for mobile edge computing (MEC) under a three-tier heterogeneous architecture composed of mobile devices, unmanned aerial vehicles (UAVs), and macro base stations (BSs). This scenario typically faces fast channel fading, dynamic computational loads, and energy constraints, whereas classical queuing-theoretic or convex-optimization approaches struggle to yield robust solutions in highly dynamic settings. To address this issue, we formulate a multi-agent Markov decision process (MDP) for an air-ground-fused MEC system, unify link selection, bandwidth/power allocation, and task offloading into a continuous action space, and propose a joint scheduling strategy based on an improved MATD3 algorithm. The improvements include Alternating Layer Normalization (ALN) in the actor to suppress gradient variance, Residual Orthogonalization (RO) in the critic to reduce the correlation between the twin Q-value estimates, and a dynamic-temperature reward to enable adaptive trade-offs during training. On a multi-user, dual-link simulation platform, we conduct ablation studies and baseline comparisons. The results reveal that the proposed method has better convergence and stability. Compared with MADDPG, TD3, and DSAC, our algorithm achieves more robust performance across key metrics.
Addressing optimal confrontation methods in multi-agent attack-defense scenarios is a complex challenge. Multi-Agent Reinforcement Learning (MARL) provides an effective framework for tackling sequential decision-making problems, significantly enhancing swarm intelligence in maneuvering. However, applying MARL to unmanned swarms presents two primary challenges. First, defensive agents must balance autonomy with collaboration under limited perception while coordinating against adversaries. Second, current algorithms aim to maximize global or individual rewards, making them sensitive to fluctuations in enemy strategies and environmental changes, especially when rewards are sparse. To tackle these issues, we propose Multi-Agent Reinforcement Learning with Layered Autonomy and Collaboration (MARL-LAC) for collaborative confrontations. The algorithm integrates dual twin critics to mitigate the high variance associated with policy gradients. Furthermore, MARL-LAC employs layered autonomy and collaboration to address multi-objective problems, specifically learning a global reward function for the swarm alongside local reward functions for individual defensive agents. Experimental results demonstrate that MARL-LAC enhances decision-making and collaborative behaviors among agents, outperforming existing algorithms and underscoring the importance of layered autonomy and collaboration in multi-agent systems. The observed adversarial behaviors show that agents using MARL-LAC maintain cohesive formations that conceal their intentions by confusing the offensive agent while successfully encircling the target.
This paper addresses the consensus problem of nonlinear multi-agent systems subject to external disturbances and uncertainties under denial-of-service (DoS) attacks. First, an observer-based state feedback control method is employed to achieve secure control by estimating the system's state in real time. Second, by combining a memory-based adaptive event-triggered mechanism with neural networks, the paper approximates the nonlinear terms in the networked system while efficiently conserving system resources. Finally, based on a two-degree-of-freedom model of a vehicle affected by crosswinds, a multi-unmanned-ground-vehicle (Multi-UGV) system is constructed to validate the effectiveness of the proposed method. Simulation results show that the proposed control strategy can effectively handle external disturbances such as crosswinds in practical applications, ensuring the stability and reliable operation of the Multi-UGV system.
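A static-threshold sketch of event-triggered transmission, showing where the resource saving comes from: the agent re-sends its state only when it has drifted sufficiently far from the last transmitted value. The paper's mechanism is adaptive and memory-based; the signal and threshold here are illustrative.

```python
def event_triggered_run(signal, threshold):
    """Static event-triggered transmission: record a (step, value)
    event only when the state deviates from the last transmitted
    value by more than the threshold."""
    sent = []
    last = None
    for k, x in enumerate(signal):
        if last is None or abs(x - last) > threshold:
            sent.append((k, x))
            last = x
    return sent

signal = [0.01 * k for k in range(100)]      # slowly ramping state
events = event_triggered_run(signal, threshold=0.1)
```

On this slow ramp only one sample in eleven crosses the threshold, so network traffic drops by roughly an order of magnitude relative to periodic sampling.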
Multimodal dialogue systems often fail to maintain coherent reasoning over extended conversations and suffer from hallucination due to limited context-modeling capabilities. Current approaches struggle with cross-modal alignment, temporal consistency, and robust handling of noisy or incomplete inputs across multiple modalities. We propose Multi-Agent Chain of Thought (CoT), a novel multi-agent chain-of-thought reasoning framework in which specialized agents for the text, vision, and speech modalities collaboratively construct shared reasoning traces through inter-agent message passing and consensus voting mechanisms. Our architecture incorporates self-reflection modules, conflict resolution protocols, and dynamic rationale alignment to enhance consistency, factual accuracy, and user engagement. The framework employs a hierarchical attention mechanism with cross-modal fusion and implements adaptive reasoning depth based on dialogue complexity. Comprehensive evaluations on Situated Interactive Multi-Modal Conversations (SIMMC) 2.0, VisDial v1.0, and newly introduced challenging scenarios demonstrate statistically significant improvements in grounding accuracy (p<0.01), chain-of-thought interpretability, and robustness to adversarial inputs compared with state-of-the-art monolithic transformer baselines and existing multi-agent approaches.
This paper presents Dual Adaptive Neural Topology (Dual ANT), a distributed dual-network meta-adaptive framework that enhances ant-colony-based multi-agent coordination with online introspection, adaptive parameter control, and privacy-preserving interactions. The approach improves standard Ant Colony Optimization (ACO) with two lightweight neural components: a forward network that estimates swarm efficiency in real time and an inverse network that converts these descriptors into parameter adaptations. To preserve the privacy of individual trajectories in shared pheromone maps, we introduce a locally differentially private pheromone update mechanism that adds calibrated noise to each agent's pheromone deposit while preserving the efficacy of the global pheromone signal. The resulting system enables agents to dynamically and autonomously adapt their coordination strategies under challenging and dynamic conditions, including varying obstacle layouts, uncertain target locations, and time-varying disturbances. Extensive simulations of large grid-based search tasks demonstrate that Dual ANT achieves faster convergence, higher robustness, and improved scalability compared with advanced baselines such as Multi-Strategy ACO and Hierarchical ACO. The meta-adaptive feedback loop compensates for the performance degradation caused by privacy noise and prevents premature stagnation by triggering Lévy flight exploration only when necessary.
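The Laplace mechanism is the standard way to calibrate noise for local differential privacy; the sketch below applies it to a pheromone deposit under that assumption (the paper's exact calibration and sensitivity bound may differ).

```python
import math
import random

def ldp_deposit(amount, epsilon, sensitivity=1.0, rng=random):
    """Locally differentially private deposit: perturb the pheromone
    amount with Laplace noise of scale sensitivity/epsilon before it
    is written to the shared map (standard Laplace mechanism)."""
    scale = sensitivity / epsilon
    u = rng.random() - 0.5               # u in [-0.5, 0.5)
    # Inverse-CDF sampling of a Laplace(0, scale) variate.
    sign = 1.0 if u >= 0 else -1.0
    noise = -scale * sign * math.log(1.0 - 2.0 * abs(u))
    return amount + noise
```

The noise is zero-mean, so many agents' deposits average out and the global pheromone gradient survives, while any single deposit reveals little about the trajectory of the agent that made it.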
Formation control in multi-agent systems has become a critical area of interest due to its wide-ranging applications in robotics, autonomous transportation, and surveillance. While various studies have explored distributed cooperative control, this review focuses on the theoretical foundations and recent developments in formation control strategies. The paper categorizes and analyzes key formation types, including formation maintenance, group or cluster formation, bipartite formations, event-triggered formations, finite-time convergence, and constrained formations. A significant portion of the review addresses formation control under constrained dynamics, presenting both model-based and model-free approaches that consider practical limitations such as actuator bounds, communication delays, and nonholonomic constraints. Additionally, the paper discusses emerging trends, including the integration of event-driven mechanisms and AI-enhanced coordination strategies. Comparative evaluations highlight the trade-offs among the various methodologies regarding scalability, robustness, and real-world feasibility. Practical implementations are reviewed across diverse platforms, and the current achievements and unresolved challenges in the field are identified. The paper concludes by outlining promising research directions, such as adaptive control for dynamic environments, energy-efficient coordination, and learning-based control under uncertainty. This review synthesizes the current state of the art and provides a road map for future investigation, making it a valuable reference for researchers and practitioners aiming to advance formation control in multi-agent systems.
The development of chassis active safety control technology has improved vehicle stability under extreme conditions. However, its cross-system and multi-functional characteristics make it difficult for the controller to achieve cooperative goals. In addition, the chassis system's high complexity, numerous subsystems, and strong coupling lead to low computing efficiency and poor control performance. This paper therefore proposes a scenario-driven hybrid distributed model predictive control algorithm with a variable control topology. The algorithm divides multiple stability regions based on the vehicle's β−γ phase plane, forming a mapping between the control structure and the vehicle's state. A control input fusion mechanism within the transition domain is designed to mitigate the system state oscillation and control input jitter caused by switching control structures. A distributed state-space equation with state-coupling and input-coupling characteristics is then constructed, and a weighted local-agent cost function in quadratic programming form is derived. Through cost coupling, local agents can coordinate global performance goals. Finally, through Simulink/CarSim joint simulation and hardware-in-the-loop (HIL) tests, the proposed algorithm is shown to improve vehicle stability while ensuring trajectory tracking accuracy, with good applicability to multi-objective coordinated control. This paper combines the advantages of distributed MPC and decentralized MPC, striking a balance between approximating the globally optimal result and solution efficiency.
Funding: supported by the China Scholarship Council (201806950080), the Researchlab Autonomous Shipping (RAS) of Delft University of Technology, and the INTERREG North Sea Region grant "AVATAR" funded by the European Regional Development Fund.
Funding: supported by the UK's Engineering and Physical Sciences Research Council (EPSRC) (No. EP/D078350/1).
Funding: Supported in part by the Beijing Natural Science Foundation under Grant 4252050, in part by the National Science Fund for Distinguished Young Scholars under Grant 62425304, and in part by the Basic Science Center Programs of NSFC under Grant 62088101.
Abstract: This paper investigates the consensus tracking control problem for high-order nonlinear multi-agent systems subject to non-affine faults, partially measurable states, uncertain control coefficients, and unknown external disturbances. Under directed topology conditions, an observer-based finite-time control strategy based on adaptive backstepping is proposed, in which a neural-network-based state observer approximates the unmeasurable system state variables. To address the complexity-explosion problem associated with the backstepping method, a finite-time command filter is incorporated, with error compensation signals designed to mitigate the filter-induced errors. Additionally, a Butterworth low-pass filter is introduced to avoid the algebraic loop problem in the controller design. The finite-time stability of the closed-loop system is rigorously analyzed with the finite-time Lyapunov stability criterion, validating that all closed-loop signals remain bounded within a finite time. Finally, the effectiveness of the proposed control strategy is verified through a simulation example.
Funding: Supported by the National Natural Science Foundation of China (62333011, 62020106003), the Natural Science Foundation of Jiangsu Province of China (BK20222012), the Fundamental Research Funds for the Central Universities (NE2024005), and the Postgraduate Research & Practice Innovation Program of Jiangsu Province (KYCX24_0594).
Abstract: This paper is concerned with adaptive consensus tracking control of nonlinear multi-agent systems with actuator faults and unknown nonidentical control directions under double semi-Markovian switching topologies. Considering the complex working environment and the differences in the stability of communication links between leaders and followers, a double semi-Markov process is first introduced to describe the random switching of communication topologies in the leader-follower structure. To address the challenges arising from unknown nonidentical control directions and partial-loss-of-effectiveness actuator faults, a completely independent parameter is introduced into the Nussbaum function to overcome the inherent obstacle of mutual cancellation and to avoid a rapid growth rate. Considering that only the state information of the agents is transmitted among them, an adaptive distributed fault-tolerant consensus tracking control is proposed under the double semi-Markovian switching topologies using the designed Nussbaum function. Furthermore, the stability of the closed-loop nonlinear multi-agent systems is analyzed using a contradiction argument and the Lyapunov theorem, from which asymptotic consensus tracking in the mean-square sense is obtained. A numerical simulation example is provided to verify the effectiveness of the proposed algorithm.
Funding: National Natural Science Foundation of China (No. 12071370).
Abstract: The bipartite containment control problem for heterogeneous nonlinear multi-agent systems (HNMASs) within multi-group networks under signed digraphs is investigated, where the first-order and second-order nonlinear dynamic agents belong to distinct groups. Interactions are cooperative-antagonistic within each group and sign in-degree balanced across groups. Firstly, a state feedback control protocol is designed to ensure that the trajectories of followers in different groups converge to the distinct convex hulls formed by their corresponding leaders. As an extension, the bipartite control problem with time-varying formation for the multi-agent system (MAS) is also considered, and a corresponding control protocol with formation compensation vectors is given. Finally, in view of Lyapunov stability theory and matrix inequalities, sufficient conditions for realizing bipartite containment control are obtained, and several simulations are provided to verify the validity of the above methods.
Funding: Supported in part by the National Natural Science Foundation of China (62522320, 92267108, 62173322), the Liaoning Revitalization Talents Program (XLYC2403062), and the Science and Technology Program of Liaoning Province (2023JH3/10200004, 2022JH25/10100005).
Abstract: The wireless cloud robotic system (WCRS), which fully integrates sensing, communication, computing, and control capabilities as an intelligent agent, is a promising way to achieve intelligent manufacturing thanks to its easy deployment and flexible expansion. However, high-precision control of a WCRS requires deterministic wireless communication, which is always challenging in a complex and dynamic radio space. This paper employs the reconfigurable intelligent surface (RIS) to establish a novel RIS-assisted WCRS architecture, in which the radio channel is controlled to achieve ultra-reliable, low-delay, and low-jitter communication for high-precision closed-loop motion control. Because control and communication are strongly coupled, they should be co-optimized. Fully considering the constraints of control input threshold, control delay deadline, beam phase, antenna power, and information distortion, we establish a stability maximization problem to jointly optimize control input compensation, RIS phase shift, and beamforming. A new jitter-oriented system stability objective with respect to control error and communication jitter is defined, and the closed-form expression of the control delay deadline is derived based on the Jensen inequality and a Lyapunov-Krasovskii functional. Owing to the time-varying nature and partial observability of the channel and robot states, we model the problem as a partially observable Markov decision process (POMDP). To solve this complex problem, we propose a multi-agent transfer reinforcement learning algorithm named LSTM-PPO-MATRL, in which LSTM-enhanced proximal policy optimization (PPO) approximates an optimal solution and option-guided policy transfer learning facilitates the learning process. Through centralized training and decentralized execution, LSTM-PPO-MATRL is validated by extensive experiments on MuJoCo tasks for both low-mobility and high-mobility robotic control scenarios. The results demonstrate that LSTM-PPO-MATRL not only realizes high learning efficiency but also supports low-delay, low-jitter communication for low-error control, achieving a 71.9% improvement in control accuracy and a 68.7% reduction in delay jitter compared with the PPO-MADRL baseline.
Funding: Supported by the National Natural Science Foundation of China (No. 52477097), the Guangdong Basic and Applied Basic Research Foundation (2023A1515240014), and the State Key Laboratory of Advanced Electromagnetic Technology (Grant No. AET 2024KF005).
Abstract: To maximize the profits of power grid operators (GOs), load aggregators (LAs), and electricity customers (ECs), this paper proposes a hierarchical demand response (HDR) framework that considers competing interactions based on the multi-agent deep deterministic policy gradient (MaDDPG). The ECs are divided into conventional ECs and electric vehicles (EVs), which are managed by an ECs agent (ECA) and an EV agent (EVA) to exploit the flexibility of the HDR framework. Thus, the HDR is a tri-layer model determined by five types of agents engaging in competing interactions to maximize their own profits. To address the limitations of mathematical expression and participation scale in the Stackelberg game within the HDR model, a dynamic interaction mechanism is adopted. Moreover, to handle the HDR involving various entities, MaDDPG develops multiple agents to simulate the dynamic competing interactions between the subjects as well as to solve the problem of continuous action control. Furthermore, MaDDPG adopts a soft target update and a prioritized experience replay method to ensure stable and effective training, and makes the exploration strategy comprehensive by using exploration noise. Simulation studies are conducted to verify the performance of MaDDPG with the dynamic interaction mechanism in dealing with multi-layer multi-agent continuous action control, compared with the double deep Q network (DDQN), deep Q network (DQN), and dueling DQN. Additionally, comparisons of the proposed HDR with price-based DR (PBDR) and incentive-based DR (IBDR) are analyzed to investigate the flexibility of the HDR.
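The soft target update mentioned above is a standard ingredient of DDPG-family algorithms: target network parameters track the learned network via Polyak averaging rather than a hard copy. A minimal sketch over plain parameter dictionaries (the tensor framework and layer names in the actual paper are not reproduced here):

```python
def soft_update(target_params, source_params, tau=0.005):
    """Polyak (soft) target update: target <- (1 - tau) * target + tau * source.

    A small tau makes the target network change slowly, which
    stabilizes the bootstrapped value targets during training.
    """
    for name in target_params:
        target_params[name] = (1.0 - tau) * target_params[name] + tau * source_params[name]
    return target_params
```

With `tau=1.0` this degenerates to a hard copy; typical values are on the order of 0.001 to 0.01.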
Funding: Supported in part by the National Natural Science Foundation of China (Grants 62203073 and 62573068) and the Natural Science Foundation of Chongqing, China (Grant CSTB2022NSCQMSX0577).
Abstract: Multi-agent reinforcement learning (MARL) has proven its effectiveness in cooperative multi-agent systems (MASs) but still faces the curse of dimensionality and limited learning efficiency. The main difficulty is caused by the strong inter-agent coupling inherent in a MARL problem, which has yet to be fully exploited in existing algorithms. In this work, we recognize a learning graph characterizing the dependence between individual rewards and individual policies. We then propose a graph-based reward aggregation (GRA) method, which utilizes the inherent coupling relationships among agents to eliminate redundant information. Specifically, GRA passes information among cooperating agents through graph attention networks to obtain aggregated rewards that contribute to fitting the value function, enabling each agent to learn a decentralized executable cooperation policy. In addition, we propose a variant of GRA, named GRA-decen, which achieves decentralized training and decentralized execution (DTDE) when each agent only has access to the information of a subset of agents during learning. We conduct experiments in different environments and demonstrate the practicality and scalability of our algorithms.
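The idea of aggregating rewards over a learning graph can be sketched with uniform neighbor weights standing in for the learned graph-attention coefficients (an assumption; the paper's GRA learns these weights):

```python
import numpy as np

def aggregate_rewards(rewards, adjacency):
    """Aggregate each agent's reward over its neighborhood on the learning graph.

    rewards   : vector of individual rewards, one per agent
    adjacency : 0/1 matrix of the learning graph (no self-loops)
    Uniform row-normalized weights replace the learned attention here.
    """
    A = np.asarray(adjacency, float) + np.eye(len(rewards))  # include self
    W = A / A.sum(axis=1, keepdims=True)                     # row-normalize
    return W @ np.asarray(rewards, float)
```

Each agent then fits its value function to the aggregated reward, which injects information about coupled teammates without requiring the full joint reward.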
Funding: Supported by the National Natural Science Foundation of China under Grants 61962023, 61562029, and 62466019.
Abstract: This paper presents an adaptive multi-agent coordination (AMAC) strategy suitable for complex scenarios, which only requires information exchange between neighbouring robots. Unlike traditional multi-agent coordination methods solved by neural dynamics, the proposed strategy displays greater flexibility, adaptability, and scalability. Furthermore, the AMAC strategy is reformulated as a time-varying complex-valued matrix equation. By introducing a dynamic error function, a fixed-time convergent zeroing neural network (FTCZNN) model is designed for the online solution of the AMAC strategy, with a theoretically derived upper bound on its convergence time. Finally, the effectiveness and applicability of the coordination control method are demonstrated by numerical simulations and physical experiments. Numerical results indicate that this method can reduce the formation error to the order of 10^(-6) within 1.8 s.
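The zeroing-neural-network principle behind such models is to define an error function e(t) = A x - b and impose dynamics that drive e to zero. A heavily simplified sketch for a constant real system (the paper handles time-varying complex-valued matrices with a fixed-time activation function, both omitted here):

```python
import numpy as np

def znn_solve(A, b, x0, lam=10.0, dt=1e-3, steps=2000):
    """Zeroing neural network for A x = b with linear activation.

    Imposing e_dot = -lam * e on e = A x - b gives, for constant A and b,
    x_dot = -lam * A^{-1} e, integrated here with forward Euler.
    """
    x = np.asarray(x0, float).copy()
    for _ in range(steps):
        e = A @ x - b                                  # error to be zeroed
        x = x - dt * lam * np.linalg.solve(A, e)       # exponential decay of e
    return x
```

With the linear activation shown, convergence is exponential; the fixed-time variant in the paper replaces it with a nonlinear activation so the convergence time bound is independent of the initial error.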
Abstract: This paper addresses the synchronization of follower agents' state vectors with that of a leader in high-order nonlinear multi-agent systems. The proposed low-complexity control scheme employs high-gain observers to estimate higher-order synchronization errors, enabling the controller to rely solely on relative output measurements. This approach significantly reduces the dependence on full-state information, which is often infeasible or costly to obtain in practical engineering applications. An output feedback control strategy is developed to overcome these limitations while ensuring robust and effective synchronization. Simulation results demonstrate the effectiveness of the proposed approach and validate the theoretical findings.
Funding: Supported by the National Natural Science Foundation of China Youth Fund (No. 62101579).
Abstract: Multi-agent systems (MAS), which consist of multiple interacting agents, are crucial in cyber-physical systems (CPS) because they improve system adaptability, efficiency, and robustness through parallel processing and collaboration. However, most existing unsupervised meta-learning methods are centralized and unsuitable for multi-agent systems in which data are stored in a distributed manner and are inaccessible to all agents. Meta-GMVAE, based on the variational autoencoder (VAE) and set-level variational inference, is a sophisticated unsupervised meta-learning model that improves generative performance by efficiently learning data representations across various tasks, increasing adaptability and reducing sample requirements. Inspired by these advances, we propose a novel distributed unsupervised meta-learning (DUML) framework based on Meta-GMVAE and a fusion strategy. Furthermore, we present a DUML algorithm based on the Gaussian mixture model (DUMLGMM), in which the parameters of the Gaussian mixture are solved by an expectation-maximization algorithm. Simulations on the Omniglot and Mini-ImageNet datasets show that DUMLGMM can match the performance of the corresponding centralized algorithm and outperform the non-cooperative algorithm.
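The expectation-maximization step referenced above alternates between computing component responsibilities (E-step) and re-estimating mixture parameters (M-step). A minimal one-dimensional sketch, with quantile-based initialization as an assumption (the paper's distributed fusion strategy and the GMVAE latent space are not reproduced):

```python
import numpy as np

def em_gmm_1d(x, K=2, iters=50):
    """Fit a K-component 1-D Gaussian mixture by expectation-maximization."""
    x = np.asarray(x, float)
    mu = np.quantile(x, np.linspace(0.1, 0.9, K))   # spread initial means
    sig = np.full(K, x.std() + 1e-6)
    pi = np.full(K, 1.0 / K)
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        dens = np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
        r = pi * dens
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and standard deviations
        n = r.sum(axis=0)
        pi = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n) + 1e-6
    return pi, mu, sig
```

In the distributed setting each agent would run such updates locally and fuse the resulting parameters, which is the role of the fusion strategy in DUMLGMM.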
Funding: Supported by the National Natural Science Foundation of China (62463007, 62463005), the Natural Science Foundation of Hainan Province (625RC710, 625MS047), the System Control and Information Processing Education Ministry Key Laboratory Open Funding, China (Scip20240119), and the Science Research Funding of Hainan University, China (KYQD(ZR)22180, KYQD(ZR)23180).
Abstract: This paper focuses on the leader-following positive consensus problem of heterogeneous switched multi-agent systems. First, a state-feedback controller with dynamic compensation is introduced to achieve positive consensus under average dwell-time switching. Sufficient conditions are then derived to guarantee positive consensus. The gain matrices of the control protocol are described using a matrix decomposition approach, and the corresponding computational complexity is reduced by resorting to linear programming and co-positive Lyapunov functions. Finally, two numerical examples are provided to illustrate the results obtained.
Abstract: In recent years, researchers have leveraged single-agent reinforcement learning to boost educational outcomes and deliver personalized interventions; yet this paradigm provides no capacity for inter-agent interaction. Multi-agent reinforcement learning (MARL) overcomes this limitation by allowing several agents to learn simultaneously within a shared environment, each choosing actions that maximize its own or the group's rewards. By explicitly modeling and exploiting agent-to-agent dynamics, MARL can align those interactions with pedagogical goals such as peer tutoring, collaborative problem-solving, or gamified competition, thus opening richer avenues for adaptive and socially informed learning experiences. This survey investigates the impact of MARL on educational outcomes by examining evidence of its effectiveness in enhancing learner performance, engagement, and equity, and in reducing teacher workload compared with single-agent or traditional approaches. It explores the educational domains and pedagogical problems addressed by MARL, identifies the algorithmic families used, and analyzes their influence on learning. The review also assesses experimental settings and evaluation metrics to determine ecological validity, and outlines current challenges and future research directions in applying MARL to education.
Abstract: With the advent of sixth-generation mobile communications (6G), space-air-ground integrated networks have become mainstream. This paper focuses on collaborative scheduling for mobile edge computing (MEC) under a three-tier heterogeneous architecture composed of mobile devices, unmanned aerial vehicles (UAVs), and macro base stations (BSs). This scenario typically faces fast channel fading, dynamic computational loads, and energy constraints, whereas classical queuing-theoretic or convex-optimization approaches struggle to yield robust solutions in highly dynamic settings. To address this issue, we formulate a multi-agent Markov decision process (MDP) for an air-ground-fused MEC system, unify link selection, bandwidth/power allocation, and task offloading into a continuous action space, and propose a joint scheduling strategy based on an improved MATD3 algorithm. The improvements include alternating layer normalization (ALN) in the actor to suppress gradient variance, residual orthogonalization (RO) in the critic to reduce the correlation between the twin Q-value estimates, and a dynamic-temperature reward to enable adaptive trade-offs during training. On a multi-user, dual-link simulation platform, we conduct ablation and baseline comparisons. The results reveal that the proposed method has better convergence and stability. Compared with MADDPG, TD3, and DSAC, our algorithm achieves more robust performance across key metrics.
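The twin Q-value estimates mentioned above come from the TD3 family on which MATD3 builds: the bootstrap target takes the minimum of two critics to curb overestimation bias. A minimal scalar sketch (the actual algorithm operates on batched tensors with target-policy smoothing, which is omitted):

```python
def td3_target(reward, gamma, q1_next, q2_next, done):
    """Clipped double-Q bootstrap target.

    Taking the minimum of the twin critic estimates makes the target
    pessimistic, counteracting the overestimation of a single critic.
    """
    return reward + gamma * (1.0 - float(done)) * min(q1_next, q2_next)
```

Both critics regress toward this shared target, while the actor is updated against only one of them.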
Funding: Co-supported by the National Natural Science Foundation of China (Nos. 72371052 and 71871042).
Abstract: Addressing optimal confrontation methods in multi-agent attack-defense scenarios is a complex challenge. Multi-agent reinforcement learning (MARL) provides an effective framework for tackling sequential decision-making problems, significantly enhancing swarm intelligence in maneuvering. However, applying MARL to unmanned swarms presents two primary challenges. First, defensive agents must balance autonomy with collaboration under limited perception while coordinating against adversaries. Second, current algorithms aim to maximize global or individual rewards, making them sensitive to fluctuations in enemy strategies and environmental changes, especially when rewards are sparse. To tackle these issues, we propose an algorithm of Multi-Agent Reinforcement Learning with Layered Autonomy and Collaboration (MARL-LAC) for collaborative confrontations. This algorithm integrates dual twin critics to mitigate the high variance associated with policy gradients. Furthermore, MARL-LAC employs layered autonomy and collaboration to address multi-objective problems, specifically learning a global reward function for the swarm alongside local reward functions for individual defensive agents. Experimental results demonstrate that MARL-LAC enhances decision-making and collaborative behaviors among agents, outperforming existing algorithms and emphasizing the importance of layered autonomy and collaboration in multi-agent systems. The observed adversarial behaviors show that agents using MARL-LAC effectively maintain cohesive formations that conceal their intentions by confusing the offensive agent while successfully encircling the target.
Funding: The National Natural Science Foundation of China (W2431048), the Science and Technology Research Program of Chongqing Municipal Education Commission, China (KJZDK202300807), and the Chongqing Natural Science Foundation, China (CSTB2024NSCQQCXMX0052).
Abstract: This paper addresses the consensus problem of nonlinear multi-agent systems subject to external disturbances and uncertainties under denial-of-service (DoS) attacks. Firstly, an observer-based state feedback control method is employed to achieve secure control by estimating the system's state in real time. Secondly, by combining a memory-based adaptive event-triggered mechanism with neural networks, the nonlinear terms in the networked system are approximated while system resources are efficiently conserved. Finally, based on a two-degree-of-freedom model of a vehicle affected by crosswinds, a multi-unmanned-ground-vehicle (Multi-UGV) system is constructed to validate the effectiveness of the proposed method. Simulation results show that the proposed control strategy can effectively handle external disturbances such as crosswinds in practical applications, ensuring the stability and reliable operation of the Multi-UGV system.
Abstract: Multimodal dialogue systems often fail to maintain coherent reasoning over extended conversations and suffer from hallucination due to limited context-modeling capabilities. Current approaches struggle with cross-modal alignment, temporal consistency, and robust handling of noisy or incomplete inputs across multiple modalities. We propose Multi-Agent Chain of Thought (CoT), a novel multi-agent chain-of-thought reasoning framework in which specialized agents for the text, vision, and speech modalities collaboratively construct shared reasoning traces through inter-agent message passing and consensus voting mechanisms. Our architecture incorporates self-reflection modules, conflict resolution protocols, and dynamic rationale alignment to enhance consistency, factual accuracy, and user engagement. The framework employs a hierarchical attention mechanism with cross-modal fusion and implements adaptive reasoning depth based on dialogue complexity. Comprehensive evaluations on Situated Interactive Multi-Modal Conversations (SIMMC) 2.0, VisDial v1.0, and newly introduced challenging scenarios demonstrate statistically significant improvements in grounding accuracy (p<0.01), chain-of-thought interpretability, and robustness to adversarial inputs compared with state-of-the-art monolithic transformer baselines and existing multi-agent approaches.
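The consensus-voting step among modality agents can be reduced to its simplest form: each agent proposes an answer and the majority wins. This sketch uses plain majority voting with first-seen tie-breaking as a stand-in for the paper's full consensus mechanism, which also weighs reasoning traces:

```python
from collections import Counter

def consensus_vote(agent_answers):
    """Return the answer proposed by the most agents.

    Ties resolve to the answer encountered first, since Counter.most_common
    preserves insertion order among equal counts.
    """
    return Counter(agent_answers).most_common(1)[0][0]
```

A production system would weight each vote by the agent's confidence or by agreement between its rationale and the shared reasoning trace.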
Funding: Funded by the Deanship of Scientific Research at Northern Border University, Arar, Saudi Arabia, under project number NBU-FFR-2026-2441-02.
Abstract: This paper presents Dual Adaptive Neural Topology (Dual ANT), a distributed dual-network meta-adaptive framework that enhances ant-colony-based multi-agent coordination with online introspection, adaptive parameter control, and privacy-preserving interactions. This approach improves standard ant colony optimization (ACO) with two lightweight neural components: a forward network that estimates swarm efficiency in real time and an inverse network that converts these descriptors into parameter adaptations. To preserve the privacy of individual trajectories in shared pheromone maps, we introduce a locally differentially private pheromone update mechanism that adds calibrated noise to each agent's pheromone deposit while preserving the efficacy of the global pheromone signal. The resulting system enables agents to dynamically and autonomously adapt their coordination strategies under challenging and dynamic conditions, including varying obstacle layouts, uncertain target locations, and time-varying disturbances. Extensive simulations of large grid-based search tasks demonstrated that Dual ANT achieved faster convergence, higher robustness, and improved scalability compared with advanced baselines such as Multi-Strategy ACO and Hierarchical ACO. The meta-adaptive feedback loop compensates for the performance degradation caused by privacy noise and prevents premature stagnation by triggering Lévy-flight exploration only when necessary.
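A locally differentially private pheromone deposit can be sketched with the Laplace mechanism: each agent perturbs its own deposit before it touches the shared map, so the map never records an exact trajectory. The cell indexing, epsilon budget, and per-update sensitivity below are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

def private_deposit(pheromone, cells, amount, epsilon, rng):
    """Locally differentially private pheromone update (Laplace mechanism).

    If one update changes the deposit by at most `amount` (the sensitivity),
    Laplace noise of scale amount/epsilon gives epsilon-local DP per cell.
    """
    for c in cells:
        noisy = amount + rng.laplace(0.0, amount / epsilon)
        pheromone[c] = max(pheromone[c] + noisy, 0.0)  # clamp to non-negative
    return pheromone
```

A smaller epsilon means stronger privacy but noisier pheromone trails, which is the degradation the meta-adaptive feedback loop in Dual ANT is said to compensate for.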
Funding: Supported in part by the National Natural Science Foundation of China under Grant 6237319 and in part by the Postgraduate Research and Practice Innovation Program of Jiangsu Province under Grant KYCX230479.
Abstract: Formation control in multi-agent systems has become a critical area of interest due to its wide-ranging applications in robotics, autonomous transportation, and surveillance. While various studies have explored distributed cooperative control, this review focuses on the theoretical foundations and recent developments in formation control strategies. The paper categorizes and analyzes key formation types, including formation maintenance, group or cluster formation, bipartite formations, event-triggered formations, finite-time convergence, and constrained formations. A significant portion of the review addresses formation control under constrained dynamics, presenting both model-based and model-free approaches that consider practical limitations such as actuator bounds, communication delays, and nonholonomic constraints. Additionally, the paper discusses emerging trends, including the integration of event-driven mechanisms and AI-enhanced coordination strategies. Comparative evaluations highlight the trade-offs among various methodologies regarding scalability, robustness, and real-world feasibility. Practical implementations are reviewed across diverse platforms, and the current achievements and unresolved challenges in the field are identified. The paper concludes by outlining promising research directions, such as adaptive control for dynamic environments, energy-efficient coordination, and learning-based control under uncertainty. This review synthesizes the current state of the art and provides a road map for future investigation, making it a valuable reference for researchers and practitioners aiming to advance formation control in multi-agent systems.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 52225212, 52272418, and U22A20100) and the National Key Research and Development Program of China (Grant No. 2022YFB2503302).
Abstract: The development of chassis active-safety control technology has improved vehicle stability under extreme conditions. However, its cross-system and multi-functional characteristics make cooperative goals difficult for the controller to achieve. In addition, the chassis system, with its high complexity, numerous subsystems, and strong coupling, leads to low computing efficiency and poor control performance. Therefore, this paper proposes a scenario-driven hybrid distributed model predictive control algorithm with a variable control topology. The algorithm divides multiple stability regions based on the vehicle's β−γ phase plane, forming a mapping between the control structure and the vehicle's state. A control input fusion mechanism within the transition domain is designed to mitigate the system-state oscillation and control-input jitter caused by switching control structures. Then, a distributed state-space equation with state-coupling and input-coupling characteristics is constructed, and a weighted local-agent cost function in quadratic programming form is derived. Through cost coupling, local agents can coordinate global performance goals. Finally, through Simulink/CarSim joint simulation and a hardware-in-the-loop (HIL) test, the proposed algorithm is validated to improve vehicle stability while ensuring trajectory-tracking accuracy, with good applicability to multi-objective coordinated control. This paper combines the advantages of distributed MPC and decentralized MPC, achieving a balance between approximating the globally optimal results and solution efficiency.