Abstract: Robot learning in unstructured environments has proven to be an extremely challenging problem, mainly because of the many uncertainties always present in the real world. Human beings, on the other hand, seem to cope very well with uncertain and unpredictable environments, often relying on perception-based information. Furthermore, human beings can utilize perceptions to focus their learning on those parts of the perception-action space that are actually relevant to the task. We therefore conduct research aimed at improving robot learning through the incorporation of both perception-based and measurement-based information. To this end, a fuzzy reinforcement learning (FRL) agent is proposed in this paper. Based on a neuro-fuzzy architecture, different kinds of information can be incorporated into the FRL agent to initialise its action network, critic network, and evaluation feedback module so as to accelerate its learning. By making use of the global optimisation capability of genetic algorithms (GAs), a GA-based FRL (GAFRL) agent is presented to solve the local-minima problem in traditional actor-critic reinforcement learning. Conversely, the prediction capability of the critic network allows the GA to perform a more effective global search. Different GAFRL agents are constructed and verified using a simulation model of a physical biped robot. The simulation analysis shows that the biped learning rate for dynamic balance can be improved by incorporating perception-based information on biped balancing and walking evaluation. Such biped robots may find application in ocean exploration, detection, and sea rescue, as well as in military maritime activities.
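The GA-over-actor-critic idea in the abstract above can be illustrated with a minimal sketch (a toy stand-in, not the authors' implementation): a genetic algorithm searches the actor's weight space globally, using the critic's value prediction as the fitness function, which is how a GAFRL-style agent can escape the local minima that trap gradient-only actor-critic updates. The `critic_value` function and all hyperparameters below are illustrative assumptions.

```python
import random

def critic_value(weights):
    """Stand-in critic: predicts the expected return of an actor parameterised
    by `weights`. Here a toy quadratic surface with its optimum at (2, -1),
    plus a little noise to mimic an imperfect learned critic."""
    x, y = weights
    return -(x - 2.0) ** 2 - (y + 1.0) ** 2 + random.uniform(-0.01, 0.01)

def evolve(pop_size=30, generations=60, sigma=0.3):
    """GA search over actor weights, with the critic's prediction as fitness."""
    pop = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=critic_value, reverse=True)   # rank by critic fitness
        parents = pop[: pop_size // 2]             # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(wa + wb) / 2 + random.gauss(0, sigma)  # crossover + mutation
                     for wa, wb in zip(a, b)]
            children.append(child)
        pop = parents + children
    return max(pop, key=critic_value)

best = evolve()
print(best)  # should land near the critic's optimum, (2, -1)
```

In a full GAFRL agent the fitness evaluation would query the trained critic network rather than a closed-form function, so the GA can search globally without extra environment rollouts.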
Funding: Supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. RS-2022-00155885, Artificial Intelligence Convergence Innovation Human Resources Development (Hanyang University ERICA)); by the National Natural Science Foundation of China under Grant No. 61971264; and by the National Natural Science Foundation of China/Research Grants Council Collaborative Research Scheme under Grant No. 62261160390.
Abstract: Owing to the fading characteristics of wireless channels and the burstiness of data traffic, designing effective algorithms to deal with congestion in ad-hoc networks remains an open and challenging problem. In this paper, we focus on enabling congestion control to minimize network transmission delays through flexible power control. To solve the congestion problem effectively, we propose a distributed cross-layer scheduling algorithm empowered by graph-based multi-agent deep reinforcement learning. The transmit power is adaptively adjusted in real time by our algorithm based only on local information (i.e., channel state information and queue length) and local communication (i.e., information exchanged with neighbors). Moreover, the training complexity of the algorithm is low owing to regional cooperation based on a graph attention network. In the evaluation, we show that our algorithm can reduce the transmission delay of data flows under severe signal interference and drastically changing channel states, and we demonstrate its adaptability and stability across different topologies. The method is general and can be extended to various types of topologies.
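The local decision rule described above can be sketched as follows. This is a hypothetical stand-in for the trained networks: the state fields, scoring function, and policy are assumptions, not the paper's model. Each node picks its transmit power from its own channel gain and queue length plus one round of neighbour exchange, aggregated with attention weights in the spirit of a graph attention network.

```python
import math

def attention_weights(own, neighbours):
    """Softmax over a similarity score between the node's own state and each
    neighbour's exchanged state (toy stand-in for learned GAT attention)."""
    scores = [-(abs(own["queue"] - n["queue"]) + abs(own["gain"] - n["gain"]))
              for n in neighbours]
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def choose_power(own, neighbours, p_max=1.0):
    """Toy policy: raise power with the node's own backlog, but back off when
    the attention-weighted neighbour backlog is high, limiting interference
    on already-congested neighbours."""
    w = attention_weights(own, neighbours)
    nbr_queue = sum(wi * n["queue"] for wi, n in zip(w, neighbours))
    raw = own["queue"] / (1.0 + nbr_queue)
    return max(0.0, min(p_max, raw * own["gain"]))   # clip to the power budget

node = {"queue": 4.0, "gain": 0.8}
nbrs = [{"queue": 1.0, "gain": 0.9}, {"queue": 6.0, "gain": 0.5}]
print(choose_power(node, nbrs))   # a clipped power level in [0, 1]
```

The key property the sketch preserves is locality: every input to `choose_power` is either the node's own observation or a message from a one-hop neighbour, so the rule runs fully distributed.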
Funding: Supported in part by the National Natural Science Foundation of China (62106053); the Guangxi Natural Science Foundation (2020GXNSFBA159042); the Innovation Project of Guangxi Graduate Education (YCSW2023478); the Guangxi Education Department Program (2021KY0347); and the Doctoral Fund of Guangxi University of Science and Technology (XiaoKe Bo19Z33).
Abstract: In the face of network attacks, the cloud boundary network environment is characterized by a passive defense strategy, discrete defense actions, and delayed defense feedback, and it ignores the influence of the external environment on defense decisions, resulting in poor defense effectiveness. This paper therefore proposes an active defense model and decision method for cloud boundary networks based on intelligent-agent reinforcement learning. It designs the network structure of the intelligent agent's attack-defense game and depicts the attack-defense process of the cloud boundary network; constructs the observation space and action space for agent reinforcement learning in an incomplete-information environment and portrays the interaction between the agent and the environment; and establishes a reward mechanism based on attack-defense gain to encourage intelligent agents to learn more effective defense strategies. The designed active defense decision agent, based on deep reinforcement learning, can solve the problems of boundary dynamics, interaction lag, and control dispersion in the defense decision process of cloud boundary networks, and it improves the autonomy and continuity of defense decisions.
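The gain-based reward mechanism above can be illustrated with a tabular toy. The attack and defence actions and the payoff numbers are invented for illustration, and the tabular Q-learning agent is a stand-in for the paper's deep reinforcement learning agent over a far richer game. The defender's reward is the defence gain minus the defence cost, and the agent learns which defence action to take against each observed attack state.

```python
import random

ATTACKS = ["scan", "intrusion"]
DEFENCES = ["monitor", "isolate"]

# REWARD[attack][defence] = defence gain minus defence cost (toy numbers):
# isolation is costly overkill for a mere scan, but pays off for an intrusion.
REWARD = {
    "scan":      {"monitor": 1.0,  "isolate": -0.5},
    "intrusion": {"monitor": -1.0, "isolate": 2.0},
}

def train(episodes=5000, alpha=0.1, epsilon=0.1):
    """Tabular Q-learning: epsilon-greedy action choice, incremental update
    of Q(attack, defence) toward the observed gain-minus-cost reward."""
    q = {a: {d: 0.0 for d in DEFENCES} for a in ATTACKS}
    for _ in range(episodes):
        attack = random.choice(ATTACKS)            # environment picks an attack
        if random.random() < epsilon:              # explore
            defence = random.choice(DEFENCES)
        else:                                      # exploit current estimate
            defence = max(q[attack], key=q[attack].get)
        r = REWARD[attack][defence]
        q[attack][defence] += alpha * (r - q[attack][defence])
    return q

policy = train()
for attack in ATTACKS:
    print(attack, "->", max(policy[attack], key=policy[attack].get))
```

Because the reward encodes gain minus cost, the learned policy avoids over-defending cheap probes while still escalating against genuine intrusions, which is the behaviour the reward mechanism in the paper is designed to encourage.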