Journal Articles
1,129 articles found
1. Graph-based multi-agent reinforcement learning for collaborative search and tracking of multiple UAVs (Cited by 2)
Authors: Bocheng ZHAO, Mingying HUO, Zheng LI, Wenyu FENG, Ze YU, Naiming QI, Shaohai WANG. Chinese Journal of Aeronautics, 2025, No. 3, pp. 109-123 (15 pages).
This paper investigates the challenges associated with Unmanned Aerial Vehicle (UAV) collaborative search and target tracking in dynamic and unknown environments characterized by a limited field of view. The primary objective is to explore the unknown environments to locate and track targets effectively. To address this problem, we propose a novel Multi-Agent Reinforcement Learning (MARL) method based on Graph Neural Networks (GNN). First, a method is introduced for encoding continuous-space multi-UAV problem data into spatial graphs, which establish essential relationships among agents, obstacles, and targets. Second, a Graph AttenTion network (GAT) model is presented, which focuses exclusively on adjacent nodes, learns attention weights adaptively, and allows agents to better process information in dynamic environments. Reward functions are specifically designed to tackle exploration challenges in environments with sparse rewards. A framework integrating centralized training and distributed execution facilitates model improvement. Simulation results show that the proposed method outperforms existing MARL methods in search rate and tracking performance with fewer collisions. The experiments show that the proposed method can be extended to applications with a larger number of agents, providing a potential solution to the challenging problem of multi-UAV autonomous tracking in dynamic unknown environments.
Keywords: Unmanned aerial vehicle (UAV); Multi-agent reinforcement learning (MARL); Graph attention network (GAT); Tracking; Dynamic and unknown environment
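As an illustrative aside to this entry: the neighbour-restricted attention step that a GAT model performs can be sketched in a few lines of NumPy. The shapes, random weights, and 3-agent chain adjacency below are hypothetical stand-ins, not values from the paper.

```python
import numpy as np

def gat_attention(h, adj, W, a):
    """Single graph-attention head (illustrative): each node attends
    only to its adjacent nodes, as in a GAT layer."""
    z = h @ W                                   # project node features
    n = z.shape[0]
    scores = np.full((n, n), -np.inf)           # -inf masks non-neighbours
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                e = np.concatenate([z[i], z[j]]) @ a
                scores[i, j] = np.maximum(0.2 * e, e)   # LeakyReLU
    # softmax over each node's neighbourhood
    alpha = np.exp(scores - scores.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)
    return alpha @ z                            # attention-weighted aggregation

# tiny example: 3 agents on a chain, self-loops included
rng = np.random.default_rng(0)
h = rng.normal(size=(3, 4))                     # per-agent features
adj = np.array([[1, 1, 0], [1, 1, 1], [0, 1, 1]])
W = rng.normal(size=(4, 8))
a = rng.normal(size=(16,))
out = gat_attention(h, adj, W, a)
print(out.shape)  # (3, 8)
```

Masking non-neighbours with negative infinity before the softmax is what makes the attention "focus exclusively on adjacent nodes", as the abstract describes.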
2. Rule-Guidance Reinforcement Learning for Lane Change Decision-making: A Risk Assessment Approach (Cited by 1)
Authors: Lu Xiong, Zhuoren Li, Danyang Zhong, Puhang Xu, Chen Tang. Chinese Journal of Mechanical Engineering, 2025, No. 2, pp. 344-359 (16 pages).
To solve the problems of poor security guarantees and insufficient training efficiency in conventional reinforcement learning methods for decision-making, this study proposes a hybrid framework that combines deep reinforcement learning with rule-based decision-making methods. A risk assessment model for lane-change maneuvers, considering uncertain predictions of surrounding vehicles, is established as a safety filter to improve learning efficiency while correcting dangerous actions for safety enhancement. On this basis, a Risk-fused DDQN is constructed utilizing the model-based risk assessment and supervision mechanism. The proposed reinforcement learning algorithm sets up a separate experience buffer for dangerous trials and punishes such actions, which is shown to improve sampling efficiency and training outcomes. Compared with conventional DDQN methods, the proposed algorithm improves the convergence value of cumulative reward by 7.6% and 2.2% in the two constructed simulation scenarios and reduces the number of training episodes by 52.2% and 66.8%, respectively. The success rate of lane change is improved by 57.3% while the time headway is increased by at least 16.5% in real vehicle tests, confirming the higher training efficiency, scenario adaptability, and security of the proposed Risk-fused DDQN.
Keywords: Autonomous driving; Reinforcement learning; Decision-making; Risk assessment; Safety filter
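The separate experience buffer for dangerous trials described in this abstract can be sketched as follows. The buffer sizes, the fixed sampling ratio, and the penalty value are hypothetical illustration choices, not the paper's settings.

```python
import random
from collections import deque

class DualReplayBuffer:
    """Illustrative sketch of a separate 'dangerous-trial' buffer:
    risky transitions are stored apart, penalised, and drawn with a
    fixed ratio so punished actions stay represented in each batch."""
    def __init__(self, capacity=10000, danger_ratio=0.25):
        self.safe = deque(maxlen=capacity)
        self.danger = deque(maxlen=capacity)
        self.danger_ratio = danger_ratio

    def push(self, transition, dangerous, penalty=-10.0):
        s, a, r, s2, done = transition
        if dangerous:
            # punish the dangerous action before storing it
            self.danger.append((s, a, r + penalty, s2, done))
        else:
            self.safe.append(transition)

    def sample(self, batch_size):
        k = min(int(batch_size * self.danger_ratio), len(self.danger))
        batch = random.sample(list(self.danger), k)
        batch += random.sample(list(self.safe),
                               min(batch_size - k, len(self.safe)))
        return batch

buf = DualReplayBuffer()
for t in range(100):
    buf.push((t, 0, 1.0, t + 1, False), dangerous=(t % 10 == 0))
batch = buf.sample(32)
print(len(batch))  # 32
```

Keeping dangerous transitions in their own buffer prevents them from being crowded out of a single replay memory, which is the sampling-efficiency effect the abstract reports.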
3. Deep reinforcement learning based integrated evasion and impact hierarchical intelligent policy of exo-atmospheric vehicles (Cited by 1)
Authors: Leliang REN, Weilin GUO, Yong XIAN, Zhenyu LIU, Daqiao ZHANG, Shaopeng LI. Chinese Journal of Aeronautics, 2025, No. 1, pp. 409-426 (18 pages).
Exo-atmospheric vehicles are constrained by limited maneuverability, which leads to a contradiction between evasive maneuvering and precision strike. To address the problem of Integrated Evasion and Impact (IEI) decision-making under multi-constraint conditions, a hierarchical intelligent decision-making method based on Deep Reinforcement Learning (DRL) is proposed. First, an intelligent decision-making framework combining a DRL evasion decision with an impact prediction guidance decision was established: it takes the impact-point deviation correction ability as the constraint and the maximum miss distance as the objective, and effectively solves the problem of poor decision-making performance caused by the large IEI decision space. Second, to solve the sparse reward problem faced by evasion decision-making, a hierarchical decision-making method consisting of a maneuver timing decision and a maneuver duration decision was proposed, and the corresponding Markov Decision Process (MDP) was designed. A detailed simulation experiment was designed to analyze the advantages and computational complexity of the proposed method. Simulation results show that the proposed model has good performance and low computational resource requirements. The minimum miss distance is 21.3 m under the condition of guaranteeing impact-point accuracy, and the single decision-making time is 4.086 ms on an STM32F407 single-chip microcomputer, which demonstrates engineering application value.
Keywords: Exo-atmospheric vehicle; Integrated evasion and impact; Deep reinforcement learning; Hierarchical intelligent policy; Single-chip microcomputer; Miss distance
4. Multi-QoS routing algorithm based on reinforcement learning for LEO satellite networks (Cited by 1)
Authors: ZHANG Yifan, DONG Tao, LIU Zhihui, JIN Shichao. Journal of Systems Engineering and Electronics, 2025, No. 1, pp. 37-47 (11 pages).
Low Earth orbit (LEO) satellite networks exhibit distinct characteristics, e.g., limited resources of individual satellite nodes and dynamic network topology, which bring many challenges for routing algorithms. To satisfy the quality of service (QoS) requirements of various users, it is critical to research efficient routing strategies that fully utilize satellite resources. This paper proposes a multi-QoS information optimized routing algorithm based on reinforcement learning for LEO satellite networks, which prioritizes high-assurance services under limited satellite resources while considering the load-balancing performance of the network for low-assurance services, ensuring full and effective utilization of satellite resources. An auxiliary path search algorithm is proposed to accelerate the convergence of the satellite routing algorithm. Simulation results show that the generated routing strategy can promptly process and fully meet the QoS demands of high-assurance services while effectively improving the load-balancing performance of the links.
Keywords: Low Earth orbit (LEO) satellite network; Reinforcement learning; Multi-quality of service (QoS); Routing algorithm
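The idea of learning next-hop choices with reinforcement learning, as in this entry, can be illustrated with tabular Q-learning on a toy 4-node topology. The node labels, link costs, and hyperparameters below are invented for the sketch; the paper's algorithm, state space, and QoS handling are far richer.

```python
import random

# Toy topology: node 0 is the source, node 3 the destination.
# Negative link costs stand in for per-hop resource consumption.
neighbors = {0: [1, 2], 1: [3], 2: [3], 3: []}
link_cost = {(0, 1): -1.0, (0, 2): -2.0, (1, 3): -1.0, (2, 3): -1.0}
DST, ALPHA, GAMMA, EPS = 3, 0.1, 0.9, 0.1
Q = {(n, h): 0.0 for n in neighbors for h in neighbors[n]}

random.seed(1)
for _ in range(2000):                       # episodes of packet forwarding
    node = 0
    while node != DST:
        hops = neighbors[node]
        if random.random() < EPS:           # epsilon-greedy exploration
            nxt = random.choice(hops)
        else:
            nxt = max(hops, key=lambda h: Q[(node, h)])
        r = link_cost[(node, nxt)] + (10.0 if nxt == DST else 0.0)
        future = max((Q[(nxt, h)] for h in neighbors[nxt]), default=0.0)
        Q[(node, nxt)] += ALPHA * (r + GAMMA * future - Q[(node, nxt)])
        node = nxt

# the learned policy should prefer the cheaper path 0 -> 1 -> 3
print(Q[(0, 1)] > Q[(0, 2)])  # True
```

Each satellite (node) only needs Q-values over its own neighbours, which is what makes this family of routing algorithms attractive under the limited per-node resources the abstract mentions.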
5. A Survey of Cooperative Multi-agent Reinforcement Learning for Multi-task Scenarios (Cited by 1)
Authors: Jiajun CHAI, Zijie ZHAO, Yuanheng ZHU, Dongbin ZHAO. Artificial Intelligence Science and Engineering, 2025, No. 2, pp. 98-121 (24 pages).
Cooperative multi-agent reinforcement learning (MARL) is a key technology for enabling cooperation in complex multi-agent systems. It has achieved remarkable progress in areas such as gaming, autonomous driving, and multi-robot control. Empowering cooperative MARL with multi-task decision-making capabilities is expected to further broaden its application scope. In multi-task scenarios, cooperative MARL algorithms need to address three types of multi-task problems: reward-related multi-task, arising from different reward functions; multi-domain multi-task, caused by differences in state and action spaces or state transition functions; and scalability-related multi-task, resulting from dynamic variation in the number of agents. Most existing studies focus on scalability-related multi-task problems. However, with the increasing integration between large language models (LLMs) and multi-agent systems, a growing number of LLM-based multi-agent systems have emerged, enabling more complex multi-task cooperation. This paper provides a comprehensive review of the latest advances in this field. By combining multi-task reinforcement learning with cooperative MARL, we categorize and analyze the three major types of multi-task problems under multi-agent settings, offering fine-grained classifications and summarizing key insights for each. In addition, we summarize commonly used benchmarks and discuss future research directions, which hold promise for further enhancing the multi-task cooperation capabilities of multi-agent systems and expanding their practical applications in the real world.
Keywords: Multi-task; Multi-agent reinforcement learning; Large language models
6. Intelligent path planning for small modular reactors based on improved reinforcement learning
Authors: DONG Yun-Feng, ZHOU Wei-Zheng, WANG Zhe-Zheng, ZHANG Xiao. Journal of Sichuan University (Natural Science Edition), 2025, No. 4, pp. 1006-1014 (9 pages).
Small modular reactors (SMR) are at the research forefront of nuclear reactor technology. The advancement of intelligent control technologies paves a new way for the design and construction of unmanned SMRs. The autonomous control process of an SMR can be divided into three stages: state diagnosis, autonomous decision-making, and coordinated control. In this paper, the autonomous state recognition and task planning of unmanned SMRs are investigated. An operating condition recognition method based on a knowledge base of SMR operation is proposed using artificial neural network (ANN) technology, which constructs a basis for the state judgment of intelligent reactor control path planning. An improved reinforcement learning path planning algorithm is utilized to implement the path transfer decision-making. This algorithm performs condition transitions with minimal cost under specified modes. In summary, full-range intelligent decision-planning technology for SMR control paths is realized, providing a theoretical basis for the design and construction of unmanned SMRs in the future.
Keywords: Small modular reactor; Operating condition recognition; Path planning; Reinforcement learning
7. Pathfinder: Deep Reinforcement Learning-Based Scheduling for Multi-Robot Systems in Smart Factories with Mass Customization
Authors: Chenxi Lyu, Chen Dong, Qiancheng Xiong, Yuzhong Chen, Qian Weng, Zhenyi Chen. Computers, Materials & Continua, 2025, No. 8, pp. 3371-3391 (21 pages).
The rapid advancement of Industry 4.0 has revolutionized manufacturing, shifting production from centralized control to decentralized, intelligent systems. Smart factories are now expected to achieve high adaptability and resource efficiency, particularly in mass customization scenarios where production schedules must accommodate dynamic and personalized demands. To address the challenges of dynamic task allocation, uncertainty, and real-time decision-making, this paper proposes Pathfinder, a deep reinforcement learning-based scheduling framework. Pathfinder models scheduling data through three key matrices: execution time (the time required for a job to complete), completion time (the actual time at which a job is finished), and efficiency (the performance of executing a single job). By leveraging neural networks, Pathfinder extracts essential features from these matrices, enabling intelligent decision-making in dynamic production environments. Unlike traditional approaches with fixed scheduling rules, Pathfinder dynamically selects from ten diverse scheduling rules, optimizing decisions based on real-time environmental conditions. To further enhance scheduling efficiency, a specialized reward function is designed to support dynamic task allocation and real-time adjustments. This function helps Pathfinder continuously refine its scheduling strategy, improving machine utilization and minimizing job completion times. Through reinforcement learning, Pathfinder adapts to evolving production demands, ensuring robust performance in real-world applications. Experimental results demonstrate that Pathfinder outperforms traditional scheduling approaches, offering improved coordination and efficiency in smart factories. By integrating deep reinforcement learning, adaptable scheduling strategies, and an innovative reward function, Pathfinder provides an effective solution to the growing challenges of multi-robot job scheduling in mass customization environments.
Keywords: Smart factory; Customization; Deep reinforcement learning; Production scheduling; Multi-robot system; Task allocation
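The core idea in Pathfinder, choosing among dispatch rules per decision step rather than fixing one, can be illustrated with a toy single-machine scheduler. The three rules and the deterministic job list below are simplified stand-ins for the paper's ten rules, multi-robot setting, and learned policy.

```python
# Illustrative dispatch-rule comparison on a single machine.
RULES = {
    "FIFO": lambda jobs: jobs[0],                               # first in queue
    "SPT": lambda jobs: min(jobs, key=lambda j: j["exec_time"]),  # shortest first
    "LPT": lambda jobs: max(jobs, key=lambda j: j["exec_time"]),  # longest first
}

def schedule(jobs, rule_name):
    """Apply one dispatch rule repeatedly; return total completion time."""
    queue, clock, total = list(jobs), 0.0, 0.0
    while queue:
        job = RULES[rule_name](queue)
        queue.remove(job)
        clock += job["exec_time"]       # this job's completion time
        total += clock
    return total

jobs = [{"exec_time": t} for t in [5, 2, 8, 1, 9, 4]]
scores = {name: schedule(jobs, name) for name in RULES}
best = min(scores, key=scores.get)
print(best)  # SPT
```

SPT provably minimises total completion time on a single static machine; a learned rule-selector earns its keep precisely when the environment is dynamic and no single rule dominates, which is the regime the abstract targets.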
8. Privacy Preserving Federated Anomaly Detection in IoT Edge Computing Using Bayesian Game Reinforcement Learning
Authors: Fatima Asiri, Wajdan Al Malwi, Fahad Masood, Mohammed S. Alshehri, Tamara Zhukabayeva, Syed Aziz Shah, Jawad Ahmad. Computers, Materials & Continua, 2025, No. 8, pp. 3943-3960 (18 pages).
Edge computing (EC) combined with the Internet of Things (IoT) provides a scalable and efficient solution for smart homes. The rapid proliferation of IoT devices poses real-time data processing and security challenges. EC has become a transformative paradigm for addressing these challenges, particularly in intrusion detection and anomaly mitigation. The widespread connectivity of IoT edge networks has exposed them to various security threats, necessitating robust strategies to detect malicious activities. This research presents a privacy-preserving federated anomaly detection framework combining Bayesian game theory (BGT) and double deep Q-learning (DDQL). The proposed framework integrates BGT to model attacker-defender interactions for dynamic adaptation to threat levels and resource availability, capturing the strategic interplay between attackers and defenders under uncertainty. DDQL is incorporated to optimize decision-making and learn optimal defense policies at the edge. Federated learning (FL) enables decentralized anomaly detection without sharing sensitive data between devices. Data were collected from various sensors in a real-time EC-IoT network to identify irregularities caused by different attacks. The results reveal that the proposed model achieves a detection accuracy of up to 98% while maintaining low resource consumption. This study demonstrates the synergy between game theory and FL in strengthening anomaly detection in EC-IoT networks.
Keywords: IoT; Edge computing; Smart homes; Anomaly detection; Bayesian game theory; Reinforcement learning
9. A deep reinforcement learning framework and its implementation for UAV-aided covert communication
Authors: Shu FU, Yi SU, Zhi ZHANG, Liuguo YIN. Chinese Journal of Aeronautics, 2025, No. 2, pp. 403-417 (15 pages).
In this work, we consider an Unmanned Aerial Vehicle (UAV)-aided covert transmission network, which adopts the uplink transmission of Communication Nodes (CNs) as a cover to facilitate covert transmission to a Primary Communication Node (PCN). Specifically, all nodes transmit to the UAV using uplink Non-Orthogonal Multiple Access (NOMA), while the UAV performs covert transmission to the PCN at the same frequency. To minimize the average age of covert information, we formulate a joint optimization problem of UAV trajectory and power allocation design subject to multi-dimensional constraints, including the covertness demand, communication quality requirement, maximum flying speed, and maximum available resources. To address this problem, we embed Signomial Programming (SP) into Deep Reinforcement Learning (DRL) and propose a DRL framework capable of handling constrained Markov decision processes, named SP-embedded Soft Actor-Critic (SSAC). By adopting SSAC, we achieve the joint optimization of UAV trajectory and power allocation. Our simulations show the optimized UAV trajectory and verify the superiority of SSAC over various existing baseline schemes. The results suggest that by maintaining appropriate distances from both the PCN and the CNs, one can effectively enhance covert communication performance by reducing the detection probability of the CNs.
Keywords: Covert communication; Unmanned aerial vehicle; Deep reinforcement learning; Trajectory planning; Power allocation; Communication systems
10. Enhanced deep reinforcement learning for integrated navigation in multi-UAV systems
Authors: Zhengyang CAO, Gang CHEN. Chinese Journal of Aeronautics, 2025, No. 8, pp. 119-138 (20 pages).
In multi-Unmanned Aerial Vehicle (UAV) systems, achieving efficient navigation is essential for executing complex tasks and enhancing autonomy. Traditional navigation methods depend on predefined control strategies and trajectory planning and often perform poorly in complex environments. To improve UAV-environment interaction efficiency, this study proposes a multi-UAV integrated navigation algorithm based on Deep Reinforcement Learning (DRL). This algorithm integrates the Inertial Navigation System (INS), Global Navigation Satellite System (GNSS), and Visual Navigation System (VNS) for comprehensive information fusion. Specifically, an improved multi-UAV integrated navigation algorithm called Information Fusion with Multi-Agent Deep Deterministic Policy Gradient (IF-MADDPG) was developed. This algorithm enables UAVs to learn collaboratively and optimize their flight trajectories in real time. Through simulations and experiments, test scenarios in GNSS-denied environments were constructed to evaluate the effectiveness of the algorithm. The experimental results demonstrate that the IF-MADDPG algorithm significantly enhances the collaborative navigation capabilities of multiple UAVs in formation maintenance and GNSS-denied environments, and offers advantages in mission completion time. This study provides a novel approach for efficient collaboration in multi-UAV systems, significantly improving the robustness and adaptability of navigation systems.
Keywords: Multi-UAV system; Reinforcement learning; Integrated navigation; MADDPG; Information fusion
11. Achievement of Fish School Milling Motion Based on Distributed Multi-agent Reinforcement Learning
Authors: Jincun Liu, Yinjie Ren, Yang Liu, Yan Meng, Dong An, Yaoguang Wei. Journal of Bionic Engineering, 2025, No. 4, pp. 1683-1701 (19 pages).
In recent years, significant research attention has been directed towards swarm intelligence. The milling behavior of fish schools, a prime example of swarm intelligence, shows how simple rules followed by individual agents lead to complex collective behaviors. This paper applies Multi-Agent Reinforcement Learning to simulate fish schooling behavior, overcoming the challenges of tuning parameters in traditional models and addressing the limitations of single-agent methods in multi-agent environments. On this foundation, a novel GCN-Critic MADDPG algorithm leveraging Graph Convolutional Networks (GCN) is proposed to enhance cooperation among agents in a multi-agent system. Simulation experiments demonstrate that, compared to traditional single-agent algorithms, the proposed method not only exhibits significant advantages in convergence speed and stability but also achieves tighter group formations and more naturally aligned milling behavior. Additionally, a fish school self-organizing behavior research platform based on an event-triggered mechanism has been developed, providing a robust tool for exploring dynamic behavioral changes under various conditions.
Keywords: Collective motion; Collective behavior; Self-organization; Fish school; Multi-agent reinforcement learning
12. Trajectory optimization for UAV-enabled relaying with reinforcement learning
Authors: Chiya Zhang, Xinjie Li, Chunlong He, Xingquan Li, Dongping Lin. Digital Communications and Networks, 2025, No. 1, pp. 200-209 (10 pages).
In this paper, we investigate the application of an Unmanned Aerial Vehicle (UAV)-enabled relaying system in emergency communications, where one UAV is applied as a relay to help transmit information from ground users to a Base Station (BS). We maximize the total transmitted data from the users to the BS by optimizing the user communication scheduling and association along with the power allocation and trajectory of the UAV. To solve this non-convex optimization problem, we propose both a traditional Convex Optimization (CO) approach and a Reinforcement Learning (RL)-based approach. Specifically, we apply block coordinate descent and successive convex approximation techniques in the CO approach, and the soft actor-critic algorithm in the RL approach. The simulation results show that both approaches can solve the proposed optimization problem and obtain good results. Moreover, once the training process is completed, the RL approach establishes emergency communications more rapidly than the CO approach.
Keywords: Unmanned aerial vehicle; Emergency communications; Trajectory optimization; Convex optimization; Reinforcement learning
13. AoI-aware transmission control in real-time mmWave energy harvesting systems: a risk-sensitive reinforcement learning approach
Authors: Marzieh Sheikhi, Vesal Hakami. Digital Communications and Networks, 2025, No. 3, pp. 850-865 (16 pages).
The evolution of enabling technologies in wireless communications has paved the way for supporting novel applications with more demanding QoS requirements, but at the cost of increasing the complexity of optimizing the digital communication chain. In particular, Millimeter Wave (mmWave) communications provide an abundance of bandwidth, and energy harvesting supplies the network with a continual source of energy to facilitate self-sustainability; however, harnessing these technologies is challenging due to the stochastic dynamics of the mmWave channel as well as the random, sporadic nature of the harvested energy. In this paper, we aim at the dynamic optimization of update transmissions in mmWave energy harvesting systems in terms of Age of Information (AoI). AoI has recently been introduced to quantify information freshness and is a more stringent QoS metric than conventional delay and throughput. However, most prior art has only addressed average-based AoI metrics, which can be insufficient to capture rare but high-impact freshness violation events in time-critical scenarios. We formulate a control problem that aims to minimize the long-term entropic risk measure of AoI samples by configuring the "sense & transmit" of updates. Due to the high complexity of the exponential cost function, we reformulate the problem with an approximated mean-variance risk measure as the new objective. Under unknown system statistics, we propose a two-timescale, model-free, risk-sensitive reinforcement learning algorithm to compute a control policy that adapts to the trio of channel, energy, and AoI states. We evaluate the efficiency of the proposed scheme through extensive simulations.
Keywords: Age of information; Millimeter wave; Energy harvesting; Risk-sensitive; Model-free reinforcement learning
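The mean-variance approximation of the entropic risk measure mentioned in this abstract can be illustrated on a toy AoI trace: the per-step cost adds a penalty for deviation from a slowly tracked mean, so rare staleness spikes are weighted far more heavily than under an average-AoI objective. The weight, step size, and trace below are hypothetical.

```python
# Illustrative mean-variance risk adjustment for AoI samples.
def risk_adjusted_costs(aoi_samples, lam=0.5, beta=0.05):
    mean_est, adjusted = 0.0, []
    for x in aoi_samples:
        adjusted.append(x + lam * (x - mean_est) ** 2)  # mean-variance cost
        mean_est += beta * (x - mean_est)               # slow-timescale mean
    return adjusted

steady = [2.0] * 5            # fresh updates, low AoI throughout
spike = [2.0] * 4 + [10.0]    # one rare staleness spike
avg = lambda xs: sum(xs) / len(xs)
# the risk-sensitive objective penalises the spike far more than
# its effect on the average AoI alone would suggest
print(avg(risk_adjusted_costs(spike)) > avg(risk_adjusted_costs(steady)))  # True
```

Tracking the mean on a slower timescale than the cost updates mirrors the two-timescale structure the abstract describes, though the paper's algorithm learns a full state-dependent policy rather than a fixed adjustment.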
14. Deep Synchronization Control of Grid-Forming Converters: A Reinforcement Learning Approach
Authors: Zhuorui Wu, Meng Zhang, Bo Fan, Yang Shi, Xiaohong Guan. IEEE/CAA Journal of Automatica Sinica, 2025, No. 1, pp. 273-275 (3 pages).
Dear Editor, This letter proposes a deep synchronization control (DSC) method to synchronize grid-forming converters with power grids. The method involves constructing a novel controller for grid-forming converters based on a stable deep dynamics model. To enhance the performance of the controller, the dynamics model is optimized within the deep reinforcement learning (DRL) framework. Simulation results verify that the proposed method can reduce frequency deviation and improve active power responses.
Keywords: Deep synchronization control (DSC); Grid-forming converters; Deep reinforcement learning (DRL); Frequency deviation; Active power response
15. Container cluster placement in edge computing based on reinforcement learning incorporating graph convolutional networks scheme
Authors: Zhuo Chen, Bowen Zhu, Chuan Zhou. Digital Communications and Networks, 2025, No. 1, pp. 60-70 (11 pages).
Container-based virtualization technology has been more widely used in edge computing environments recently due to its advantages of lighter resource occupation, faster startup, and better resource utilization efficiency. To meet the diverse needs of tasks, multiple network functions are usually instantiated in the form of containers and interconnected to build a Container Cluster (CC). CCs are then deployed on edge service nodes with relatively limited resources. However, the increasingly complex and time-varying nature of tasks brings great challenges to the optimal placement of CCs. This paper regards the charges for various resources occupied by providing services as revenue and the service efficiency and energy consumption as cost, and thus formulates a Mixed Integer Programming (MIP) model to describe the optimal placement of CCs on edge service nodes. Furthermore, an Actor-Critic based Deep Reinforcement Learning (DRL) framework incorporating Graph Convolutional Networks (GCN), named RL-GCN, is proposed to solve the optimization problem. The framework obtains an optimal placement strategy through self-learning according to the requirements and objectives of CC placement. In particular, through the introduction of GCN, the features of the association relationships among multiple containers in a CC can be effectively extracted to improve placement quality. The experiment results show that, under different scales of service nodes and task requests, the proposed method improves system performance in terms of placement error ratio, time efficiency of solution output, and cumulative system revenue compared with other representative baseline methods.
Keywords: Edge computing; Network virtualization; Container cluster; Deep reinforcement learning; Graph convolutional network
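The role of the GCN inside a framework like RL-GCN, extracting features of container-association relationships, can be sketched as one symmetric-normalised graph convolution layer. The 4-container adjacency, feature sizes, and random weights below are hypothetical.

```python
import numpy as np

def gcn_layer(X, A, W):
    """One GCN layer: each container's embedding aggregates the
    features of the containers it is associated with."""
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))      # symmetric normalisation
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0)  # ReLU

rng = np.random.default_rng(42)
A = np.array([[0, 1, 1, 0],     # which containers exchange traffic
              [1, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
X = rng.normal(size=(4, 3))     # per-container resource features
W = rng.normal(size=(3, 8))     # learnable weights (random here)
H = gcn_layer(X, A, W)
print(H.shape)  # (4, 8)
```

Feeding embeddings like `H` to an actor-critic policy, rather than raw per-container features, is what lets the placement decision account for inter-container dependencies.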
16. Can reinforcement learning effectively prevent depression relapse?
Authors: Haewon Byeon. World Journal of Psychiatry, 2025, No. 8, pp. 71-79 (9 pages).
Depression is a prevalent mental health disorder characterized by high relapse rates, highlighting the need for effective preventive interventions. This paper reviews the potential of reinforcement learning (RL) in preventing depression relapse. RL, a subset of artificial intelligence, utilizes machine learning algorithms to analyze behavioral data, enabling early detection of relapse risk and optimization of personalized interventions. RL's ability to tailor treatment in real time by adapting to individual needs and responses offers a dynamic alternative to traditional therapeutic approaches. Studies have demonstrated the efficacy of RL in customizing e-Health interventions and integrating mobile sensing with machine learning for adaptive mental health systems. Despite these advantages, challenges remain in algorithmic complexity, ethical considerations, and clinical implementation. Addressing these issues is crucial for the successful integration of RL into mental health care. This paper concludes with recommendations for future research directions, emphasizing the need for larger-scale studies and interdisciplinary collaboration to fully realize RL's potential in improving mental health outcomes and preventing depression relapse.
Keywords: Reinforcement learning; Depression relapse prevention; Personalized treatment; Machine learning; Mental health interventions
17. Multi-Agent Reinforcement Learning for Moving Target Defense Temporal Decision-Making Approach Based on Stackelberg-FlipIt Games
Authors: Rongbo Sun, Jinlong Fei, Yuefei Zhu, Zhongyu Guo. Computers, Materials & Continua, 2025, No. 8, pp. 3765-3786 (22 pages).
Moving Target Defense (MTD) necessitates scientifically effective decision-making methodologies for defensive technology implementation. While most MTD decision studies focus on accurately identifying optimal strategies, the issue of optimal defense timing remains underexplored. Current default approaches (periodic or overly frequent MTD triggers) lead to suboptimal trade-offs among system security, performance, and cost. The timing of MTD strategy activation critically impacts both defensive efficacy and operational overhead, yet existing frameworks inadequately address this temporal dimension. To bridge this gap, this paper proposes a Stackelberg-FlipIt game model that formalizes asymmetric cyber conflicts as alternating control over attack surfaces, thereby capturing the dynamic security-state evolution of MTD systems. We introduce a belief factor to quantify information asymmetry during adversarial interactions, enhancing the precision of MTD trigger timing. Leveraging this game-theoretic foundation, we employ Multi-Agent Reinforcement Learning (MARL) to derive adaptive temporal strategies, optimized via a novel four-dimensional reward function that holistically balances security, performance, cost, and timing. Experimental validation using IP address mutation against scanning attacks demonstrates stable strategy convergence and accelerated defense response, significantly improving cybersecurity affordability and effectiveness.
Keywords: Cyber security; moving target defense; multi-agent reinforcement learning; security metrics; game theory
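The timing trade-off that the Stackelberg-FlipIt formulation above captures can be illustrated with a minimal FlipIt-style simulation. This is a sketch only: the function name, the periodic-defender-versus-Poisson-attacker setup, and all numeric parameters are assumptions, not the paper's model, which adds a belief factor and MARL-learned trigger times.

```python
import random

def flipit_defender_payoff(period, attack_rate, move_cost, horizon=10_000.0, seed=0):
    """Estimate a defender's FlipIt payoff when the attack surface is
    re-randomized every `period` time units and attacks arrive as a Poisson
    process with rate `attack_rate`; a successful attack holds the surface
    until the defender's next move."""
    rng = random.Random(seed)
    t = 0.0
    controlled = 0.0                       # total time the defender holds the surface
    next_attack = rng.expovariate(attack_rate)
    flips = 0
    while t < horizon:
        end = min(t + period, horizon)
        if next_attack >= end:
            controlled += end - t          # no attack lands in this window
        else:
            controlled += next_attack - t  # defender loses control mid-window
            while next_attack < end:       # later arrivals in the window change nothing
                next_attack += rng.expovariate(attack_rate)
        t = end
        flips += 1
    return controlled / horizon - move_cost * flips / horizon
```

With zero move cost a faster trigger strictly increases the defender's control fraction; it is the per-move cost term that creates the timing optimum the paper learns with MARL.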
A sample selection mechanism for multi-UCAV air combat policy training using multi-agent reinforcement learning
18
Authors: Zihui YAN, Xiaolong LIANG, Yueqi HOU, Aiwu YANG, Jiaqiang ZHANG, Ning WANG. Chinese Journal of Aeronautics, 2025, Issue 6, pp. 501-516 (16 pages)
Policy training against diverse opponents remains a challenge when using Multi-Agent Reinforcement Learning (MARL) in multiple Unmanned Combat Aerial Vehicle (UCAV) air combat scenarios. In view of this, this paper proposes a novel Dominant and Non-dominant strategy sample selection (DoNot) mechanism and a Local Observation Enhanced Multi-Agent Proximal Policy Optimization (LOE-MAPPO) algorithm to train the multi-UCAV air combat policy and improve its generalization. Specifically, the LOE-MAPPO algorithm adopts a mixed state that concatenates the global state and an individual agent's local observation to enable efficient value function learning in multi-UCAV air combat. The DoNot mechanism classifies opponents into dominant or non-dominant strategy opponents, and samples from easier to more challenging opponents to form an adaptive training curriculum. Empirical results demonstrate that the proposed LOE-MAPPO algorithm outperforms baseline MARL algorithms in multi-UCAV air combat scenarios, and the DoNot mechanism leads to stronger policy generalization when facing diverse opponents. The results pave the way for the fast generation of cooperative strategies for air combat agents with MARL algorithms.
Keywords: Unmanned combat aerial vehicle; Air combat; Sample selection; Multi-agent reinforcement learning; Proximal policy optimization
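Two ingredients of the approach above lend themselves to a small sketch: the mixed critic input (global state concatenated with a local observation) and an easy-to-hard opponent sampling schedule in the spirit of DoNot. Function names, the scalar difficulty scores, and the softmax schedule are illustrative assumptions; the paper's mechanism classifies opponents by dominant/non-dominant strategies rather than by a difficulty scalar.

```python
import math

def mixed_critic_input(global_state, local_obs):
    """Mixed state for value learning: the shared global state concatenated
    with one agent's local observation (feature layout assumed here)."""
    return list(global_state) + list(local_obs)

def curriculum_weights(difficulties, progress, temperature=0.25):
    """Opponent-sampling weights that shift from easy to hard as `progress`
    moves from 0 to 1 (a hypothetical stand-in for the DoNot schedule)."""
    scores = [(1.0 - progress) * (1.0 - d) + progress * d for d in difficulties]
    m = max(scores)                               # subtract max for numerical stability
    exps = [math.exp((s - m) / temperature) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]
```

Early in training the sampler concentrates on easy opponents; as `progress` approaches 1 the mass shifts to the hardest ones, forming the adaptive curriculum the abstract describes.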
Intelligent Scheduling of Virtual Power Plants Based on Deep Reinforcement Learning
19
Authors: Shaowei He, Wenchao Cui, Gang Li, Hairun Xu, Xiang Chen, Yu Tai. Computers, Materials & Continua, 2025, Issue 7, pp. 861-886 (26 pages)
The Virtual Power Plant (VPP), as an innovative power management architecture, achieves flexible dispatch and resource optimization of power systems by integrating distributed energy resources. However, due to significant differences in the operational costs and flexibility of various generation resources, as well as the volatility and uncertainty of renewable energy sources (such as wind and solar power) and the complex variability of load demand, the scheduling optimization of virtual power plants has become a critical issue. To solve this, this paper proposes an intelligent scheduling method for virtual power plants based on Deep Reinforcement Learning (DRL), utilizing Deep Q-Networks (DQN) for real-time optimized scheduling of dynamic peaking units (DPU) and stable baseload units (SBU) in the virtual power plant. By modeling the scheduling problem as a Markov Decision Process (MDP) and designing an optimization objective function that integrates both performance and cost, the scheduling efficiency and economic performance of the virtual power plant are significantly improved. Simulation results show that, compared with traditional scheduling methods and other deep reinforcement learning algorithms, the proposed method demonstrates significant advantages in key performance indicators: response time is shortened by up to 34%, task success rate is increased by up to 46%, and costs are reduced by approximately 26%. Experimental results verify the efficiency and scalability of the method under complex load environments and renewable-energy volatility, providing strong technical support for the intelligent scheduling of virtual power plants.
Keywords: Deep reinforcement learning; deep Q-network; virtual power plant; intelligent scheduling; Markov decision process
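The MDP formulation above can be miniaturized. The sketch below replaces the paper's deep Q-network with tabular Q-learning on an invented two-unit dispatch problem (the demand levels, capacities, and costs are made-up illustrations, not the paper's model), keeping the same structure: a state, a unit-commitment action, and a reward that trades running cost against unmet demand.

```python
import random

def train_dispatch_policy(steps=20000, seed=0):
    """Tabular Q-learning on a toy dispatch MDP: each step, a random demand
    level must be met by the stable baseload unit (SBU), the dynamic peaking
    unit (DPU), or both; unmet demand is heavily penalized."""
    rng = random.Random(seed)
    loads = [1, 2, 3]                   # discretized demand levels (invented)
    capacity = {0: 1, 1: 2, 2: 3}       # action -> supplied capacity: SBU, DPU, SBU+DPU
    run_cost = {0: 1.0, 1: 2.5, 2: 3.5}
    q = {(s, a): 0.0 for s in loads for a in capacity}
    alpha, gamma, eps = 0.1, 0.9, 0.2
    s = rng.choice(loads)
    for _ in range(steps):
        a = (rng.randrange(3) if rng.random() < eps     # epsilon-greedy exploration
             else max(capacity, key=lambda x: q[(s, x)]))
        reward = -run_cost[a] - 10.0 * max(0, s - capacity[a])
        s2 = rng.choice(loads)          # demand is exogenous and random
        q[(s, a)] += alpha * (reward + gamma * max(q[(s2, x)] for x in capacity)
                              - q[(s, a)])
        s = s2
    # Greedy policy: cheapest action whose learned value is highest per demand level.
    return {s: max(capacity, key=lambda a: q[(s, a)]) for s in loads}
```

The learned greedy policy runs only the cheap baseload unit at low demand and commits the peaker only when demand requires it, which is the cost/performance balance the paper's objective function encodes.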
Combining deep reinforcement learning with heuristics to solve the traveling salesman problem
20
Authors: Li Hong, Yu Liu, Mengqiao Xu, Wenhui Deng. Chinese Physics B, 2025, Issue 1, pp. 96-106 (11 pages)
Recent studies employing deep learning to solve the traveling salesman problem (TSP) have mainly focused on learning construction heuristics. Such methods can improve TSP solutions, but still depend on additional programs. However, methods that focus on learning improvement heuristics to iteratively refine solutions remain insufficient. Traditional improvement heuristics are guided by a manually designed search strategy and may only achieve limited improvements. This paper proposes a novel framework for learning improvement heuristics, which automatically discovers better improvement policies for heuristics to iteratively solve the TSP. Our framework first designs a new architecture based on a transformer model to parameterize the policy network, introducing an action-dropout layer to prevent action selection from overfitting. It then proposes a deep reinforcement learning approach integrating a simulated annealing mechanism (named RL-SA) to learn the pairwise selection policy, aiming to improve the 2-opt algorithm's performance. RL-SA leverages the whale optimization algorithm to generate initial solutions for better sampling efficiency and uses a Gaussian perturbation strategy to tackle the sparse reward problem of reinforcement learning. The experimental results show that the proposed approach is significantly superior to state-of-the-art learning-based methods, and further reduces the gap between learning-based methods and highly optimized solvers on the benchmark datasets. Moreover, our pre-trained model M can be applied to guide the SA algorithm (named M-SA (ours)), which performs better than existing deep models on small-, medium-, and large-scale TSPLIB datasets. Additionally, M-SA (ours) achieves excellent generalization performance on a real-world dataset of global liner shipping routes, with optimization percentages in distance reduction ranging from 3.52% to 17.99%.
Keywords: traveling salesman problem; deep reinforcement learning; simulated annealing algorithm; transformer model; whale optimization algorithm
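The 2-opt improvement loop that RL-SA builds on can be sketched without the learned components. In the sketch below, the policy network's pairwise edge selection is replaced by uniform random sampling and the whale-optimization initialization by the input order, leaving plain 2-opt with a simulated-annealing acceptance rule; the function name and all parameters are assumptions, not the paper's settings.

```python
import math
import random

def two_opt_sa(points, steps=20000, t0=1.0, cooling=0.9995, seed=0):
    """2-opt local search with simulated-annealing acceptance: uphill moves
    are accepted with probability exp(-delta / temperature)."""
    rng = random.Random(seed)
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])
    tour = list(range(n))
    cur = sum(dist(tour[i], tour[(i + 1) % n]) for i in range(n))
    best, best_tour = cur, tour[:]
    temp = t0
    for _ in range(steps):
        i, j = sorted(rng.sample(range(n), 2))
        if j - i < 2 or (i == 0 and j == n - 1):
            continue                    # degenerate move, nothing to reverse
        # Reversing tour[i+1..j] replaces edges (a,b) and (c,d) by (a,c) and (b,d).
        a, b = tour[i], tour[i + 1]
        c, d = tour[j], tour[(j + 1) % n]
        delta = dist(a, c) + dist(b, d) - dist(a, b) - dist(c, d)
        if delta < 0 or rng.random() < math.exp(-delta / max(temp, 1e-12)):
            tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
            cur += delta
            if cur < best:
                best, best_tour = cur, tour[:]
        temp *= cooling                 # geometric cooling schedule
    return best_tour, best
```

What RL-SA learns is precisely the part randomized here: which pair of edges to swap at each step, so the search wastes fewer moves than uniform sampling.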