Journal Articles
1,352 articles found
1. Toward Collaborative and Adaptive Learning: A Survey of Multi-agent Reinforcement Learning in Education
Authors: Sirine Bouguettaya, Ouarda Zedadra, Francesco Pupo, Giancarlo Fortino. Artificial Intelligence Science and Engineering, 2026, Issue 1, pp. 1-19.
In recent years, researchers have leveraged single-agent reinforcement learning to boost educational outcomes and deliver personalized interventions; yet this paradigm provides no capacity for inter-agent interaction. Multi-agent reinforcement learning (MARL) overcomes this limitation by allowing several agents to learn simultaneously within a shared environment, each choosing actions that maximize its own or the group's rewards. By explicitly modeling and exploiting agent-to-agent dynamics, MARL can align those interactions with pedagogical goals such as peer tutoring, collaborative problem-solving, or gamified competition, thus opening richer avenues for adaptive and socially informed learning experiences. This survey investigates the impact of MARL on educational outcomes by examining evidence of its effectiveness in enhancing learner performance, engagement, and equity, and in reducing teacher workload compared to single-agent or traditional approaches. It explores the educational domains and pedagogical problems addressed by MARL, identifies the algorithmic families used, and analyzes their influence on learning. The review also assesses experimental settings and evaluation metrics to determine ecological validity, and outlines current challenges and future research directions in applying MARL to education.
Keywords: reinforcement learning; multi-agent reinforcement learning; agentic AI; education; generative AI
2. Beyond Wi-Fi 7: Enhanced Decentralized Wireless Local Area Networks with Federated Reinforcement Learning
Authors: Rashid Ali, Alaa Omran Almagrabi. Computers, Materials & Continua, 2026, Issue 3, pp. 391-409.
Wi-Fi technology has evolved significantly since its introduction in 1997, advancing to Wi-Fi 6 as the latest standard, with Wi-Fi 7 currently under development. Despite these advancements, integrating machine learning into Wi-Fi networks remains challenging, especially in decentralized environments with multiple access points (mAPs). This paper is a short review that summarizes the potential applications of federated reinforcement learning (FRL) across eight key areas of Wi-Fi functionality, including channel access, link adaptation, beamforming, multi-user transmissions, channel bonding, multi-link operation, spatial reuse, and multi-basic service set (multi-BSS) coordination. FRL is highlighted as a promising framework for enabling decentralized training and decision-making while preserving data privacy. To illustrate its role in practice, we present a case study on link activation in a multi-link operation (MLO) environment with multiple APs. Through theoretical discussion and simulation results, the study demonstrates how FRL can improve performance and reliability, paving the way for more adaptive and collaborative Wi-Fi networks in the era of Wi-Fi 7 and beyond.
Keywords: artificial intelligence; reinforcement learning; channel selection; wireless local area networks; 802.11ax; 802.11be; Wi-Fi
3. Multi-agent reinforcement learning with layered autonomy and collaboration for enhanced collaborative confrontation
Authors: Xiaoyu XING, Haoxiang XIA. Chinese Journal of Aeronautics, 2026, Issue 2, pp. 370-388.
Addressing optimal confrontation methods in multi-agent attack-defense scenarios is a complex challenge. Multi-Agent Reinforcement Learning (MARL) provides an effective framework for tackling sequential decision-making problems, significantly enhancing swarm intelligence in maneuvering. However, applying MARL to unmanned swarms presents two primary challenges. First, defensive agents must balance autonomy with collaboration under limited perception while coordinating against adversaries. Second, current algorithms aim to maximize global or individual rewards, making them sensitive to fluctuations in enemy strategies and environmental changes, especially when rewards are sparse. To tackle these issues, we propose a Multi-Agent Reinforcement Learning with Layered Autonomy and Collaboration (MARL-LAC) algorithm for collaborative confrontations. This algorithm integrates twin critics to mitigate the high variance associated with policy gradients. Furthermore, MARL-LAC employs layered autonomy and collaboration to address multi-objective problems, specifically learning a global reward function for the swarm alongside local reward functions for individual defensive agents. Experimental results demonstrate that MARL-LAC enhances decision-making and collaborative behaviors among agents, outperforming existing algorithms and underscoring the importance of layered autonomy and collaboration in multi-agent systems. The observed adversarial behaviors show that agents using MARL-LAC effectively maintain cohesive formations that conceal their intentions by confusing the offensive agent while successfully encircling the target.
Keywords: attack-defense confrontation; collaborative confrontation; autonomous agents; multi-agent systems; reinforcement learning; maneuvering decision-making
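The variance-reduction mechanism this abstract attributes to twin critics can be sketched in a few lines. This is a generic TD3-style clipped double-Q target under assumed reward and critic values, not the authors' exact MARL-LAC formulation.

```python
# Minimal sketch (assumption: TD3-style clipped double-Q, not the exact
# MARL-LAC design): taking the minimum of two critic estimates curbs the
# overestimation that inflates policy-gradient variance.

def twin_critic_target(reward, q1_next, q2_next, gamma=0.99, done=False):
    """Bellman target using the pessimistic (minimum) critic estimate."""
    if done:
        return reward
    return reward + gamma * min(q1_next, q2_next)
```

With two disagreeing critics the pessimistic one always sets the target, so a single over-optimistic critic cannot drag the policy toward overvalued actions.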
4. A State-of-the-Art Survey of Adversarial Reinforcement Learning for IoT Intrusion Detection
Authors: Qasem Abu Al-Haija, Shahad Al Tamimi. Computers, Materials & Continua, 2026, Issue 4, pp. 26-94.
Adversarial Reinforcement Learning (ARL) models for intelligent devices and Network Intrusion Detection Systems (NIDS) improve system resilience against sophisticated cyber-attacks. As a core component of ARL, Adversarial Training (AT) enables NIDS agents to discover and prevent new attack paths by exposing them to adversarial examples, thereby increasing detection accuracy, reducing False Positives (FPs), and enhancing network security. To develop robust decision-making capabilities for real-world network disruptions and hostile activity, NIDS agents are trained in adversarial scenarios to monitor the current state and notify management of any abnormal or malicious activity. The accuracy and timeliness of the IDS are crucial to the network's availability and reliability. This paper analyzes ARL applications in NIDS, reviewing State-of-The-Art (SoTA) methodologies, open issues, and future research prospects. This includes Reinforcement Machine Learning (RML)-based NIDS, which enables an agent to interact with the environment to achieve a goal, and Deep Reinforcement Learning (DRL)-based NIDS, which can solve complex decision-making problems. Additionally, this survey addresses adversarial circumstances in cybersecurity and their importance for ARL and NIDS. Architectural design, RL algorithms, feature representation, and training methodologies are examined across the ARL-NIDS literature. This comprehensive study evaluates ARL for intelligent NIDS research, benefiting cybersecurity researchers, practitioners, and policymakers, and promotes research and innovation in cybersecurity defense.
Keywords: reinforcement learning; network intrusion detection; adversarial training; deep learning; cybersecurity defense; intrusion detection system; machine learning
5. An Improved Reinforcement Learning-Based 6G UAV Communication for Smart Cities
Authors: Vi Hoai Nam, Chu Thi Minh Hue, Dang Van Anh. Computers, Materials & Continua, 2026, Issue 1, pp. 2030-2044.
Unmanned Aerial Vehicles (UAVs) have become integral components in smart city infrastructures, supporting applications such as emergency response, surveillance, and data collection. However, the high mobility and dynamic topology of Flying Ad Hoc Networks (FANETs) present significant challenges for maintaining reliable, low-latency communication. Conventional geographic routing protocols often struggle in situations where link quality varies and mobility patterns are unpredictable. To overcome these limitations, this paper proposes an improved routing protocol based on reinforcement learning. The new approach integrates Q-learning with mechanisms that are both link-aware and mobility-aware. The proposed method optimizes the selection of relay nodes by using an adaptive reward function that accounts for energy consumption, delay, and link quality. Additionally, a Kalman filter is integrated to predict UAV mobility, improving the stability of communication links under dynamic network conditions. Simulation experiments were conducted in realistic scenarios, varying the number of UAVs to assess scalability. Key performance metrics were analyzed, including the packet delivery ratio, end-to-end delay, and total energy consumption. The results demonstrate that the proposed approach improves the packet delivery ratio by 12%–15% and reduces delay by up to 25.5% compared to the conventional GEO and QGEO protocols. However, this improvement comes at the cost of higher energy consumption due to additional computation and control overhead. Despite this trade-off, the proposed solution ensures reliable and efficient communication, making it well-suited for large-scale UAV networks operating in complex urban environments.
Keywords: UAV; FANET; smart cities; reinforcement learning; Q-learning
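The core of the approach above is a Q-learning update driven by a composite reward over energy, delay, and link quality. The sketch below illustrates that idea only; the weights `w_e`, `w_d`, `w_l` and the state/action encoding are hypothetical placeholders, not the paper's actual reward design.

```python
import numpy as np

def composite_reward(energy, delay, link_quality, w_e=0.3, w_d=0.3, w_l=0.4):
    """Toy adaptive reward: reward link quality, penalize energy and delay.
    The weights are illustrative, not taken from the paper."""
    return w_l * link_quality - w_e * energy - w_d * delay

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """Standard tabular Q-learning temporal-difference update."""
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q

Q = np.zeros((4, 3))  # hypothetical: 4 relay states, 3 candidate next hops
r = composite_reward(energy=0.2, delay=0.1, link_quality=0.9)
Q = q_update(Q, s=0, a=1, r=r, s_next=2)
```

In a routing context each state would encode the current forwarder and each action a candidate relay; the Kalman-filtered mobility prediction from the paper would feed into the link-quality term.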
6. A Novel Evolutionary Optimized Transformer-Deep Reinforcement Learning Framework for False Data Injection Detection in Industry 4.0 Smart Water Infrastructures
Authors: Ahmad Salehiyan, Nuria Serrano, Francisco Hernando-Gallego, Diego Martín, José Vicente Álvarez-Bravo. Computers, Materials & Continua, 2026, Issue 5, pp. 1588-1624.
The increasing integration of cyber-physical components in Industry 4.0 water infrastructures has heightened the risk of false data injection (FDI) attacks, posing critical threats to operational integrity, resource management, and public safety. Traditional detection mechanisms often struggle to generalize across heterogeneous environments or to adapt to sophisticated, stealthy threats. To address these challenges, we propose a novel evolutionary optimized transformer-based deep reinforcement learning framework (Evo-Transformer-DRL) designed for robust and adaptive FDI detection in smart water infrastructures. The proposed architecture integrates three powerful paradigms: a transformer encoder for modeling complex temporal dependencies in multivariate time series, a DRL agent for learning optimal decision policies in dynamic environments, and an evolutionary optimizer to fine-tune model hyper-parameters. This synergy enhances detection performance while maintaining adaptability across varying data distributions. Specifically, the hyper-parameters of both the transformer and DRL modules are optimized using an improved grey wolf optimizer (IGWO), ensuring a balanced trade-off between detection accuracy and computational efficiency. The model is trained and evaluated on three realistic Industry 4.0 water datasets: secure water treatment (SWaT), water distribution (WADI), and the battle of the attack detection algorithms (BATADAL), which capture diverse attack scenarios in smart treatment and distribution systems. Comparative analysis against state-of-the-art baselines, including Transformer, DRL, bidirectional encoder representations from transformers (BERT), convolutional neural network (CNN), long short-term memory (LSTM), and support vector machine (SVM) models, demonstrates that the proposed Evo-Transformer-DRL framework consistently outperforms the others on key metrics such as accuracy, recall, area under the curve (AUC), and execution time. Notably, it achieves a maximum detection accuracy of 99.19%, highlighting its strong generalization capability across different testbeds. These results confirm the suitability of our hybrid framework for real-world Industry 4.0 deployment, where rapid adaptation, scalability, and reliability are paramount for securing critical infrastructure systems.
Keywords: Industry 4.0; smart water systems; false data injection detection; cyber-physical security; transformer; deep reinforcement learning; grey wolf optimizer
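The grey wolf optimizer named above tunes hyper-parameters by moving candidates toward the three best solutions found so far. The sketch below is the standard GWO position update, not the paper's improved (IGWO) variant; the candidate vector, leaders, and decay schedule are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch of one standard grey wolf optimizer step (assumption: canonical
# GWO, not the IGWO variant in the paper). Each candidate position x is
# pulled toward the alpha, beta, and delta wolves; the coefficient `a`
# conventionally decays from 2 to 0 over iterations.

def gwo_step(x, leaders, a):
    """Average the stochastic pulls toward the three leading wolves."""
    pulls = []
    for leader in leaders:
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        A = 2 * a * r1 - a          # exploration/exploitation coefficient
        C = 2 * r2                  # random emphasis on the leader
        pulls.append(leader - A * np.abs(C * leader - x))
    return np.mean(pulls, axis=0)
```

At `a = 0` (end of the schedule) the update collapses to the mean of the three leaders, i.e., pure exploitation.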
7. A Multi-Objective Deep Reinforcement Learning Algorithm for Computation Offloading in Internet of Vehicles
Authors: Junjun Ren, Guoqiang Chen, Zheng-Yi Chai, Dong Yuan. Computers, Materials & Continua, 2026, Issue 1, pp. 2111-2136.
Vehicle Edge Computing (VEC) and Cloud Computing (CC) significantly enhance the processing efficiency of delay-sensitive and computation-intensive applications by offloading compute-intensive tasks from resource-constrained onboard devices to nearby Roadside Units (RSUs), thereby achieving lower delay and energy consumption. However, due to the limited storage capacity and energy budget of RSUs, it is challenging to meet the demands of the highly dynamic Internet of Vehicles (IoV) environment. Therefore, determining reasonable service caching and computation offloading strategies is crucial. To address this, this paper proposes a joint service caching scheme for cloud-edge collaborative IoV computation offloading. By modeling the dynamic optimization problem as a Markov Decision Process (MDP), the scheme jointly optimizes task delay, energy consumption, load balancing, and privacy entropy to achieve better quality of service. Additionally, a dynamic adaptive multi-objective deep reinforcement learning algorithm is proposed. Each Double Deep Q-Network (DDQN) agent obtains rewards for different objectives based on distinct reward functions and dynamically updates the objective weights by learning the value relationships between objectives using Radial Basis Function Networks (RBFNs), thereby efficiently approximating Pareto-optimal decisions for multiple objectives. Extensive experiments demonstrate that the proposed algorithm better coordinates the three-tier computing resources of cloud, edge, and vehicles. Compared to existing algorithms, the proposed method reduces task delay and energy consumption by 10.64% and 5.1%, respectively.
Keywords: deep reinforcement learning; Internet of Vehicles; multi-objective optimization; cloud-edge computing; computation offloading; service caching
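Two building blocks of the approach above can be sketched directly: a Double-DQN target (online network selects the next action, target network evaluates it) and a weighted scalarization of per-objective rewards. The fixed weight vector here is a stand-in for the RBFN-learned dynamic weights; all values are hypothetical.

```python
import numpy as np

def scalarize(rewards, weights):
    """Combine per-objective rewards (e.g. delay, energy, load balance,
    privacy entropy) into one scalar via normalized weights. The weights
    stand in for the RBFN-learned dynamic weights in the paper."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return float(np.dot(weights, rewards))

def ddqn_target(reward, q_online_next, q_target_next, gamma=0.95, done=False):
    """Double DQN: the online net picks the action, the target net values it,
    decoupling selection from evaluation to reduce overestimation."""
    if done:
        return reward
    a_star = int(np.argmax(q_online_next))
    return reward + gamma * float(q_target_next[a_star])
```

In the paper's setting each objective's DDQN agent would compute its own target from its own reward function before the weighted combination.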
8. Safe Deep Reinforcement Learning for Real-time AC Optimal Power Flow: A Near-optimal Solution
Authors: Bin Feng, Jiayue Zhao, Gang Huang, Yijie Hu, Huating Xu, Changxin Guo, Zhe Chen. CSEE Journal of Power and Energy Systems, 2026, Issue 1, pp. 99-111.
The real-time AC optimal power flow (OPF) problem is key to making fast and accurate decisions that ensure the safety and economy of power systems. With the rapid development of renewable energies, power fluctuations have intensified; thus, a novel approach based on safe deep reinforcement learning is proposed in this paper. Herein, the real-time ACOPF problem is modeled as a constrained Markov decision process, and primal-dual optimization (PDO)-based proximal policy optimization (PPO) is used to learn the optimal generator outputs in the primal domain and the security constraints in the dual domain, which avoids manually selecting a trade-off between penalties for constraint violations and rewards for economy. Before training, behavior cloning transfers expert experience into the initial weights of the neural networks. Moreover, multiprocessing training is utilized to accelerate training. Case studies are conducted on the IEEE 118-bus system and a modified IEEE 118-bus system. Compared with other methods, the experimental results show that the proposed method achieves security and near-optimal economic performance by rapidly solving the real-time ACOPF problem.
Keywords: behavior cloning; deep reinforcement learning; multiprocessing training; optimal power flow; primal-dual optimization; proximal policy optimization
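The primal-dual mechanism described above replaces a hand-tuned penalty weight with a learned Lagrange multiplier. A minimal sketch of the dual step, under assumed values for the constraint signal and learning rate:

```python
# Sketch of the dual update in primal-dual constrained RL: the Lagrange
# multiplier rises while the measured constraint value exceeds its limit
# and decays otherwise, so the penalty strength is learned rather than
# hand-tuned. Values below are illustrative.

def dual_update(lmbda, constraint_value, limit, lr=0.01):
    """Projected gradient ascent on the dual variable (kept non-negative)."""
    return max(0.0, lmbda + lr * (constraint_value - limit))
```

In the paper's setting the constraint value would be an expected security-constraint violation (e.g., a line-flow or voltage bound), re-estimated each policy iteration between PPO updates.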
9. Ride-hailing Electric Vehicle Dispatching for Resilience Reserve Enhancement: An Interactive Deep Reinforcement Learning Approach
Authors: Ran Tao, Dongmei Zhao, Haoxiang Wang, Yinghui Wang, Xuan Xia. CSEE Journal of Power and Energy Systems, 2026, Issue 1, pp. 448-465.
Ride-hailing electric vehicles are mobile resources with the dispatch potential to improve resilience. However, they have not been well investigated because their charging and order-serving are affected or managed by both the power grid dispatching center and the ride-hailing platform. Effective pre-event strategies can improve prevention capability for high-impact, low-probability (HILP) events and provide the foundation for measures in the response and restoration stages. First, this paper proposes a resilience reserve concept to expand existing research on power system resilience. Secondly, this paper puts forward an interactive deep reinforcement learning method that considers the interests of both the power grid dispatching center and the ride-hailing platform. It improves the resilience reserve through order dispatch, orderly charging management of ride-hailing electric vehicles, and the pricing strategy of charging stations. Finally, this paper uses a practical example covering about 107.32 km² in the center of Chengdu to verify that the proposed method improves the resilience reserve of the power system without noticeably damaging the interests of the ride-hailing platform.
Keywords: charging scheduling; electric vehicle; power system resilience; reinforcement learning; ride-hailing
10. Energy Optimization for Autonomous Mobile Robot Path Planning Based on Deep Reinforcement Learning
Authors: Longfei Gao, Weidong Wang, Dieyun Ke. Computers, Materials & Continua, 2026, Issue 1, pp. 984-998.
At present, energy consumption is one of the main bottlenecks in autonomous mobile robot development. To address the challenge of high energy consumption in path planning for autonomous mobile robots navigating unknown and complex environments, this paper proposes an Attention-Enhanced Dueling Deep Q-Network (AD-Dueling DQN), which integrates a multi-head attention mechanism and a prioritized experience replay strategy into a Dueling-DQN reinforcement learning framework. A multi-objective reward function, centered on energy efficiency, is designed to comprehensively consider path length, terrain slope, motion smoothness, and obstacle avoidance, enabling optimal low-energy trajectory generation in 3D space from the source. The multi-head attention mechanism allows the model to dynamically focus on energy-critical state features, such as slope gradients and obstacle density, thereby significantly improving its ability to recognize and avoid energy-intensive paths. Additionally, the prioritized experience replay mechanism accelerates learning from key decision-making experiences, suppressing inefficient exploration and guiding the policy toward low-energy solutions more rapidly. The effectiveness of the proposed path planning algorithm is validated through simulation experiments conducted in multiple off-road scenarios. Results demonstrate that AD-Dueling DQN consistently achieves the lowest average energy consumption across all tested environments. Moreover, the proposed method exhibits faster convergence and greater training stability than baseline algorithms, highlighting its global optimization capability under energy-aware objectives in complex terrains. This study offers an efficient and scalable intelligent control strategy for the development of energy-conscious autonomous navigation systems.
Keywords: autonomous mobile robot; deep reinforcement learning; energy optimization; multi-head attention mechanism; prioritized experience replay; dueling deep Q-network
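The dueling architecture named above splits the Q-function into a state value and per-action advantages. The standard aggregation is shown below with toy numbers; the network heads that produce `value` and `advantages` are omitted.

```python
import numpy as np

# Sketch of the standard dueling-DQN aggregation:
#   Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a')
# Subtracting the mean advantage makes V and A identifiable, since any
# constant shifted between them would otherwise leave Q unchanged.

def dueling_q(value, advantages):
    """Combine a scalar state value with an advantage vector into Q-values."""
    advantages = np.asarray(advantages, dtype=float)
    return value + advantages - advantages.mean()
```

Note that the resulting Q-values preserve the action ranking of the advantages; only their common offset comes from the value head.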
11. Implementation of Human-AI Interaction in Reinforcement Learning: Literature Review and Case Studies
Authors: Shaoping Xiao, Zhaoan Wang, Junchao Li, Caden Noeller, Jiefeng Jiang, Jun Wang. Computers, Materials & Continua, 2026, Issue 2, pp. 1-62.
The integration of human factors into artificial intelligence (AI) systems has emerged as a critical research frontier, particularly in reinforcement learning (RL), where human-AI interaction (HAII) presents both opportunities and challenges. As RL continues to demonstrate remarkable success in model-free and partially observable environments, its real-world deployment increasingly requires effective collaboration with human operators and stakeholders. This article systematically examines HAII techniques in RL through both theoretical analysis and practical case studies. We establish a conceptual framework built upon three fundamental pillars of effective human-AI collaboration: computational trust modeling, system usability, and decision understandability. Our comprehensive review organizes HAII methods into five key categories: (1) learning from human feedback, including various reward-shaping approaches; (2) learning from human demonstration through inverse RL and imitation learning; (3) shared autonomy architectures for dynamic control allocation; (4) human-in-the-loop querying strategies for active learning; and (5) explainable RL techniques for interpretable policy generation. Recent state-of-the-art works are critically reviewed, with particular emphasis on advances incorporating large language models in human-AI interaction research. To illustrate these concepts, we present three detailed case studies: an empirical trust model for farmers adopting AI-driven agricultural management systems, the implementation of ethical constraints in robotic motion planning through human-guided RL, and an experimental investigation of human trust dynamics using a multi-armed bandit paradigm. These applications demonstrate how HAII principles can enhance the practical utility of RL systems while bridging the gap between theoretical RL and real-world human-centered applications, ultimately contributing to more deployable and socially beneficial intelligent systems.
Keywords: human-AI interaction; reinforcement learning; partially observable environments; trust model; ethical constraints
12. A Regional Distribution Network Coordinated Optimization Strategy for Electric Vehicle Clusters Based on Parametric Deep Reinforcement Learning
Authors: Lei Su, Wanli Feng, Cao Kan, Mingjiang Wei, Jihai Wang, Pan Yu, Lingxiao Yang. Energy Engineering, 2026, Issue 3, pp. 195-214.
To address the high costs and operational instability of distribution networks caused by the large-scale integration of distributed energy resources (DERs), such as photovoltaic (PV) systems, wind turbines (WTs), and energy storage (ES) devices, and the increased grid load fluctuations and safety risks due to uncoordinated electric vehicle (EV) charging, this paper proposes a novel dual-scale hierarchical collaborative optimization strategy. The strategy decouples system-level economic dispatch from distributed EV agent control, effectively resolving the resource coordination conflicts arising from the high computational complexity and poor scalability of existing centralized optimization, or from the reliance on local-information decision-making in fully decentralized frameworks. At the lower level, an EV charging and discharging model with a hybrid discrete-continuous action space is established and optimized using an improved Parameterized Deep Q-Network (PDQN) algorithm, which directly handles mode selection and power regulation while embedding physical constraints to ensure safety. At the upper level, microgrid (MG) operators adopt a dynamic pricing strategy optimized through Deep Reinforcement Learning (DRL) to maximize economic benefits and achieve peak-valley shaving. Simulation results show that the proposed strategy outperforms traditional methods, reducing the total operating cost of the MG by 21.6%, decreasing the peak-to-valley load difference by 33.7%, reducing the number of voltage limit violations by 88.9%, and lowering the average electricity cost for EV users by 15.2%. The method yields a win-win outcome for operators and users, providing a reliable and efficient scheduling solution for distribution networks with high renewable energy penetration.
Keywords: power system; regional distributed energy; electric vehicle; deep reinforcement learning; collaborative optimization
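The hybrid discrete-continuous action space above pairs a discrete mode with a continuous power level. A minimal sketch of such a parameterized action selection, with hypothetical mode names and power limits (the clipping stands in for the embedded physical constraints):

```python
import numpy as np

# Sketch of a PDQN-style hybrid action (assumed structure, not the paper's
# exact model): a discrete charging mode plus a continuous power setpoint.
# MODES and the 7 kW limit are hypothetical.

MODES = ("charge", "idle", "discharge")

def select_action(q_values, powers, p_max=7.0):
    """Pick the highest-Q discrete mode and its clipped continuous power."""
    k = int(np.argmax(q_values))
    power = 0.0 if MODES[k] == "idle" else float(np.clip(powers[k], 0.0, p_max))
    return MODES[k], power
```

In PDQN the `powers` vector would come from a continuous-parameter network and `q_values` from a Q-network conditioned on those parameters; here both are given directly for illustration.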
13. A Deep Reinforcement Learning-Based Partitioning Method for Power System Parallel Restoration
Authors: Changcheng Li, Weimeng Chang, Dahai Zhang, Jinghan He. Energy Engineering, 2026, Issue 1, pp. 243-264.
Effective partitioning is crucial for enabling parallel restoration of power systems after blackouts. This paper proposes a novel partitioning method based on deep reinforcement learning. First, the partitioning decision process is formulated as a Markov decision process (MDP) model that maximizes modularity, while key partitioning constraints on parallel restoration are considered. Second, based on the partitioning objective and constraints, the reward function of the partitioning MDP model adopts a relative deviation normalization scheme to reduce mutual interference between the reward and penalty terms. A soft bonus scaling mechanism is introduced to mitigate overestimation caused by abrupt jumps in the reward. Then, the deep Q-network method is applied to solve the partitioning MDP model and generate partitioning schemes, with two experience replay buffers employed to speed up training. Finally, case studies on the IEEE 39-bus test system demonstrate that the proposed method generates a high-modularity partitioning result that meets all key partitioning constraints, thereby improving the parallelism and reliability of the restoration process. Moreover, simulation results demonstrate that an appropriate discount factor is crucial for ensuring both the convergence speed and the stability of partitioning training.
Keywords: partitioning method; parallel restoration; deep reinforcement learning; experience replay buffer; partitioning modularity
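The modularity objective driving the MDP above is the standard Newman modularity, Q = (1/2m) Σ_ij (A_ij − k_i k_j / 2m) δ(c_i, c_j). A direct, unoptimized sketch on a toy graph (the adjacency matrix and community labels are illustrative, not from the paper):

```python
import numpy as np

# Sketch: Newman modularity of a partition, the objective the partitioning
# MDP maximizes. O(n^2) double loop; fine for illustration, not for scale.

def modularity(A, communities):
    """Modularity of `communities` over adjacency matrix A (undirected)."""
    A = np.asarray(A, dtype=float)
    m = A.sum() / 2.0              # total number of edges
    k = A.sum(axis=1)              # node degrees
    q = 0.0
    for i in range(len(A)):
        for j in range(len(A)):
            if communities[i] == communities[j]:
                q += A[i, j] - k[i] * k[j] / (2 * m)
    return q / (2 * m)
```

Two disconnected edges split into their natural two communities give the known value Q = 0.5, a convenient sanity check.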
14. Enhanced multi-agent deep reinforcement learning for efficient task offloading and resource allocation in vehicular networks
Authors: Long Xu, Jiale Tan, Hongcheng Zhuang. Digital Communications and Networks, 2026, Issue 1, pp. 66-75.
In response to the rising demand for low-latency, computation-intensive applications in vehicular networks, this paper proposes an adaptive task offloading approach for Vehicle-to-Everything (V2X) environments. Leveraging an enhanced Multi-Agent Deep Deterministic Policy Gradient (MADDPG) algorithm with an attention mechanism, the proposed approach optimizes computation offloading and resource allocation, aiming to minimize energy consumption and service delay. In this paper, vehicles dynamically offload computation-intensive tasks both to nearby vehicles through V2V links and to roadside units through V2I links. The adaptive attention mechanism enables the system to prioritize relevant state information, leading to faster convergence. Simulations conducted in a realistic urban V2X scenario demonstrate that the proposed Attention-enhanced MADDPG (AT-MADDPG) algorithm significantly improves performance, achieving notable reductions in both energy consumption and latency compared to baseline algorithms, especially in high-demand, dynamic scenarios.
Keywords: computation offloading; vehicular networks; deep reinforcement learning; adaptive offloading; spectrum and power allocation
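The attention mechanism above lets each agent weight neighbor information by relevance. A generic scaled dot-product attention sketch, under assumed toy dimensions (the paper's exact attention design may differ):

```python
import numpy as np

# Sketch of scaled dot-product attention over neighbor feature vectors:
# each agent scores neighbors by similarity to its own query and forms a
# weighted summary. Query/key/value inputs here are illustrative.

def softmax(x):
    e = np.exp(x - np.max(x))       # shift for numerical stability
    return e / e.sum()

def attend(query, keys, values):
    """Return the attention-weighted combination of values and the weights."""
    scores = keys @ query / np.sqrt(len(query))
    w = softmax(scores)
    return w @ values, w
```

Neighbors whose keys align with the agent's query (e.g., vehicles on the same offloading path) receive higher weight, which is the "prioritize relevant state information" effect the abstract credits for faster convergence.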
15. Robust Voltage Control for Active Distribution Networks via Safe Deep Reinforcement Learning Against State Perturbations
Authors: Meng Tian, Xiaoxu Li, Ziyang Zhu, Zhengcheng Dong, Li Gong, Jingang Lai. Protection and Control of Modern Power Systems, 2026, Issue 1, pp. 192-207.
With the prevalence of renewable distributed energy resources (DERs) such as photovoltaics (PVs), modern active distribution networks (ADNs) suffer from voltage deviation and power quality issues. However, traditional voltage control methods often face a trade-off between efficiency and effectiveness, and rarely ensure robust voltage safety under the state perturbations typical of practical distribution grids. In this paper, a robust model-free voltage regulation approach is proposed that takes both security and robustness into account. The voltage control problem is formulated as a constrained Markov decision process (CMDP). A safety-augmented multi-agent deep deterministic policy gradient (MADDPG) algorithm is then trained to enable real-time collaborative optimization of ADNs, aiming to maintain nodal voltages within safe operational limits while minimizing total line losses. Moreover, a robust regulation loss is introduced to ensure reliable performance under various state perturbations in practical voltage control. The proposed regulation algorithm effectively balances efficiency, safety, and robustness, and also shows potential for generalizing these characteristics to other applications. Numerical studies validate the robustness of the proposed method under varying state perturbations on the IEEE test cases, as well as its optimal integrated control performance compared to other benchmarks.
Keywords: active distribution network; robust voltage control; state perturbation; model-free; safe deep reinforcement learning
16. Deep Reinforcement Learning for Competitive DER Pricing Problem of Virtual Power Plants
Authors: Zheng Xu, Ye Guo, Hongbin Sun, Wenjun Tang, Wenqi Huang 《CSEE Journal of Power and Energy Systems》 2026, No. 1, pp. 150-161 (12 pages)
Pricing competition between virtual power plants (VPPs) for distributed energy resources (DERs) is considered in this paper. Due to the limited amount of DERs in one distributed area, VPPs have to compete for the rights to work with DERs and then sell electricity from internal DERs in the wholesale market. To address this pricing problem, a Markov decision process (MDP) with continuous state and action spaces is formulated for the VPP to consider future rewards brought by the contract statuses of DERs. The deep deterministic policy gradient (DDPG) algorithm is applied to solve the pricing problem in MDP form. To deal with the non-stationary training environment created by the competing VPP, a fictitious adversary method is put forward in this paper and combined with the DDPG algorithm for the first time. The proposed fictitious adversary method can help the VPP find competitive and robust pricing strategies under competition. Numerical results demonstrate the effectiveness of the proposed methodology in finding satisfactory pricing strategies that consider competitor behavior and the long-term value of DERs.
Keywords: Deep deterministic policy gradient, distributed energy resources, electricity markets, reinforcement learning, virtual power plants
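The "fictitious adversary" device in the abstract above echoes classical fictitious play: rather than reacting to the rival's latest move, the agent best-responds to the empirical average of the rival's past behavior, which smooths the non-stationarity the paper mentions. The sketch below illustrates only that averaging idea with a made-up payoff model; the paper's actual method couples it with DDPG:

```python
def best_response(avg_rival_price, cost=0.4, cap=1.0):
    """Undercut the rival's average price slightly, but never price below cost."""
    return min(cap, max(cost, avg_rival_price - 0.01))

def fictitious_play(rounds=50, rival_start=0.8):
    """Iterate best responses against the running average of rival prices."""
    history = [rival_start]
    price = rival_start
    for _ in range(rounds):
        avg = sum(history) / len(history)  # empirical average of past rival prices
        price = best_response(avg)
        # simplifying assumption: the rival mirrors the same strategy,
        # so our own price feeds the shared history
        history.append(price)
    return price

final_price = fictitious_play()
```

Because the response targets a slowly-moving average rather than the last price, the iteration drifts smoothly instead of oscillating, which is the stabilization effect sought against a co-learning competitor.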
Safety-Aware Reinforcement Learning for Self-Healing Intrusion Detection in 5G-Enabled IoT Networks
17
Authors: Wajdan Al Malwi, Fatima Asiri, Nazik Alturki, Noha Alnazzawi, Dimitrios Kasimatis, Nikolaos Pitropakis 《Computers, Materials & Continua》 2026, No. 5, pp. 2020-2042 (23 pages)
The expansion of 5G-enabled Internet of Things (IoT) networks, while enabling transformative applications, significantly increases the attack surface and necessitates security solutions that extend beyond traditional intrusion detection. Existing intrusion detection systems (IDSs) mainly operate in an open-loop manner, excelling at classification but lacking the ability for autonomous, safety-aware remediation. This gap is particularly critical in 5G environments, where manual intervention is too slow and naive automation can lead to severe service disruptions. To address this issue, we propose a novel Self-Healing Intrusion Detection System (SH-IDS) framework that develops a closed-loop cyber defense mechanism. The main technical contribution is the integration of a deep neural network-based threat detector, which offers uncertainty-quantified predictions, with a safety-aware reinforcement learning (RL) engine formulated as a Constrained Markov Decision Process (CMDP). The CMDP explicitly models operational safety as cost constraints, and a new runtime safety shield actively adjusts any unsafe action proposed by the RL agent to the nearest safe alternative, ensuring operational integrity. Additionally, we introduce a composite utility function for the comprehensive evaluation of the system. Empirical analysis on the 5G-NIDD dataset demonstrates the superior performance of our framework: the detector achieves 98.26% accuracy, while the safe RL agent learns effective mitigation policies. Importantly, the safety shield blocked up to 70 unsafe actions under strict constraints, and analysis of the learned Q-tables confirms that the agent internalizes safety, avoiding overly disruptive actions, such as isolating nodes for minor threats. The system also maintains high efficiency with a compact model size of 121.7 KB and sub-millisecond latency, confirming its practical deployability for real-time 5G-IoT security.
Keywords: Cybersecurity, Internet of Things, intrusion detection, 5G/6G security, reinforcement learning
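The runtime safety shield described in the abstract above substitutes any over-budget action with "the nearest safe alternative". A minimal sketch of that projection over a severity-ordered action ladder might look as follows; the action names and cost figures are illustrative assumptions, not taken from the paper:

```python
# Actions ordered from least to most disruptive; cost = assumed fraction
# of service impacted if the action is taken (hypothetical values).
ACTIONS = [("log_only", 0.0), ("rate_limit", 0.2),
           ("quarantine", 0.5), ("isolate_node", 0.9)]

def safety_shield(proposed, cost_budget):
    """Return the proposed action if its cost fits the budget, otherwise
    project it down to the most disruptive action still within budget."""
    costs = dict(ACTIONS)
    if costs[proposed] <= cost_budget:
        return proposed
    # walk the severity ladder top-down to find the nearest safe fallback
    for name, cost in reversed(ACTIONS):
        if cost <= cost_budget:
            return name
    return "log_only"  # logging is always allowed

# e.g. the agent proposes full isolation but the constraint only allows
# medium-severity responses, so the shield downgrades to quarantine
mitigation = safety_shield("isolate_node", cost_budget=0.5)
```

This matches the CMDP framing in spirit: the cost budget plays the role of the constraint threshold, and the shield enforces it at execution time regardless of what the learned policy proposes.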
Adaptive optimal tracking control for underactuated surface vessels using extended state observer and reinforcement learning
18
Authors: Yinkun Li, Yawen Zhou, Yufeng Zhou, Li Chai 《Journal of Automation and Intelligence》 2026, No. 1, pp. 24-34 (11 pages)
This paper investigates adaptive optimal tracking control (AOTC) for underactuated surface vessels (USVs). Compared to the majority of existing studies, the control strategy in this paper innovatively combines an extended state observer (ESO) with reinforcement learning (RL). The designed ESO has high estimation accuracy and robust disturbance rejection capabilities for the unmeasurable information of USVs. To obtain the AOTC, actor-critic (AC) networks based on RL are constructed to solve the Hamilton–Jacobi–Bellman (HJB) equations. Due to the uncertainties, it is challenging to obtain the optimal controller by directly solving the HJB equations. To address this issue, this paper employs neural networks (NNs) to approximate the uncertainties and solves for the optimal controller via AC-RL and the ESO. In addition, the adaptive parameters of the optimal controller are trained in parallel with the AC networks, which ensures that the trained networks can further improve tracking performance. The boundedness of the AOTC for USVs is shown by the Lyapunov stability theorem. Finally, simulation results demonstrate the effectiveness of the proposed algorithm.
Keywords: Extended state observer, actor-critic networks, reinforcement learning, backstepping method, underactuated surface vessel
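The core trick of an extended state observer, as used in the abstract above, is to treat the unknown lumped disturbance as an extra state and estimate it from output measurements alone. The following is a textbook-style scalar sketch (plant x' = u + d), not the vessel model from the paper; the gains l1, l2 are illustrative:

```python
def eso_step(x_hat, d_hat, x_meas, u, dt=0.01, l1=40.0, l2=400.0):
    """One Euler step of a linear ESO: estimate state x and disturbance d."""
    e = x_meas - x_hat          # innovation from the measured output
    x_hat = x_hat + dt * (d_hat + u + l1 * e)  # model + disturbance estimate
    d_hat = d_hat + dt * (l2 * e)              # extended (disturbance) state
    return x_hat, d_hat

# Simulate a plant with a constant unknown disturbance d = 1.5 and u = 0;
# the observer should converge so that d_hat tracks 1.5.
x, x_hat, d_hat = 0.0, 0.0, 0.0
for _ in range(2000):
    x += 0.01 * (0.0 + 1.5)     # true plant: x' = u + d
    x_hat, d_hat = eso_step(x_hat, d_hat, x, 0.0)
```

Once d_hat converges, a controller can simply subtract it from the control input, which is the disturbance-rejection role the ESO plays alongside the actor-critic networks in the paper.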
Peer-to-Peer Energy Trading for Multi-microgrids via Stackelberg Game and Multi-agent Deep Reinforcement Learning
19
Authors: Pengjie Zhao, Junyong Wu, Fashun Shi, Lusu Li, Baoqing Li, Yi Wang 《CSEE Journal of Power and Energy Systems》 2026, No. 1, pp. 187-199 (13 pages)
This paper proposes a novel framework based on the Stackelberg game and deep reinforcement learning for multi-microgrids (MGs) to achieve peer-to-peer (P2P) energy trading. A multi-leader, multi-follower Stackelberg game is utilized to model the P2P energy trading process, and the Stackelberg equilibrium (SE) is regarded as the P2P optimal trading strategy. A two-stage privacy-preserving solution technique combining data-driven and model-driven methods is developed to obtain the SE. Specifically, the energy storage scheduling problem in MGs is formulated as a Markov decision process with discrete periods, and a multi-action single-observation deep deterministic policy gradient (MASO-DDPG) algorithm is proposed to tackle the optimal scheduling of energy storage in the first stage. Based on the optimal storage scheduling, a closed-form, model-driven expression for the SE is derived, and a distributed SE solution technique (DSET) is developed to obtain the SE in the second stage. Case studies involving a 4-microgrid system demonstrate that the P2P electricity price obtained by the two-stage method, as a novel pricing mechanism, can reasonably regulate microgrid operation modes and improve the income of microgrids participating in the P2P market, which verifies the effectiveness and superiority of the proposed P2P energy trading model and two-stage solution method.
Keywords: Deep reinforcement learning, Markov decision process, microgrid, peer-to-peer (P2P), Stackelberg equilibrium
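The leader-follower structure behind a Stackelberg equilibrium, as used in the abstract above, can be illustrated with a toy bilevel pricing problem: the leader enumerates candidate prices, the follower best-responds with a quantity, and the leader keeps the price that maximizes its own profit given that response. The demand and cost model below is entirely hypothetical and only shows the solution structure, not the paper's closed-form derivation:

```python
def follower_quantity(price, demand_intercept=10.0, slope=2.0):
    """Follower's best response: linear demand, clipped at zero."""
    return max(0.0, demand_intercept - slope * price)

def leader_best_price(cost=1.0, candidates=None):
    """Leader anticipates the follower's response and picks the best price."""
    if candidates is None:
        candidates = [cost + k / 10 for k in range(40)]  # coarse price grid
    best_price, best_profit = cost, float("-inf")
    for p in candidates:
        q = follower_quantity(p)        # anticipated follower reaction
        profit = (p - cost) * q         # leader's resulting payoff
        if profit > best_profit:
            best_price, best_profit = p, profit
    return best_price, best_profit

price, profit = leader_best_price()
```

For this linear model the analytic optimum is p = 3 with profit 8, which the grid search recovers; the paper replaces such enumeration with a derived closed-form SE plus a distributed solution technique.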
Deep reinforcement learning-based adaptive collision avoidance method for UAV in joint operational airspace
20
Authors: Yan Shen, Xuejun Zhang, Yan Li, Weidong Zhang 《Defence Technology(防务技术)》 2026, No. 2, pp. 142-159 (18 pages)
As joint operations have become a key trend in modern military development, unmanned aerial vehicles (UAVs) play an increasingly important role in enhancing the intelligence and responsiveness of combat systems. However, the heterogeneity of aircraft, partial observability, and dynamic uncertainty in operational airspace pose significant challenges to autonomous collision avoidance with traditional methods. To address these issues, this paper proposes an adaptive collision avoidance approach for UAVs based on deep reinforcement learning. First, a unified uncertainty model incorporating dynamic wind fields is constructed to capture the complexity of joint operational environments. Then, to effectively handle the heterogeneity between manned and unmanned aircraft and the limitations of dynamic observations, a sector-based partial observation mechanism is designed. A Dynamic Threat Prioritization Assessment algorithm is also proposed to evaluate potential collision threats along multiple dimensions, including time to closest approach, minimum separation distance, and aircraft type. Furthermore, a Hierarchical Prioritized Experience Replay (HPER) mechanism is introduced, which classifies experience samples into high, medium, and low priority levels to preferentially sample critical experiences, thereby improving learning efficiency and accelerating policy convergence. Simulation results show that the proposed HPER-D3QN algorithm outperforms existing methods in terms of learning speed, environmental adaptability, and robustness, significantly enhancing collision avoidance performance and convergence rate. Finally, transfer experiments on a high-fidelity battlefield airspace simulation platform validate the proposed method's deployment potential and practical applicability in complex, real-world joint operational scenarios.
Keywords: Unmanned aerial vehicle, collision avoidance, deep reinforcement learning, joint operational airspace, hierarchical prioritized experience replay
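The three-tier replay scheme named in the abstract above can be sketched as a buffer that buckets transitions into high/medium/low priority and samples preferentially from the upper tiers. The bucketing criterion below (absolute TD error against fixed thresholds) and the sampling ratios are illustrative assumptions; the paper's HPER may define priorities differently:

```python
import random

class HierarchicalReplay:
    """Toy hierarchical prioritized replay: three tiers, ratio-based sampling."""

    def __init__(self, hi_thresh=1.0, mid_thresh=0.3, ratios=(0.6, 0.3, 0.1)):
        self.tiers = {"high": [], "medium": [], "low": []}
        self.hi_thresh, self.mid_thresh = hi_thresh, mid_thresh
        self.ratios = dict(zip(("high", "medium", "low"), ratios))

    def add(self, transition, td_error):
        """Bucket a transition by the magnitude of its TD error."""
        if abs(td_error) >= self.hi_thresh:
            self.tiers["high"].append(transition)
        elif abs(td_error) >= self.mid_thresh:
            self.tiers["medium"].append(transition)
        else:
            self.tiers["low"].append(transition)

    def sample(self, batch_size):
        """Draw a batch with fixed per-tier quotas, skipping empty tiers."""
        batch = []
        for tier, ratio in self.ratios.items():
            pool = self.tiers[tier]
            k = min(len(pool), round(batch_size * ratio))
            batch.extend(random.sample(pool, k))
        return batch

buf = HierarchicalReplay()
buf.add(("s0", "a0", 1.0, "s1"), td_error=2.0)   # high-priority transition
buf.add(("s1", "a1", 0.0, "s2"), td_error=0.5)   # medium
buf.add(("s2", "a2", 0.0, "s3"), td_error=0.1)   # low
```

In a full D3QN training loop, priorities would be refreshed as TD errors change; the fixed buckets here only show the tiered-sampling mechanism itself.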