Journal Articles
82,403 articles found
Toward Collaborative and Adaptive Learning: A Survey of Multi-agent Reinforcement Learning in Education
1
Authors: Sirine Bouguettaya, Ouarda Zedadra, Francesco Pupo, Giancarlo Fortino. Artificial Intelligence Science and Engineering, 2026, No. 1, pp. 1-19 (19 pages)
In recent years, researchers have leveraged single-agent reinforcement learning to boost educational outcomes and deliver personalized interventions; yet this paradigm provides no capacity for inter-agent interaction. Multi-agent reinforcement learning (MARL) overcomes this limitation by allowing several agents to learn simultaneously within a shared environment, each choosing actions that maximize its own or the group's rewards. By explicitly modeling and exploiting agent-to-agent dynamics, MARL can align those interactions with pedagogical goals such as peer tutoring, collaborative problem-solving, or gamified competition, thus opening richer avenues for adaptive and socially informed learning experiences. This survey investigates the impact of MARL on educational outcomes by examining evidence of its effectiveness in enhancing learner performance, engagement, and equity, and in reducing teacher workload compared to single-agent or traditional approaches. It explores the educational domains and pedagogical problems addressed by MARL, identifies the algorithmic families used, and analyzes their influence on learning. The review also assesses experimental settings and evaluation metrics to determine ecological validity, and outlines current challenges and future research directions in applying MARL to education.
Keywords: reinforcement learning; multi-agent reinforcement learning; agentic AI; education; generative AI
An Improved Reinforcement Learning-Based 6G UAV Communication for Smart Cities
2
Authors: Vi Hoai Nam, Chu Thi Minh Hue, Dang Van Anh. Computers, Materials & Continua, 2026, No. 1, pp. 2030-2044 (15 pages)
Unmanned Aerial Vehicles (UAVs) have become integral components in smart city infrastructures, supporting applications such as emergency response, surveillance, and data collection. However, the high mobility and dynamic topology of Flying Ad Hoc Networks (FANETs) present significant challenges for maintaining reliable, low-latency communication. Conventional geographic routing protocols often struggle in situations where link quality varies and mobility patterns are unpredictable. To overcome these limitations, this paper proposes an improved routing protocol based on reinforcement learning. This new approach integrates Q-learning with mechanisms that are both link-aware and mobility-aware. The proposed method optimizes the selection of relay nodes by using an adaptive reward function that takes into account energy consumption, delay, and link quality. Additionally, a Kalman filter is integrated to predict UAV mobility, improving the stability of communication links under dynamic network conditions. Simulation experiments were conducted using realistic scenarios, varying the number of UAVs to assess scalability. An analysis was conducted on key performance metrics, including the packet delivery ratio, end-to-end delay, and total energy consumption. The results demonstrate that the proposed approach significantly improves the packet delivery ratio by 12%-15% and reduces delay by up to 25.5% when compared to conventional GEO and QGEO protocols. However, this improvement comes at the cost of higher energy consumption due to additional computations and control overhead. Despite this trade-off, the proposed solution ensures reliable and efficient communication, making it well-suited for large-scale UAV networks operating in complex urban environments.
Keywords: UAV; FANET; smart cities; reinforcement learning; Q-learning
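The abstract above describes Q-learning over relay candidates with an adaptive reward blending link quality, delay, and energy. A minimal sketch of that pattern is given below; the reward weights, state labels, and relay names are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch (not the paper's code): tabular Q-learning for relay
# selection, where the reward rewards link quality and penalizes delay and
# energy cost. Weights w are assumed for illustration.

RELAYS = ("r1", "r2")  # hypothetical candidate relay nodes

def composite_reward(link_quality, delay, energy, w=(1.0, 0.5, 0.3)):
    """Higher link quality is good; delay and energy are penalized."""
    return w[0] * link_quality - w[1] * delay - w[2] * energy

def q_update(Q, state, relay, reward, next_state, alpha=0.1, gamma=0.9):
    """Standard Q-learning step over (state, candidate-relay) pairs."""
    best_next = max(Q.get((next_state, r), 0.0) for r in RELAYS)
    old = Q.get((state, relay), 0.0)
    Q[(state, relay)] = old + alpha * (reward + gamma * best_next - old)
    return Q

Q = {}
r = composite_reward(link_quality=0.9, delay=0.2, energy=0.1)
Q = q_update(Q, "s0", "r1", r, "s1")
```

The Kalman-filter mobility prediction mentioned in the abstract would feed into the state (e.g., predicted neighbor positions), leaving this update rule unchanged.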
Reinforcement learning for muon scattering tomography enhancement
3
Authors: Yi-Ni Wu, Yuan-Yuan Liu, Li Wang, Jian-Jie Zhang, Ning Su, Wen-Wan Ding, Xin Zhao, Zhi Zhou, Peng Zheng, Jian-Ping Cheng. Nuclear Science and Techniques, 2026, No. 5, pp. 182-198 (17 pages)
Muon scattering tomography (MST) is a powerful noninvasive imaging technique with significant applications in nuclear material detection and security screening. Traditional MST usually relies on the point of closest approach (PoCA) algorithm to reconstruct images from muon scattering data; however, PoCA often suffers from suboptimal image clarity and resolution. To overcome these challenges, we propose a novel approach that leverages reinforcement learning (RL) to enhance MST reconstruction, termed the μRL-enhanced method. By framing the MST optimization task as an RL problem, we developed an intelligent agent capable of dynamically adjusting the key PoCA parameters. The agent is trained using a multi-objective reward function that guides the optimization toward higher-quality reconstructions. Our experimental results show that the μRL-enhanced method significantly outperforms the traditional PoCA baseline across multiple benchmark metrics. Specifically, the proposed approach on average attains a 307% improvement in the intersection over union (IoU), a 79% increase in the structural similarity index measure (SSIM), and an 8.4% enhancement in the peak signal-to-noise ratio (PSNR) across four experiments. Furthermore, when benchmarked against the maximum likelihood scattering and displacement (MLSD) algorithm, the μRL-enhanced method offers modest gains in PSNR and IoU, together with a one-third increase in SSIM. These improvements demonstrate the enhanced reconstruction accuracy and structural fidelity of the μRL-enhanced method, highlighting its potential to advance MST technologies and their applications.
Keywords: muon scattering tomography; reinforcement learning; Q-learning; PoCA
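The agent described above is trained on a multi-objective reward over image-quality metrics. A hypothetical sketch of such a reward, scoring relative gains in IoU, SSIM, and PSNR over a baseline reconstruction (the weights are assumptions, not the paper's values):

```python
# Hypothetical multi-objective reward of the kind the abstract describes:
# the agent's PoCA parameter choice is scored by how much it improves
# (IoU, SSIM, PSNR) over a baseline reconstruction.

def multi_objective_reward(metrics, baseline, weights=(0.5, 0.3, 0.2)):
    """Reward = weighted sum of relative gains in (IoU, SSIM, PSNR)."""
    gains = [(m - b) / b for m, b in zip(metrics, baseline)]
    return sum(w * g for w, g in zip(weights, gains))

# Example: reconstruction doubles IoU, +50% SSIM, +10% PSNR vs. baseline.
r = multi_objective_reward(metrics=(0.40, 0.60, 22.0),
                           baseline=(0.20, 0.40, 20.0))
```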
Computer Modeling of Pipeline Repair Reinforcement with Composite Bandages
4
Authors: Maria Tanase, Gennadiy Lvov. Computer Modeling in Engineering & Sciences, 2026, No. 2, pp. 296-315 (20 pages)
The increasing occurrence of corrosion-related damage in steel pipelines has led to the growing use of composite-based repair techniques as an efficient alternative to traditional replacement methods. Computer modeling and structural analysis were performed for the repair reinforcement of a steel pipeline with a composite bandage. A preliminary analysis of possible contact interaction schemes was implemented based on the theory of cylindrical shells, taking into account transverse shear deformations. The finite element method was used for a detailed study of the stress state of the composite bandage and the reinforced section of the pipeline. The limit state of the reinforced section was assessed based on the von Mises criterion for steel and the Tsai-Wu criterion for composites. The effectiveness of the repair was demonstrated on a pipeline whose wall thickness had decreased by 20% as a result of corrosion damage. At a nominal pressure of P = 6 MPa, the maximum normal stress in the weakened area reached 381 MPa. The installation of a composite bandage reduced this stress to 312 MPa, making the repaired section virtually as strong as the undamaged pipeline. Due to the linearity of the problem, the results obtained can be easily used to find critical internal pressure values.
Keywords: numerical analysis; pipeline repair reinforcement; composite bandages
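The von Mises check mentioned above can be illustrated with thin-shell membrane stresses in a pressurized pipe; the geometry below is an assumed example, not the paper's model (which uses full finite-element analysis rather than these closed-form membrane formulas).

```python
# Minimal sketch: thin-walled pipe membrane stresses under internal
# pressure, combined by the plane-stress von Mises criterion. Diameter,
# wall thickness, and the 20% corrosion loss are illustrative assumptions.
import math

def pipe_von_mises(p_mpa, diameter_mm, wall_mm):
    """Hoop and axial membrane stresses combined via von Mises (MPa)."""
    hoop = p_mpa * diameter_mm / (2.0 * wall_mm)   # circumferential stress
    axial = p_mpa * diameter_mm / (4.0 * wall_mm)  # longitudinal stress
    return math.sqrt(hoop**2 - hoop * axial + axial**2)

# A 20% corroded wall (8 mm instead of 10 mm) raises the stress markedly.
s_intact = pipe_von_mises(6.0, 500.0, 10.0)
s_corroded = pipe_von_mises(6.0, 500.0, 8.0)
```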
GRA: Graph-based reward aggregation for cooperative multi-agent reinforcement learning
5
Authors: Jingcheng Tang, Peng Zhou, He Bai, Gangshan Jing. Journal of Automation and Intelligence, 2026, No. 1, pp. 46-56 (11 pages)
Multi-agent reinforcement learning (MARL) has proven its effectiveness in cooperative multi-agent systems (MASs) but still faces issues with the curse of dimensionality and learning efficiency. The main difficulty is caused by the strong inter-agent coupling inherent in an MARL problem, which has yet to be fully exploited by existing algorithms. In this work, we recognize a learning graph characterizing the dependence between individual rewards and individual policies. We then propose a graph-based reward aggregation (GRA) method, which utilizes the inherent coupling relationship among agents to eliminate redundant information. Specifically, GRA passes information among cooperating agents through graph attention networks to obtain aggregated rewards that contribute to the fitting of the value function, enabling each agent to learn a decentralized executable cooperation policy. In addition, we propose a variant of GRA, named GRA-decen, which achieves decentralized training and decentralized execution (DTDE) when each agent only has access to information from a subset of agents during learning. We conduct experiments in different environments and demonstrate the practicality and scalability of our algorithms.
Keywords: networked system; multi-agent reinforcement learning; graph-based RL
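The graph-attention reward aggregation described above can be sketched as a softmax-weighted mix of neighbors' rewards. This is an assumed illustration of the general mechanism, not the paper's network (which learns the attention scores end-to-end):

```python
# Illustrative sketch: an agent re-weights its neighbors' rewards by a
# softmax over compatibility scores, then mixes the result with its own
# reward. The 0.5/0.5 mixing and the raw scores are assumptions.
import math

def aggregate_reward(own_reward, neighbor_rewards, scores):
    """Softmax-attention mix of own and neighbor rewards."""
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    attn = [e / z for e in exps]
    mixed = sum(a * r for a, r in zip(attn, neighbor_rewards))
    return 0.5 * own_reward + 0.5 * mixed

# Equal scores reduce to a plain average of the neighbors' rewards.
r = aggregate_reward(1.0, [0.0, 2.0], [0.3, 0.3])
```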
Robust Reinforcement Learning: Methods, Benchmarks and Challenges
6
Authors: Jinlei Gu, Mengchu Zhou, Xiwang Guo, Yebin Wang. Artificial Intelligence Science and Engineering, 2026, No. 1, pp. 20-35 (16 pages)
Reinforcement learning (RL), as an important branch of machine learning, has recently attracted extensive attention and achieved success in many applications. Its main idea is to enable agents to continuously learn to make optimal decisions by trying to maximize a reward function for their actions and interactions with the environment. However, making high-quality decisions in complex and uncertain real-world scenarios is a challenging task. Interference and attacks in such scenarios tend to destroy existing strategies. Maintaining RL's optimal performance across diverse cases and adapting to changing environments remains an important challenge. This article presents a comprehensive review of recent advancements in robust reinforcement learning (RRL), and analyzes them from the perspectives of challenges, methodologies, and applications. It systematically evaluates current progress in RRL and summarizes the commonly used benchmark platforms. Finally, several open challenges are discussed to stimulate further research and guide future developments in this area.
Keywords: robust reinforcement learning; robust enhancement; environment randomization; adversarial training
Research on UAV-MEC Cooperative Scheduling Algorithms Based on Multi-Agent Deep Reinforcement Learning
7
Authors: Yonghua Huo, Ying Liu, Anni Jiang, Yang Yang. Computers, Materials & Continua, 2026, No. 3, pp. 1823-1850 (28 pages)
With the advent of sixth-generation mobile communications (6G), space-air-ground integrated networks have become mainstream. This paper focuses on collaborative scheduling for mobile edge computing (MEC) under a three-tier heterogeneous architecture composed of mobile devices, unmanned aerial vehicles (UAVs), and macro base stations (BSs). This scenario typically faces fast channel fading, dynamic computational loads, and energy constraints, whereas classical queuing-theoretic or convex-optimization approaches struggle to yield robust solutions in highly dynamic settings. To address this issue, we formulate a multi-agent Markov decision process (MDP) for an air-ground-fused MEC system, unify link selection, bandwidth/power allocation, and task offloading into a continuous action space, and propose a joint scheduling strategy based on an improved MATD3 algorithm. The improvements include Alternating Layer Normalization (ALN) in the actor to suppress gradient variance, Residual Orthogonalization (RO) in the critic to reduce the correlation between the twin Q-value estimates, and a dynamic-temperature reward to enable adaptive trade-offs during training. On a multi-user, dual-link simulation platform, we conduct ablation and baseline comparisons. The results reveal that the proposed method has better convergence and stability. Compared with MADDPG, TD3, and DSAC, our algorithm achieves more robust performance across key metrics.
Keywords: UAV-MEC networks; multi-agent deep reinforcement learning; MATD3; task offloading
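MATD3 inherits TD3's twin-critic bootstrap, which the improvements above build on: the target uses the minimum of the two critics' next-state estimates to curb overestimation. A minimal sketch with toy numbers:

```python
# Sketch of the clipped double-Q (TD3-style) target underlying MATD3:
# y = r + gamma * min(Q1', Q2') for non-terminal transitions.
# The reward and Q-values below are toy numbers for illustration.

def td3_target(reward, q1_next, q2_next, gamma=0.99, done=False):
    """Bootstrap from the more pessimistic of the two critics."""
    if done:
        return reward
    return reward + gamma * min(q1_next, q2_next)

y = td3_target(reward=1.0, q1_next=10.0, q2_next=8.0)
```

Using the minimum rather than a single critic's estimate is what tempers the value overestimation that plagues vanilla actor-critic methods.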
Ride-hailing Electric Vehicle Dispatching for Resilience Reserve Enhancement: An Interactive Deep Reinforcement Learning Approach
8
Authors: Ran Tao, Dongmei Zhao, Haoxiang Wang, Yinghui Wang, Xuan Xia. CSEE Journal of Power and Energy Systems, 2026, No. 1, pp. 448-465 (18 pages)
Ride-hailing electric vehicles are mobile resources with dispatch potential to improve resilience. However, they have not been well investigated because their charging and order-serving are affected or managed by both the power grid dispatching center and the ride-hailing platform. Effective pre-strategies can improve the prevention ability for high-impact and low-probability (HILP) events and provide the foundation for measures in the response and restoration stages. First, this paper proposes a resilience reserve to expand the existing research on power system resilience. Second, it puts forward an interactive deep reinforcement learning method, which considers the interests of both the power grid dispatching center and the ride-hailing platform. The method improves the resilience reserve by achieving order dispatch, orderly charging management of ride-hailing electric vehicles, and a pricing strategy for charging stations. Finally, a practical example covering about 107.32 km² in the center of Chengdu verifies that the proposed method improves the resilience reserve of the power system without obviously damaging the interests of the ride-hailing platform.
Keywords: charging scheduling; electric vehicle; power system resilience; reinforcement learning; ride-hailing
Implementation of Human-AI Interaction in Reinforcement Learning: Literature Review and Case Studies
9
Authors: Shaoping Xiao, Zhaoan Wang, Junchao Li, Caden Noeller, Jiefeng Jiang, Jun Wang. Computers, Materials & Continua, 2026, No. 2, pp. 1-62 (62 pages)
The integration of human factors into artificial intelligence (AI) systems has emerged as a critical research frontier, particularly in reinforcement learning (RL), where human-AI interaction (HAII) presents both opportunities and challenges. As RL continues to demonstrate remarkable success in model-free and partially observable environments, its real-world deployment increasingly requires effective collaboration with human operators and stakeholders. This article systematically examines HAII techniques in RL through both theoretical analysis and practical case studies. We establish a conceptual framework built upon three fundamental pillars of effective human-AI collaboration: computational trust modeling, system usability, and decision understandability. Our comprehensive review organizes HAII methods into five key categories: (1) learning from human feedback, including various shaping approaches; (2) learning from human demonstration through inverse RL and imitation learning; (3) shared autonomy architectures for dynamic control allocation; (4) human-in-the-loop querying strategies for active learning; and (5) explainable RL techniques for interpretable policy generation. Recent state-of-the-art works are critically reviewed, with particular emphasis on advances incorporating large language models in human-AI interaction research. To illustrate some concepts, we present three detailed case studies: an empirical trust model for farmers adopting AI-driven agricultural management systems, the implementation of ethical constraints in robotic motion planning through human-guided RL, and an experimental investigation of human trust dynamics using a multi-armed bandit paradigm. These applications demonstrate how HAII principles can enhance RL systems' practical utility while bridging the gap between theoretical RL and real-world human-centered applications, ultimately contributing to more deployable and socially beneficial intelligent systems.
Keywords: human-AI interaction; reinforcement learning; partially observable environments; trust model; ethical constraints
Evaluation of Reinforcement Learning-Based Adaptive Modulation in Shallow Sea Acoustic Communication
10
Authors: Yifan Qiu, Xiaoyu Yang, Feng Tong, Dongsheng Chen. Journal of Harbin Engineering University (English Edition), 2026, No. 1, pp. 292-299 (8 pages)
While reinforcement learning-based underwater acoustic adaptive modulation shows promise for enabling environment-adaptive communication, as supported by extensive simulation-based research, its practical performance remains underexplored in field investigations. To evaluate the practical applicability of this emerging technique in adverse shallow sea channels, a field experiment was conducted using three communication modes: orthogonal frequency division multiplexing (OFDM), M-ary frequency-shift keying (MFSK), and direct sequence spread spectrum (DSSS) for reinforcement learning-driven adaptive modulation. Specifically, a Q-learning method is used to select the optimal modulation mode according to the channel quality quantified by signal-to-noise ratio, multipath spread length, and Doppler frequency offset. Experimental results demonstrate that the reinforcement learning-based adaptive modulation scheme outperformed fixed threshold detection in terms of total throughput and average bit error rate, surpassing conventional adaptive modulation strategies.
Keywords: adaptive modulation; shallow sea; underwater acoustic modulation; reinforcement learning
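The mode-selection loop described above can be sketched as a Q-table keyed on a quantized channel state built from SNR, multipath spread, and Doppler offset. The quantization thresholds and Q-values below are assumptions for illustration, not the experiment's calibration:

```python
# Illustrative sketch: quantize the channel into a discrete state and pick
# the modulation mode with the highest learned Q-value for that state.

MODES = ("OFDM", "MFSK", "DSSS")

def channel_state(snr_db, multipath_ms, doppler_hz):
    """Coarse 3-level quantization of each channel-quality feature."""
    q = lambda x, lo, hi: 0 if x < lo else (1 if x < hi else 2)
    return (q(snr_db, 5, 15), q(multipath_ms, 5, 20), q(doppler_hz, 2, 8))

def select_mode(Q, state):
    """Greedy mode selection; unseen (state, mode) pairs default to 0."""
    return max(MODES, key=lambda m: Q.get((state, m), 0.0))

# Toy Q-table: high-rate OFDM in clean channels, robust DSSS in harsh ones.
Q = {((2, 0, 0), "OFDM"): 1.5, ((0, 2, 2), "DSSS"): 1.2}
mode_good = select_mode(Q, channel_state(20, 2, 1))    # benign channel
mode_harsh = select_mode(Q, channel_state(3, 30, 12))  # adverse channel
```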
A Regional Distribution Network Coordinated Optimization Strategy for Electric Vehicle Clusters Based on Parametric Deep Reinforcement Learning
11
Authors: Lei Su, Wanli Feng, Cao Kan, Mingjiang Wei, Jihai Wang, Pan Yu, Lingxiao Yang. Energy Engineering, 2026, No. 3, pp. 195-214 (20 pages)
To address the high costs and operational instability of distribution networks caused by the large-scale integration of distributed energy resources (DERs) such as photovoltaic (PV) systems, wind turbines (WT), and energy storage (ES) devices, and the increased grid load fluctuations and safety risks due to uncoordinated electric vehicle (EV) charging, this paper proposes a novel dual-scale hierarchical collaborative optimization strategy. This strategy decouples system-level economic dispatch from distributed EV agent control, effectively resolving the resource coordination conflicts arising from the high computational complexity and poor scalability of existing centralized optimization, or from the reliance on local information in fully decentralized frameworks. At the lower level, an EV charging and discharging model with a hybrid discrete-continuous action space is established and optimized using an improved Parameterized Deep Q-Network (PDQN) algorithm, which directly handles mode selection and power regulation while embedding physical constraints to ensure safety. At the upper level, microgrid (MG) operators adopt a dynamic pricing strategy optimized through deep reinforcement learning (DRL) to maximize economic benefits and achieve peak-valley shaving. Simulation results show that the proposed strategy outperforms traditional methods, reducing the total operating cost of the MG by 21.6%, decreasing the peak-to-valley load difference by 33.7%, reducing the number of voltage limit violations by 88.9%, and lowering the average electricity cost for EV users by 15.2%. This method yields a win-win outcome for operators and users, providing a reliable and efficient scheduling solution for distribution networks with high renewable energy penetration.
Keywords: power system; regional distributed energy; electric vehicle; deep reinforcement learning; collaborative optimization
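The hybrid discrete-continuous action space mentioned above pairs each discrete mode with a continuous power parameter; a PDQN-style agent picks the mode whose Q-value is highest given that mode's proposed parameter, then applies physical limits. The mode names, toy Q-function, and 7 kW charger rating below are assumptions for illustration:

```python
# Sketch of PDQN-style hybrid action selection: argmax over discrete modes
# k of Q(s, k, x_k), where x_k is the continuous parameter proposed for
# mode k, followed by a physical power-limit clip.

MODES = ("charge", "idle", "discharge")

def clip_power(p, p_max=7.0):
    """Embed the physical constraint: |power| bounded by charger rating."""
    return max(-p_max, min(p_max, p))

def select_action(q_fn, state, proposed_params):
    """Return (best mode, clipped power parameter for that mode)."""
    best = max(MODES, key=lambda m: q_fn(state, m, proposed_params[m]))
    return best, clip_power(proposed_params[best])

# Toy Q: prefer discharging hard when the price signal (state) is high.
q_fn = lambda s, m, x: s * x if m == "discharge" else 0.1
mode, power = select_action(
    q_fn, state=1.0,
    proposed_params={"charge": 5.0, "idle": 0.0, "discharge": 9.0})
```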
A Deep Reinforcement Learning-Based Partitioning Method for Power System Parallel Restoration
12
Authors: Changcheng Li, Weimeng Chang, Dahai Zhang, Jinghan He. Energy Engineering, 2026, No. 1, pp. 243-264 (22 pages)
Effective partitioning is crucial for enabling parallel restoration of power systems after blackouts. This paper proposes a novel partitioning method based on deep reinforcement learning. First, the partitioning decision process is formulated as a Markov decision process (MDP) model to maximize modularity, while the key partitioning constraints on parallel restoration are taken into account. Second, based on the partitioning objective and constraints, the reward function of the partitioning MDP model adopts a relative deviation normalization scheme to reduce mutual interference between the reward and penalty terms. A soft bonus scaling mechanism is introduced to mitigate overestimation caused by abrupt jumps in the reward. Then, the deep Q-network method is applied to solve the partitioning MDP model and generate partitioning schemes, with two experience replay buffers employed to speed up training. Finally, case studies on the IEEE 39-bus test system demonstrate that the proposed method can generate a high-modularity partitioning result that meets all key partitioning constraints, thereby improving the parallelism and reliability of the restoration process. Moreover, simulation results demonstrate that an appropriate discount factor is crucial for ensuring both the convergence speed and the stability of the partitioning training.
Keywords: partitioning method; parallel restoration; deep reinforcement learning; experience replay buffer; partitioning modularity
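The objective being maximized above is Newman modularity, Q = (1/2m) Σᵢⱼ (Aᵢⱼ − kᵢkⱼ/2m)·[cᵢ = cⱼ]. A minimal sketch of that computation on a toy graph (two triangles joined by a bridge, not the IEEE 39-bus system):

```python
# Sketch of the partitioning objective: Newman modularity of a labeled
# partition of an undirected graph. The toy graph is an assumption.

def modularity(edges, labels):
    """edges: undirected (u, v) pairs; labels: node -> community id."""
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    nodes = list(deg)
    q = 0.0
    for i in nodes:
        for j in nodes:
            if labels[i] != labels[j]:
                continue  # only same-community pairs contribute
            a_ij = sum(1 for u, v in edges if {u, v} == {i, j})
            q += a_ij - deg[i] * deg[j] / (2.0 * m)
    return q / (2.0 * m)

# Two triangles joined by one edge, split at the bridge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
q = modularity(edges, {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"})
```

Splitting at the bridge gives q = 5/14 ≈ 0.357; a partitioning agent earns reward for labelings that raise this value.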
Reinforcement learning based intelligent fault-tolerant assistance control for air-breathing hypersonic vehicles
13
Authors: Yi DENG, Liguo SUN, Yonghao PAN, Jiayi YAN, Yuanji LIU. Chinese Journal of Aeronautics, 2026, No. 3, pp. 584-600 (17 pages)
This paper proposes a novel reinforcement-learning-based intelligent fault-tolerant assistance control framework for Air-breathing Hypersonic Vehicles (AHVs). Considering that Reinforcement Learning (RL) has the advantage of exploring approximately optimal strategies, an RL-based assistance controller parallel to the fundamental controller is introduced to generate the assistance control signal. Specifically, the Incremental model-based Dual Heuristic Programming (IDHP) method is adopted to design the RL-based assistance control law. In order to extend the IDHP method to the assistance control scenario, a novel linear time-varying incremental model of the closed-loop augmented system, consisting of the AHV plant, the fundamental controller, and the command generator, is constructed and identified in real time. The RL agent continuously updates its neural-network weights according to the real-time identification information, and adjusts its control policy, i.e., the assistance control signal, after detecting sudden model changes. Simulation results have validated the effectiveness of the proposed intelligent fault-tolerant control scheme under various types of elevator faults and aerodynamic/configuration parameter uncertainties. The fault-tolerant ability of the whole control system with the proposed RL-based assistance controller is validated in both inner-loop attitude and outer-loop altitude tracking tasks.
Keywords: hypersonic vehicles; fault-tolerant control; reinforcement learning; heuristic programming; online learning
Multi-agent reinforcement learning with layered autonomy and collaboration for enhanced collaborative confrontation
14
Authors: Xiaoyu XING, Haoxiang XIA. Chinese Journal of Aeronautics, 2026, No. 2, pp. 370-388 (19 pages)
Addressing optimal confrontation methods in multi-agent attack-defense scenarios is a complex challenge. Multi-Agent Reinforcement Learning (MARL) provides an effective framework for tackling sequential decision-making problems, significantly enhancing swarm intelligence in maneuvering. However, applying MARL to unmanned swarms presents two primary challenges. First, defensive agents must balance autonomy with collaboration under limited perception while coordinating against adversaries. Second, current algorithms aim to maximize global or individual rewards, making them sensitive to fluctuations in enemy strategies and environmental changes, especially when rewards are sparse. To tackle these issues, we propose Multi-Agent Reinforcement Learning with Layered Autonomy and Collaboration (MARL-LAC) for collaborative confrontations. This algorithm integrates dual twin critics to mitigate the high variance associated with policy gradients. Furthermore, MARL-LAC employs layered autonomy and collaboration to address multi-objective problems, specifically learning a global reward function for the swarm alongside local reward functions for individual defensive agents. Experimental results demonstrate that MARL-LAC enhances decision-making and collaborative behaviors among agents, outperforming existing algorithms and underscoring the importance of layered autonomy and collaboration in multi-agent systems. The observed adversarial behaviors demonstrate that agents using MARL-LAC effectively maintain cohesive formations that conceal their intentions by confusing the offensive agent while successfully encircling the target.
Keywords: attack-defense confrontation; collaborative confrontation; autonomous agents; multi-agent systems; reinforcement learning; maneuvering decision-making
Dynamic Resource Allocation for Multi-Priority Requests Based on Deep Reinforcement Learning in Elastic Optical Network
15
Authors: Zhou Yang, Yang Xin, Sun Qiang, Yang Zhuojia. China Communications, 2026, No. 2, pp. 312-327 (16 pages)
As the types of traffic requests increase, the elastic optical network (EON) is considered a promising architecture for carrying multiple types of traffic requests simultaneously, including immediate reservation (IR) and advance reservation (AR). Various resource allocation schemes for IR/AR requests have been designed in EONs to reduce the bandwidth blocking probability (BBP). However, these schemes do not consider the different transmission requirements of IR requests and cannot maintain a low BBP for high-priority requests. In this paper, multi-priority is considered in the hybrid IR/AR request scenario. We modify the asynchronous advantage actor-critic (A3C) model and propose an A3C-assisted priority resource allocation (APRA) algorithm. APRA integrates the priority and transmission quality of IR requests to design the A3C reward function, then dynamically allocates dedicated resources for different IR requests according to time-varying requirements. By maximizing the reward, the transmission quality of IR requests can be matched with the priority, and a lower BBP for high-priority IR requests can be ensured. Simulation results show that APRA reduces the BBP of high-priority IR requests from 0.0341 to 0.0138, and the overall network operation gain is improved by 883 compared to the scheme without considering priority.
Keywords: deep reinforcement learning; dynamic resource allocation; elastic optical network; multi-priority requests
Enhanced multi-agent deep reinforcement learning for efficient task offloading and resource allocation in vehicular networks
16
Authors: Long Xu, Jiale Tan, Hongcheng Zhuang. Digital Communications and Networks, 2026, No. 1, pp. 66-75 (10 pages)
In response to the rising demand for low-latency, computation-intensive applications in vehicular networks, this paper proposes an adaptive task offloading approach for Vehicle-to-Everything (V2X) environments. Leveraging an enhanced Multi-Agent Deep Deterministic Policy Gradient (MADDPG) algorithm with an attention mechanism, the proposed approach optimizes computation offloading and resource allocation, aiming to minimize energy consumption and service delay. In this paper, vehicles dynamically offload computing-intensive tasks to both nearby vehicles through V2V links and roadside units through V2I links. The adaptive attention mechanism enables the system to prioritize relevant state information, leading to faster convergence. Simulations conducted in a realistic urban V2X scenario demonstrate that the proposed Attention-enhanced MADDPG (AT-MADDPG) algorithm significantly improves performance, achieving notable reductions in both energy consumption and latency compared to baseline algorithms, especially in high-demand, dynamic scenarios.
Keywords: computation offloading; vehicular networks; deep reinforcement learning; adaptive offloading; spectrum and power allocation
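The attention mechanism described above lets an agent weight neighbors' state encodings by relevance before acting. A minimal scaled dot-product attention sketch (the vectors are toy numbers; the paper's queries, keys, and values are learned by neural networks):

```python
# Illustrative sketch: scaled dot-product attention over neighbor state
# encodings. The neighbor whose key aligns with the agent's query
# dominates the aggregated context vector.
import math

def attention(query, keys, values):
    """Return the softmax-weighted mix of value vectors."""
    d = math.sqrt(len(query))
    scores = [sum(q * k for q, k in zip(query, key)) / d for key in keys]
    mx = max(scores)                      # stabilize the softmax
    exps = [math.exp(s - mx) for s in scores]
    z = sum(exps)
    w = [e / z for e in exps]
    return [sum(wi * v[j] for wi, v in zip(w, values))
            for j in range(len(values[0]))]

# The second neighbor's key aligns with the query, so its value dominates.
ctx = attention(query=[1.0, 0.0],
                keys=[[0.0, 1.0], [5.0, 0.0]],
                values=[[0.0, 0.0], [1.0, 1.0]])
```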
A Multi-Objective Deep Reinforcement Learning Algorithm for Computation Offloading in Internet of Vehicles
17
Authors: Junjun Ren, Guoqiang Chen, Zheng-Yi Chai, Dong Yuan. Computers, Materials & Continua, 2026, No. 1, pp. 2111-2136 (26 pages)
Vehicle Edge Computing (VEC) and Cloud Computing (CC) significantly enhance the processing efficiency of delay-sensitive and computation-intensive applications by offloading compute-intensive tasks from resource-constrained onboard devices to nearby Roadside Units (RSUs), thereby achieving lower delay and energy consumption. However, due to the limited storage capacity and energy budget of RSUs, it is challenging to meet the demands of the highly dynamic Internet of Vehicles (IoV) environment. Therefore, determining reasonable service caching and computation offloading strategies is crucial. To address this, this paper proposes a joint service caching scheme for cloud-edge collaborative IoV computation offloading. By modeling the dynamic optimization problem as a Markov Decision Process (MDP), the scheme jointly optimizes task delay, energy consumption, load balancing, and privacy entropy to achieve better quality of service. Additionally, a dynamic adaptive multi-objective deep reinforcement learning algorithm is proposed. Each Double Deep Q-Network (DDQN) agent obtains rewards for different objectives based on distinct reward functions and dynamically updates the objective weights by learning the value changes between objectives using Radial Basis Function Networks (RBFNs), thereby efficiently approximating Pareto-optimal decisions for multiple objectives. Extensive experiments demonstrate that the proposed algorithm can better coordinate the three-tier computing resources of cloud, edge, and vehicles. Compared to existing algorithms, the proposed method reduces task delay and energy consumption by 10.64% and 5.1%, respectively.
Keywords: deep reinforcement learning; Internet of Vehicles; multi-objective optimization; cloud-edge computing; computation offloading; service caching
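The abstract above names per-objective DDQN agents whose objective weights adapt over time. As a rough illustration only (not the paper's implementation; the reward values and weights below are hypothetical), a Double DQN target decouples action selection from evaluation, and per-objective rewards can be scalarized with normalized weights:

```python
import numpy as np

def ddqn_target(reward, next_q_online, next_q_target, gamma=0.99, done=False):
    """Double DQN target: the online net selects the next action,
    the target net evaluates it (reduces Q-value overestimation)."""
    if done:
        return reward
    a_star = int(np.argmax(next_q_online))                 # selection (online net)
    return reward + gamma * float(next_q_target[a_star])   # evaluation (target net)

def scalarize(objective_rewards, weights):
    """Collapse per-objective rewards (e.g. delay, energy, load, privacy)
    into one scalar; weights are normalized onto the simplex."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return float(np.dot(w, objective_rewards))

# Hypothetical per-objective rewards (negative = cost-like) and weights.
r = scalarize([-0.8, -0.3, 0.5, 0.9], [0.4, 0.3, 0.2, 0.1])
```

In the paper the weights are updated by RBF networks rather than fixed; the sketch only shows where such weights would enter the update.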
Robust Voltage Control for Active Distribution Networks via Safe Deep Reinforcement Learning Against State Perturbations
18
Authors: Meng Tian, Xiaoxu Li, Ziyang Zhu, Zhengcheng Dong, Li Gong, Jingang Lai. Protection and Control of Modern Power Systems, 2026, No. 1, pp. 192-207 (16 pages)
With the prevalence of renewable distributed energy resources (DERs) such as photovoltaics (PVs), modern active distribution networks (ADNs) suffer from voltage deviation and power quality issues. However, traditional voltage control methods often face a trade-off between efficiency and effectiveness, and rarely ensure robust voltage safety under typical state perturbations in practical distribution grids. In this paper, a robust model-free voltage regulation approach is proposed which takes security and robustness into account simultaneously. In this context, the voltage control problem is formulated as a constrained Markov decision process (CMDP). A safety-augmented multi-agent deep deterministic policy gradient (MADDPG) algorithm is then trained to enable real-time collaborative optimization of ADNs, aiming to maintain nodal voltages within safe operational limits while minimizing total line losses. Moreover, a robust regulation loss is introduced to ensure reliable performance under various state perturbations in practical voltage control. The proposed regulation algorithm effectively balances efficiency, safety, and robustness, and also demonstrates potential for generalizing these characteristics to other applications. Numerical studies validate the robustness of the proposed method under varying state perturbations on IEEE test cases, as well as its superior integrated control performance compared to other benchmarks.
Keywords: active distribution network; robust voltage control; state perturbation; model-free; safe deep reinforcement learning
Beyond Wi-Fi 7:Enhanced Decentralized Wireless Local Area Networks with Federated Reinforcement Learning
19
Authors: Rashid Ali, Alaa Omran Almagrabi. Computers, Materials & Continua, 2026, No. 3, pp. 391-409 (19 pages)
Wi-Fi technology has evolved significantly since its introduction in 1997, advancing to Wi-Fi 6 as the latest standard, with Wi-Fi 7 currently under development. Despite these advancements, integrating machine learning into Wi-Fi networks remains challenging, especially in decentralized environments with multiple access points (mAPs). This paper is a short review that summarizes the potential applications of federated reinforcement learning (FRL) across eight key areas of Wi-Fi functionality, including channel access, link adaptation, beamforming, multi-user transmissions, channel bonding, multi-link operation, spatial reuse, and multi-basic service set (multi-BSS) coordination. FRL is highlighted as a promising framework for enabling decentralized training and decision-making while preserving data privacy. To illustrate its role in practice, we present a case study on link activation in a multi-link operation (MLO) environment with multiple APs. Through theoretical discussion and simulation results, the study demonstrates how FRL can improve performance and reliability, paving the way for more adaptive and collaborative Wi-Fi networks in the era of Wi-Fi 7 and beyond.
Keywords: artificial intelligence; reinforcement learning; channel selection; wireless local area networks; 802.11ax; 802.11be; Wi-Fi
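Federated RL as surveyed above aggregates locally trained models so that raw observations never leave each access point. A minimal FedAvg-style aggregation sketch (the per-AP parameters and data sizes are hypothetical, and real deployments would average neural-network weights, not flat vectors):

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """FedAvg: average per-AP model parameters, weighted by local
    data size. Only parameters are shared; channel observations
    stay on each access point."""
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()                      # aggregation weights
    stacked = np.stack([np.asarray(p, dtype=float) for p in client_params])
    return np.tensordot(coeffs, stacked, axes=1)      # weighted average

# Two APs with equal amounts of local data contribute equally.
global_params = fedavg([[1.0, 1.0], [3.0, 3.0]], [50, 50])
```

The aggregation server broadcasts `global_params` back to the APs, which then continue local RL training from the shared starting point.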
Safety-Aware Reinforcement Learning for Self-Healing Intrusion Detection in 5G-Enabled IoT Networks
20
Authors: Wajdan Al Malwi, Fatima Asiri, Nazik Alturki, Noha Alnazzawi, Dimitrios Kasimatis, Nikolaos Pitropakis. Computers, Materials & Continua, 2026, No. 5, pp. 2020-2042 (23 pages)
The expansion of 5G-enabled Internet of Things (IoT) networks, while enabling transformative applications, significantly increases the attack surface and necessitates security solutions that extend beyond traditional intrusion detection. Existing intrusion detection systems (IDSs) mainly operate in an open-loop manner, excelling at classification but lacking the ability for autonomous, safety-aware remediation. This gap is particularly critical in 5G environments, where manual intervention is too slow and naive automation can lead to severe service disruptions. To address this issue, we propose a novel Self-Healing Intrusion Detection System (SH-IDS) framework that develops a closed-loop cyber defense mechanism. The main technical contribution is the integration of a deep neural network-based threat detector, which offers uncertainty-quantified predictions, with a safety-aware reinforcement learning (RL) engine formulated as a Constrained Markov Decision Process (CMDP). The CMDP explicitly models operational safety as cost constraints, and a new runtime safety shield actively adjusts any unsafe action proposed by the RL agent to the nearest safe alternative, ensuring operational integrity. Additionally, we introduce a composite utility function for the comprehensive evaluation of the system. Empirical analysis on the 5G-NIDD dataset demonstrates the superior performance of our framework: the detector achieves 98.26% accuracy, while the safe RL agent learns effective mitigation policies. Importantly, the safety shield blocked up to 70 unsafe actions under strict constraints, and analysis of the learned Q-tables confirms that the agent internalizes safety, avoiding overly disruptive actions, such as isolating nodes for minor threats. The system also maintains high efficiency with a compact model size of 121.7 KB and sub-millisecond latency, confirming its practical deployability for real-time 5G-IoT security.
Keywords: cybersecurity; Internet of Things; intrusion detection; 5G/6G security; reinforcement learning
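The runtime safety shield described above substitutes an unsafe proposed mitigation with the nearest safe alternative. A toy sketch under an assumed action encoding (indices ordered by disruptiveness; this encoding and the safe set are hypothetical, not the paper's):

```python
def safety_shield(proposed, safe_set):
    """Runtime shield: pass safe actions through unchanged; otherwise
    substitute the nearest safe alternative by action index."""
    if proposed in safe_set:
        return proposed
    return min(safe_set, key=lambda a: abs(a - proposed))

# Assumed encoding: 0 = log only, 1 = rate-limit, 2 = quarantine flow,
# 3 = isolate node. For a minor threat, node isolation is deemed unsafe.
SAFE_FOR_MINOR_THREATS = {0, 1}
action = safety_shield(3, SAFE_FOR_MINOR_THREATS)  # shielded to rate-limit
```

This keeps the RL policy's intent (the most aggressive action it is allowed) while guaranteeing the constraint is never violated at execution time.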