Journal Articles
8,947 articles found
1. Adaptive layer splitting for wireless large language model inference in edge computing: a model-based reinforcement learning approach
Authors: Yuxuan CHEN, Rongpeng LI, Xiaoxue YU, Zhifeng ZHAO, Honggang ZHANG. Frontiers of Information Technology & Electronic Engineering, 2025, No. 2, pp. 278-292 (15 pages)
Optimizing the deployment of large language models (LLMs) in edge computing environments is critical for enhancing privacy and computational efficiency. In the path toward efficient wireless LLM inference in edge computing, this study comprehensively analyzes the impact of different splitting points in mainstream open-source LLMs. Accordingly, this study introduces a framework taking inspiration from model-based reinforcement learning to determine the optimal splitting point across the edge and user equipment. By incorporating a reward surrogate model, our approach significantly reduces the computational cost of frequent performance evaluations. Extensive simulations demonstrate that this method effectively balances inference performance and computational load under varying network conditions, providing a robust solution for LLM deployment in decentralized settings.
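The reward-surrogate idea in this abstract can be illustrated with a toy sketch: instead of re-running an expensive evaluation for every candidate split point, fit a cheap surrogate from a few noisy measurements and query it. All quantities below (the reward shape, the "true" optimum at `k = 2 * SNR`) are hypothetical stand-ins, not the paper's actual model.

```python
import random

random.seed(0)
N_LAYERS, SNR = 16, 4

def measure_reward(k, snr):
    # Stand-in for an expensive real evaluation (e.g., running split
    # inference end-to-end and measuring latency/quality). The "true"
    # optimum is k = 2 * snr purely by construction, for illustration.
    return -abs(k - 2 * snr) + random.gauss(0, 0.1)

# Build a cheap reward-surrogate table from a handful of noisy
# measurements per split point, then query the table instead of
# repeating the expensive evaluation.
surrogate = {
    k: sum(measure_reward(k, SNR) for _ in range(20)) / 20
    for k in range(1, N_LAYERS + 1)
}
best_k = max(surrogate, key=surrogate.get)
print(best_k)  # should recover a split near 2 * SNR = 8
```

The point of the sketch is the cost structure: 20 measurements per split are paid once, after which every lookup of the surrogate is free.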
Keywords: large language models (LLMs); edge computing; model-based reinforcement learning (MBRL); split inference; Transformer
2. Offline model-based reinforcement learning with causal structured world models
Authors: Zhengmao ZHU, Honglong TIAN, Xionghui CHEN, Kun ZHANG, Yang YU. Frontiers of Computer Science, 2025, No. 4, pp. 77-90 (14 pages)
Model-based methods have recently been shown promising for offline reinforcement learning (RL), which aims at learning good policies from historical data without interacting with the environment. Previous model-based offline RL methods employ a straightforward prediction method that maps the states and actions directly to the next-step states. However, such a prediction method tends to capture spurious relations caused by the sampling policy preference behind the offline data. It is sensible that the environment model should focus on causal influences, which can facilitate learning an effective policy that can generalize well to unseen states. In this paper, we first provide theoretical results that causal environment models can outperform plain environment models in offline RL by incorporating the causal structure into the generalization error bound. We also propose a practical algorithm, oFfline mOdel-based reinforcement learning with CaUsal Structured World Models (FOCUS), to illustrate the feasibility of learning and leveraging causal structure in offline RL. Experimental results on two benchmarks show that FOCUS reconstructs the underlying causal structure accurately and robustly, and, as a result, outperforms both model-based offline RL algorithms and causal model-based offline RL algorithms.
Keywords: reinforcement learning; offline reinforcement learning; model-based reinforcement learning; causal discovery
3. Efficient and Stable Learning for Distribution Network Operation: A Model-based Reinforcement Learning Approach (cited by 1)
Authors: Dong Yan, Zhan Shi, Xinying Wang, Yiying Gao, Tianjiao Pu, Jiye Wang. CSEE Journal of Power and Energy Systems, 2025, No. 3, pp. 1080-1092 (13 pages)
This paper discusses the application of deep reinforcement learning (DRL) to the economic operation of power distribution networks, a complex system involving numerous flexible resources. Despite the improved control flexibility, traditional prediction-plus-optimization models struggle to adapt to rapidly shifting demands. Modern artificial intelligence (AI) methods, particularly DRL methods, promise faster decision-making but face challenges, including inefficient training and real-world application. This study introduces a reward evaluation system to assess the effectiveness of various strategies and proposes an enhanced algorithm based on the model-based DRL approach. Incorporating a state transition model, the proposed algorithm augments data and enhances dynamic deduction, improving training efficiency. The effectiveness is demonstrated in various operational scenarios, showing notable enhancements in rationality and transfer generalization.
Keywords: distribution networks; economic operation; reinforcement learning; reward shaping; transition model
4. A model-based reinforcement learning framework for building heating management with branched rollout strategy and time-series prediction model
Authors: Kaichen Qu, Hong Zhang, Xin Zhou, Martina Ferrando, Francesco Causone. Building Simulation, 2025, No. 7, pp. 1697-1716 (20 pages)
Reinforcement learning (RL) has emerged as a promising approach for building energy management (BEM). However, most existing research focuses on model-free reinforcement learning (MFRL) approaches, which can encounter the learning challenge for heating, ventilation and air conditioning (HVAC) control due to extensive trial-and-error explorations and lengthy training times. To address this challenge, we propose a model-based reinforcement learning (MBRL) framework that incorporates a virtual environment to augment the agent's exploration. By leveraging the branched rollout strategy to generate short rollout predictions branched from the experience trajectory, the MBRL method mitigates compounding errors introduced by the time-series prediction model, enabling robust and efficient policy updates. Evaluated in an EnergyPlus testbed with real-world data verification, the proposed method demonstrates significant advantages: (1) RL-based controllers outperform the rule-based control (RBC) baseline after one training episode, (2) MBRL reduces training time by over 50% compared to MFRL while maintaining comparable control performance, and (3) an equal mix of real and synthetic data for MBRL training achieves an optimal trade-off between efficiency and control outcomes. This study contributes an efficient model-based training method for RL development in HVAC control, offering insights into advanced control strategies for BEM applications.
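The branched rollout strategy described above (popularized by MBPO-style model-based RL) can be sketched in a few lines: branch short model rollouts from states sampled out of the real experience trajectory, so that model error compounds over only a few steps. The one-step model below is a toy linear system standing in for the paper's time-series predictor; everything in it is a hypothetical stand-in.

```python
import random

random.seed(1)

def learned_model(state, action):
    # Hypothetical one-step dynamics/reward model learned from data;
    # a toy stable linear system stands in for the real predictor.
    next_state = 0.9 * state + action
    reward = -abs(next_state)  # prefer keeping the state near zero
    return next_state, reward

def branched_rollouts(real_states, policy, horizon=3):
    # Branch short rollouts from real states; the short horizon is what
    # bounds the compounding model error the abstract refers to.
    synthetic = []
    for s in real_states:
        for _ in range(horizon):
            a = policy(s)
            s2, r = learned_model(s, a)
            synthetic.append((s, a, r, s2))
            s = s2
    return synthetic

data = branched_rollouts([1.0, -0.5, 2.0], policy=lambda s: -0.5 * s)
print(len(data))  # 3 start states x horizon 3 = 9 synthetic transitions
```

The synthetic transitions would then be mixed with real ones for policy updates, which is the "equal mix of real and synthetic data" trade-off the abstract evaluates.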
Keywords: building energy management; reinforcement learning; model-based learning; recursive prediction; time-series prediction model
5. An Improved Reinforcement Learning-Based 6G UAV Communication for Smart Cities
Authors: Vi Hoai Nam, Chu Thi Minh Hue, Dang Van Anh. Computers, Materials & Continua, 2026, No. 1, pp. 2030-2044 (15 pages)
Unmanned Aerial Vehicles (UAVs) have become integral components in smart city infrastructures, supporting applications such as emergency response, surveillance, and data collection. However, the high mobility and dynamic topology of Flying Ad Hoc Networks (FANETs) present significant challenges for maintaining reliable, low-latency communication. Conventional geographic routing protocols often struggle in situations where link quality varies and mobility patterns are unpredictable. To overcome these limitations, this paper proposes an improved routing protocol based on reinforcement learning. This new approach integrates Q-learning with mechanisms that are both link-aware and mobility-aware. The proposed method optimizes the selection of relay nodes by using an adaptive reward function that takes into account energy consumption, delay, and link quality. Additionally, a Kalman filter is integrated to predict UAV mobility, improving the stability of communication links under dynamic network conditions. Simulation experiments were conducted using realistic scenarios, varying the number of UAVs to assess scalability. An analysis was conducted on key performance metrics, including the packet delivery ratio, end-to-end delay, and total energy consumption. The results demonstrate that the proposed approach significantly improves the packet delivery ratio by 12%–15% and reduces delay by up to 25.5% when compared to conventional GEO and QGEO protocols. However, this improvement comes at the cost of higher energy consumption due to additional computations and control overhead. Despite this trade-off, the proposed solution ensures reliable and efficient communication, making it well-suited for large-scale UAV networks operating in complex urban environments.
Keywords: UAV; FANET; smart cities; reinforcement learning; Q-learning
6. A Deep Reinforcement Learning-Based Partitioning Method for Power System Parallel Restoration
Authors: Changcheng Li, Weimeng Chang, Dahai Zhang, Jinghan He. Energy Engineering, 2026, No. 1, pp. 243-264 (22 pages)
Effective partitioning is crucial for enabling parallel restoration of power systems after blackouts. This paper proposes a novel partitioning method based on deep reinforcement learning. First, the partitioning decision process is formulated as a Markov decision process (MDP) model to maximize the modularity. Corresponding key partitioning constraints on parallel restoration are considered. Second, based on the partitioning objective and constraints, the reward function of the partitioning MDP model is set by adopting a relative deviation normalization scheme to reduce mutual interference between the reward and penalty in the reward function. The soft bonus scaling mechanism is introduced to mitigate overestimation caused by abrupt jumps in the reward. Then, the deep Q network method is applied to solve the partitioning MDP model and generate partitioning schemes. Two experience replay buffers are employed to speed up the training process of the method. Finally, case studies on the IEEE 39-bus test system demonstrate that the proposed method can generate a high-modularity partitioning result that meets all key partitioning constraints, thereby improving the parallelism and reliability of the restoration process. Moreover, simulation results demonstrate that an appropriate discount factor is crucial for ensuring both the convergence speed and the stability of the partitioning training.
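The dual-buffer replay mentioned above can be sketched generically: keep ordinary transitions and "key" transitions (the abstract does not specify the split criterion, so the `is_key` rule below is hypothetical) in separate buffers and mix them at a fixed ratio per minibatch, so rare important experiences are replayed more often.

```python
import random

random.seed(2)

# Two experience replay buffers: one for ordinary transitions and one
# for key transitions (criterion hypothetical here), mixed per batch.
ordinary, priority = [], []

def store(transition, is_key):
    (priority if is_key else ordinary).append(transition)

def sample_batch(batch_size=8, key_fraction=0.25):
    # Draw a fixed fraction of each minibatch from the key buffer so
    # rare important experiences are not drowned out by ordinary ones.
    n_key = min(int(batch_size * key_fraction), len(priority))
    batch = random.sample(priority, n_key)
    batch += random.sample(ordinary, batch_size - n_key)
    return batch

for t in range(100):
    store(("transition", t), is_key=(t % 10 == 0))  # 10 key, 90 ordinary
print(len(sample_batch()))  # 8
```

With a 25% key fraction, each batch of 8 contains 2 key transitions even though key experiences make up only 10% of everything stored.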
Keywords: partitioning method; parallel restoration; deep reinforcement learning; experience replay buffer; partitioning modularity
7. Evaluation of Reinforcement Learning-Based Adaptive Modulation in Shallow Sea Acoustic Communication
Authors: Yifan Qiu, Xiaoyu Yang, Feng Tong, Dongsheng Chen. Journal of Harbin Engineering University (English Edition), 2026, No. 1, pp. 292-299 (8 pages)
While reinforcement learning-based underwater acoustic adaptive modulation shows promise for enabling environment-adaptive communication, as supported by extensive simulation-based research, its practical performance remains underexplored in field investigations. To evaluate the practical applicability of this emerging technique in adverse shallow sea channels, a field experiment was conducted using three communication modes: orthogonal frequency division multiplexing (OFDM), M-ary frequency-shift keying (MFSK), and direct sequence spread spectrum (DSSS) for reinforcement learning-driven adaptive modulation. Specifically, a Q-learning method is used to select the optimal modulation mode according to the channel quality quantified by signal-to-noise ratio, multipath spread length, and Doppler frequency offset. Experimental results demonstrate that the reinforcement learning-based adaptive modulation scheme outperformed fixed threshold detection in terms of total throughput and average bit error rate, surpassing conventional adaptive modulation strategies.
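The mode-selection loop described above amounts to tabular Q-learning over quantized channel states. A minimal sketch follows; the quantization into three states and the throughput numbers are invented for illustration (the paper quantizes channel quality from SNR, multipath spread, and Doppler offset).

```python
import random

random.seed(3)
MODES = ["OFDM", "MFSK", "DSSS"]
STATES = ["good", "medium", "poor"]  # quantized channel quality (illustrative)
Q = {(s, m): 0.0 for s in STATES for m in MODES}

def throughput(state, mode):
    # Hypothetical channel feedback: high-rate OFDM wins on good channels,
    # robust DSSS wins on poor ones (numbers are purely illustrative).
    table = {"good":   {"OFDM": 3.0, "MFSK": 1.5, "DSSS": 0.8},
             "medium": {"OFDM": 1.0, "MFSK": 1.6, "DSSS": 0.9},
             "poor":   {"OFDM": 0.2, "MFSK": 0.6, "DSSS": 0.8}}
    return table[state][mode] + random.gauss(0, 0.05)

alpha, eps = 0.2, 0.1
for _ in range(2000):
    s = random.choice(STATES)
    # epsilon-greedy mode selection
    if random.random() < eps:
        m = random.choice(MODES)
    else:
        m = max(MODES, key=lambda x: Q[(s, x)])
    r = throughput(s, m)
    Q[(s, m)] += alpha * (r - Q[(s, m)])  # incremental Q update

best = {s: max(MODES, key=lambda x: Q[(s, x)]) for s in STATES}
print(best)  # learned mode per channel state
```

Since each transmission's reward depends only on the current channel state, the selection behaves like a contextual bandit and no bootstrapped next-state term is needed in the update.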
Keywords: adaptive modulation; shallow sea; underwater acoustic modulation; reinforcement learning
8. A Multi-Objective Deep Reinforcement Learning Algorithm for Computation Offloading in Internet of Vehicles
Authors: Junjun Ren, Guoqiang Chen, Zheng-Yi Chai, Dong Yuan. Computers, Materials & Continua, 2026, No. 1, pp. 2111-2136 (26 pages)
Vehicle Edge Computing (VEC) and Cloud Computing (CC) significantly enhance the processing efficiency of delay-sensitive and computation-intensive applications by offloading compute-intensive tasks from resource-constrained onboard devices to nearby Roadside Units (RSUs), thereby achieving lower delay and energy consumption. However, due to the limited storage capacity and energy budget of RSUs, it is challenging to meet the demands of the highly dynamic Internet of Vehicles (IoV) environment. Therefore, determining reasonable service caching and computation offloading strategies is crucial. To address this, this paper proposes a joint service caching scheme for cloud-edge collaborative IoV computation offloading. By modeling the dynamic optimization problem using Markov Decision Processes (MDP), the scheme jointly optimizes task delay, energy consumption, load balancing, and privacy entropy to achieve better quality of service. Additionally, a dynamic adaptive multi-objective deep reinforcement learning algorithm is proposed. Each Double Deep Q-Network (DDQN) agent obtains rewards for different objectives based on distinct reward functions and dynamically updates the objective weights by learning the value changes between objectives using Radial Basis Function Networks (RBFN), thereby efficiently approximating the Pareto-optimal decisions for multiple objectives. Extensive experiments demonstrate that the proposed algorithm can better coordinate the three-tier computing resources of cloud, edge, and vehicles. Compared to existing algorithms, the proposed method reduces task delay and energy consumption by 10.64% and 5.1%, respectively.
Keywords: deep reinforcement learning; Internet of Vehicles; multi-objective optimization; cloud-edge computing; computation offloading; service caching
9. Energy Optimization for Autonomous Mobile Robot Path Planning Based on Deep Reinforcement Learning
Authors: Longfei Gao, Weidong Wang, Dieyun Ke. Computers, Materials & Continua, 2026, No. 1, pp. 984-998 (15 pages)
At present, energy consumption is one of the main bottlenecks in autonomous mobile robot development. To address the challenge of high energy consumption in path planning for autonomous mobile robots navigating unknown and complex environments, this paper proposes an Attention-Enhanced Dueling Deep Q-Network (AD-Dueling DQN), which integrates a multi-head attention mechanism and a prioritized experience replay strategy into a Dueling-DQN reinforcement learning framework. A multi-objective reward function, centered on energy efficiency, is designed to comprehensively consider path length, terrain slope, motion smoothness, and obstacle avoidance, enabling optimal low-energy trajectory generation in 3D space from the source. The incorporation of a multi-head attention mechanism allows the model to dynamically focus on energy-critical state features, such as slope gradients and obstacle density, thereby significantly improving its ability to recognize and avoid energy-intensive paths. Additionally, the prioritized experience replay mechanism accelerates learning from key decision-making experiences, suppressing inefficient exploration and guiding the policy toward low-energy solutions more rapidly. The effectiveness of the proposed path planning algorithm is validated through simulation experiments conducted in multiple off-road scenarios. Results demonstrate that AD-Dueling DQN consistently achieves the lowest average energy consumption across all tested environments. Moreover, the proposed method exhibits faster convergence and greater training stability compared to baseline algorithms, highlighting its global optimization capability under energy-aware objectives in complex terrains. This study offers an efficient and scalable intelligent control strategy for the development of energy-conscious autonomous navigation systems.
Keywords: autonomous mobile robot; deep reinforcement learning; energy optimization; multi-attention mechanism; prioritized experience replay; dueling deep Q-network
10. Model gradient: unified model and policy learning in model-based reinforcement learning
Authors: Chengxing JIA, Fuxiang ZHANG, Tian XU, Jing-Cheng PANG, Zongzhang ZHANG, Yang YU. Frontiers of Computer Science (SCIE, EI, CSCD), 2024, No. 4, pp. 117-128 (12 pages)
Model-based reinforcement learning is a promising direction to improve the sample efficiency of reinforcement learning by learning a model of the environment. Previous model learning methods aim at fitting the transition data, and commonly employ a supervised learning approach to minimize the distance between the predicted state and the real state. The supervised model learning methods, however, diverge from the ultimate goal of model learning, i.e., optimizing the learned-in-the-model policy. In this work, we investigate how model learning and policy learning can share the same objective of maximizing the expected return in the real environment. We find that model learning towards this objective can result in a target of enhancing the similarity between the gradient on generated data and the gradient on the real data. We thus derive the gradient of the model from this target and propose the Model Gradient algorithm (MG) to integrate this novel model learning approach with policy-gradient-based policy optimization. We conduct experiments on multiple locomotion control tasks and find that MG can not only achieve high sample efficiency but also lead to better convergence performance compared to traditional model-based reinforcement learning approaches.
Keywords: reinforcement learning; model-based reinforcement learning; Markov decision process
11. Model-based reinforcement learning for router port queue configurations
Authors: Ajay Kattepur, Sushanth David, Swarup Kumar Mohalik. Intelligent and Converged Networks, 2021, No. 3, pp. 177-197 (21 pages)
Fifth-generation (5G) systems have brought about new challenges toward ensuring Quality of Service (QoS) in differentiated services. This includes low-latency applications, scalable machine-to-machine communication, and enhanced mobile broadband connectivity. In order to satisfy these requirements, the concept of network slicing has been introduced to generate slices of the network with specific characteristics. In order to meet the requirements of network slices, routers and switches must be effectively configured to provide priority queue provisioning, resource contention management, and adaptation. Configuring routers from vendors such as Ericsson, Cisco, and Juniper has traditionally been an expert-driven process with static rules for individual flows, which are prone to suboptimal configurations under varying traffic conditions. In this paper, we model the internal ingress and egress queues within routers via a queuing model. The effects of changing queue configuration with respect to priority, weights, flow limits, and packet drops are studied in detail. This is used to train a model-based Reinforcement Learning (RL) algorithm to generate optimal policies for flow prioritization, fairness, and congestion control. The efficacy of the RL policy output is demonstrated over scenarios involving ingress queue traffic policing, egress queue traffic shaping, and one-hop router-coordinated traffic conditioning. This is evaluated over a real application use case, wherein a statically configured router proved suboptimal toward desired QoS requirements. Such automated configuration of routers and switches will be critical for multiple 5G deployments with varying flow requirements and traffic patterns.
Keywords: router port queues; model-based reinforcement learning (RL); network slicing
12. Rule-Guidance Reinforcement Learning for Lane Change Decision-making: A Risk Assessment Approach (cited by 1)
Authors: Lu Xiong, Zhuoren Li, Danyang Zhong, Puhang Xu, Chen Tang. Chinese Journal of Mechanical Engineering, 2025, No. 2, pp. 344-359 (16 pages)
To solve the problems of poor security guarantee and insufficient training efficiency in conventional reinforcement learning methods for decision-making, this study proposes a hybrid framework that combines deep reinforcement learning with rule-based decision-making methods. A risk assessment model for lane-change maneuvers considering uncertain predictions of surrounding vehicles is established as a safety filter to improve learning efficiency while correcting dangerous actions for safety enhancement. On this basis, a Risk-fused DDQN is constructed utilizing the model-based risk assessment and supervision mechanism. The proposed reinforcement learning algorithm sets up a separate experience buffer for dangerous trials and punishes such actions, which is shown to improve the sampling efficiency and training outcomes. Compared with conventional DDQN methods, the proposed algorithm improves the convergence value of cumulated reward by 7.6% and 2.2% in the two constructed scenarios in the simulation study and reduces the number of training episodes by 52.2% and 66.8%, respectively. The success rate of lane change is improved by 57.3% while the time headway is increased by at least 16.5% in real vehicle tests, which confirms the higher training efficiency, scenario adaptability, and security of the proposed Risk-fused DDQN.
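The safety-filter-plus-danger-buffer idea can be sketched abstractly: a model-based risk check vetoes a risky lane change, the vetoed transition is stored in a separate buffer with a penalty, and a safe fallback action is executed instead. The toy time-gap risk measure and all thresholds below are hypothetical, not the paper's actual risk assessment model.

```python
danger_buffer, normal_buffer = [], []

def risk(gap_m, closing_speed_mps):
    # Toy risk proxy: closing speed divided by the gap to the target-lane
    # vehicle. Small gap or fast closing => high risk. Illustrative only.
    if closing_speed_mps <= 0:
        return 0.0
    return closing_speed_mps / max(gap_m, 1e-6)

def filtered_step(state, action, risk_threshold=0.5, penalty=-10.0):
    # Safety filter: veto a dangerous lane change, log the dangerous trial
    # to its own buffer with a penalty, and substitute a safe action.
    gap, closing = state
    if action == "lane_change" and risk(gap, closing) > risk_threshold:
        danger_buffer.append((state, action, penalty))
        return "keep_lane"   # corrected safe action actually executed
    normal_buffer.append((state, action, 0.0))
    return action

a1 = filtered_step((5.0, 8.0), "lane_change")   # risk 8/5 = 1.6 -> vetoed
a2 = filtered_step((50.0, 8.0), "lane_change")  # risk 8/50 = 0.16 -> allowed
print(a1, a2)
```

Training on the penalized danger buffer alongside ordinary experience is what lets the agent learn to avoid the vetoed actions rather than merely being blocked from them.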
Keywords: autonomous driving; reinforcement learning; decision-making; risk assessment; safety filter
13. A Survey of Cooperative Multi-agent Reinforcement Learning for Multi-task Scenarios (cited by 1)
Authors: Jiajun CHAI, Zijie ZHAO, Yuanheng ZHU, Dongbin ZHAO. Artificial Intelligence Science and Engineering, 2025, No. 2, pp. 98-121 (24 pages)
Cooperative multi-agent reinforcement learning (MARL) is a key technology for enabling cooperation in complex multi-agent systems. It has achieved remarkable progress in areas such as gaming, autonomous driving, and multi-robot control. Empowering cooperative MARL with multi-task decision-making capabilities is expected to further broaden its application scope. In multi-task scenarios, cooperative MARL algorithms need to address three types of multi-task problems: reward-related multi-task problems, arising from different reward functions; multi-domain multi-task problems, caused by differences in state and action spaces and state transition functions; and scalability-related multi-task problems, resulting from the dynamic variation in the number of agents. Most existing studies focus on scalability-related multi-task problems. However, with the increasing integration between large language models (LLMs) and multi-agent systems, a growing number of LLM-based multi-agent systems have emerged, enabling more complex multi-task cooperation. This paper provides a comprehensive review of the latest advances in this field. By combining multi-task reinforcement learning with cooperative MARL, we categorize and analyze the three major types of multi-task problems under multi-agent settings, offering more fine-grained classifications and summarizing key insights for each. In addition, we summarize commonly used benchmarks and discuss future directions of research in this area, which hold promise for further enhancing the multi-task cooperation capabilities of multi-agent systems and expanding their practical applications in the real world.
Keywords: multi-task; multi-agent reinforcement learning; large language models
14. Graph-based multi-agent reinforcement learning for collaborative search and tracking of multiple UAVs (cited by 2)
Authors: Bocheng ZHAO, Mingying HUO, Zheng LI, Wenyu FENG, Ze YU, Naiming QI, Shaohai WANG. Chinese Journal of Aeronautics, 2025, No. 3, pp. 109-123 (15 pages)
This paper investigates the challenges associated with Unmanned Aerial Vehicle (UAV) collaborative search and target tracking in dynamic and unknown environments characterized by limited field of view. The primary objective is to explore the unknown environments to locate and track targets effectively. To address this problem, we propose a novel Multi-Agent Reinforcement Learning (MARL) method based on Graph Neural Network (GNN). Firstly, a method is introduced for encoding continuous-space multi-UAV problem data into spatial graphs which establish essential relationships among agents, obstacles, and targets. Secondly, a Graph AttenTion network (GAT) model is presented, which focuses exclusively on adjacent nodes, learns attention weights adaptively, and allows agents to better process information in dynamic environments. Reward functions are specifically designed to tackle exploration challenges in environments with sparse rewards. By introducing a framework that integrates centralized training and distributed execution, the advancement of models is facilitated. Simulation results show that the proposed method outperforms the existing MARL method in search rate and tracking performance with fewer collisions. The experiments show that the proposed method can be extended to applications with a larger number of agents, which provides a potential solution to the challenging problem of multi-UAV autonomous tracking in dynamic unknown environments.
Keywords: unmanned aerial vehicle (UAV); multi-agent reinforcement learning (MARL); graph attention network (GAT); tracking; dynamic and unknown environment
15. Deep reinforcement learning based integrated evasion and impact hierarchical intelligent policy of exo-atmospheric vehicles (cited by 1)
Authors: Leliang REN, Weilin GUO, Yong XIAN, Zhenyu LIU, Daqiao ZHANG, Shaopeng LI. Chinese Journal of Aeronautics, 2025, No. 1, pp. 409-426 (18 pages)
Exo-atmospheric vehicles are constrained by limited maneuverability, which leads to a contradiction between evasive maneuver and precision strike. To address the problem of Integrated Evasion and Impact (IEI) decision-making under multi-constraint conditions, a hierarchical intelligent decision-making method based on Deep Reinforcement Learning (DRL) was proposed. First, an intelligent decision-making framework of "DRL evasion decision" plus "impact prediction guidance decision" was established: it takes the impact point deviation correction ability as the constraint and the maximum miss distance as the objective, and effectively solves the problem of poor decision-making performance caused by the large IEI decision space. Second, to solve the sparse reward problem faced by evasion decision-making, a hierarchical decision-making method consisting of a maneuver timing decision and a maneuver duration decision was proposed, and the corresponding Markov Decision Process (MDP) was designed. A detailed simulation experiment was designed to analyze the advantages and computational complexity of the proposed method. Simulation results show that the proposed model has good performance and low computational resource requirements. The minimum miss distance is 21.3 m under the condition of guaranteeing the impact point accuracy, and the single decision-making time is 4.086 ms on an STM32F407 single-chip microcomputer, which has engineering application value.
Keywords: exo-atmospheric vehicle; integrated evasion and impact; deep reinforcement learning; hierarchical intelligent policy; single-chip microcomputer; miss distance
16. Multi-QoS routing algorithm based on reinforcement learning for LEO satellite networks (cited by 1)
Authors: ZHANG Yifan, DONG Tao, LIU Zhihui, JIN Shichao. Journal of Systems Engineering and Electronics, 2025, No. 1, pp. 37-47 (11 pages)
Low Earth orbit (LEO) satellite networks exhibit distinct characteristics, e.g., limited resources of individual satellite nodes and dynamic network topology, which have brought many challenges for routing algorithms. To satisfy the quality of service (QoS) requirements of various users, it is critical to research efficient routing strategies to fully utilize satellite resources. This paper proposes a multi-QoS information optimized routing algorithm based on reinforcement learning for LEO satellite networks, which guarantees that high-level assurance demand services are prioritized under limited satellite resources while considering the load balancing performance of the satellite networks for low-level assurance demand services, ensuring the full and effective utilization of satellite resources. An auxiliary path search algorithm is proposed to accelerate the convergence of the satellite routing algorithm. Simulation results show that the generated routing strategy can timely process and fully meet the QoS demands of high assurance services while effectively improving the load balancing performance of the link.
Keywords: low Earth orbit (LEO) satellite network; reinforcement learning; multi-quality of service (QoS); routing algorithm
17. Intelligent path planning for small modular reactors based on improved reinforcement learning
Authors: DONG Yun-Feng, ZHOU Wei-Zheng, WANG Zhe-Zheng, ZHANG Xiao. Journal of Sichuan University (Natural Science Edition) (PKU Core), 2025, No. 4, pp. 1006-1014 (9 pages)
Small modular reactors (SMRs) are at the research forefront of nuclear reactor technology. Nowadays, the advancement of intelligent control technologies paves a new way for the design and construction of unmanned SMRs. The autonomous control process of an SMR can be divided into three stages: state diagnosis, autonomous decision-making, and coordinated control. In this paper, the autonomous state recognition and task planning of unmanned SMRs are investigated. An operating condition recognition method based on the knowledge base of SMR operation is proposed using artificial neural network (ANN) technology, which constructs a basis for the state judgment of intelligent reactor control path planning. An improved reinforcement learning path planning algorithm is utilized to implement the path transfer decision-making. This algorithm performs condition transitions with minimal cost under specified modes. In summary, the full-range control path intelligent decision-planning technology of SMRs is realized, thus providing some theoretical basis for the design and construction of unmanned SMRs in the future.
关键词 Small modular reactor Operating condition recognition Path planning reinforcement learning
在线阅读 下载PDF
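The "condition transitions with minimal cost" idea in this abstract amounts to planning over a graph of operating modes. A hedged sketch, not the paper's method: the mode names, transition costs, and the use of plain value iteration (rather than the paper's improved RL algorithm) are all assumptions for illustration.

```python
# Hypothetical operating modes and transition costs
TRANSITIONS = {
    "cold_shutdown": {"hot_standby": 3.0},
    "hot_standby":   {"cold_shutdown": 1.0, "low_power": 2.0},
    "low_power":     {"hot_standby": 1.0, "full_power": 4.0},
    "full_power":    {"low_power": 1.0},
}

def plan(start, goal, gamma=1.0, sweeps=50):
    # Value iteration over the mode graph: V(s) = min_a [cost(s,a) + V(s')]
    v = {s: (0.0 if s == goal else float("inf")) for s in TRANSITIONS}
    for _ in range(sweeps):
        for s in TRANSITIONS:
            if s == goal:
                continue
            v[s] = min(c + gamma * v[s2] for s2, c in TRANSITIONS[s].items())
    # Greedy rollout: always take the transition with minimal cost-to-go
    path, s = [start], start
    while s != goal:
        s = min(TRANSITIONS[s], key=lambda s2: TRANSITIONS[s][s2] + v[s2])
        path.append(s)
    return path, v[start]

path, cost = plan("cold_shutdown", "full_power")
print(path, cost)  # cold_shutdown -> hot_standby -> low_power -> full_power, total cost 9.0
```

In a real reactor the transition costs would encode safety and thermal constraints per mode, which is where the recognition module's state judgment would feed in.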
Performance comparison of deep reinforcement robot-arm learning in sequential fabrication of rule-based building design form
18
作者 Abhishek Mehrotra Hwang Yi 《Frontiers of Architectural Research》 2025年第6期1654-1680,共27页
Deep reinforcement learning (DRL) remains underexplored within architectural robotics, particularly in relation to self-learning of architectural design principles and design-aware robotic fabrication. To address this gap, we applied established DRL methods to enable robot arms to autonomously learn design rules in a pilot block-wall assembly-design scenario. Recognizing the complexity inherent in such learning tasks, the problem was strategically decomposed into two sub-tasks: (i) target reaching (T1), modeled in a continuous action space, and (ii) sequential planning (T2), formulated in a discrete action space. For T1, we evaluated major DRL algorithms, namely Proximal Policy Optimization (PPO), Advantage Actor-Critic (A2C), Deep Deterministic Policy Gradient (DDPG), Twin Delayed DDPG (TD3), and Soft Actor-Critic (SAC); PPO, A2C, and Double Deep Q-Network (DDQN) were tested for T2. Performance was assessed in terms of training efficacy, reliability, and two novel metrics: a degree index and a variation index. Our results revealed that SAC performed best for T1, whereas DDQN excelled in T2. Notably, DDQN exhibited strong learning adaptability, yielding diverse final layouts in response to varying initial conditions. 展开更多
关键词 Robotic architecture Robot learning reinforcement learning Robotic construction Robot arm
原文传递
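The DDQN algorithm that excelled in the discrete sequential-planning task (T2) differs from vanilla DQN only in how the update target is formed: the online network selects the greedy next action and the target network evaluates it. A minimal sketch of that target computation with NumPy (the Q-values, rewards, and done flags below are made-up inputs, not data from the study):

```python
import numpy as np

def double_dqn_targets(q_online, q_target, rewards, dones, gamma=0.99):
    """Double DQN target: y = r + gamma * Q_target(s', argmax_a Q_online(s', a)).
    Decoupling action selection from evaluation reduces overestimation bias."""
    greedy = np.argmax(q_online, axis=1)                  # online net selects
    evaluated = q_target[np.arange(len(greedy)), greedy]  # target net evaluates
    return rewards + gamma * evaluated * (1.0 - dones)

# Two next-state transitions, three discrete actions (e.g., block placements)
q_online = np.array([[1.0, 3.0, 2.0],
                     [0.5, 0.2, 0.9]])
q_target = np.array([[0.8, 2.0, 5.0],
                     [1.0, 0.4, 0.6]])
rewards = np.array([1.0, 0.0])
dones   = np.array([0.0, 1.0])  # second transition ends the episode
print(double_dqn_targets(q_online, q_target, rewards, dones))  # [2.98 0.  ]
```

Note how the first target uses Q_target = 2.0 (the value the target net assigns to the online net's greedy action 1), not the target net's own maximum of 5.0; that gap is exactly the overestimation Double DQN avoids.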
Reinforcement Learning in Mechatronic Systems: A Case Study on DC Motor Control
19
作者 Alexander Nüßgen Alexander Lerch +5 位作者 René Degen Marcus Irmer Martin de Fries Fabian Richter Cecilia Boström Margot Ruschitzka 《Circuits and Systems》 2025年第1期1-24,共24页
The integration of artificial intelligence into the development and production of mechatronic products offers a substantial opportunity to enhance efficiency, adaptability, and system performance. This paper examines the use of reinforcement learning as a control strategy, with a particular focus on its deployment in pivotal stages of the product development lifecycle, specifically between system architecture and system integration and verification. A controller based on reinforcement learning was developed and evaluated against traditional proportional-integral (PI) controllers in dynamic and fault-prone environments. The results illustrate the superior adaptability, stability, and optimization potential of the reinforcement learning approach, particularly in addressing dynamic disturbances and ensuring robust performance. The study shows how reinforcement learning can facilitate the transition from conceptual design to implementation by automating optimization processes, enabling interface automation, and enhancing system-level testing. Based on these findings, the paper presents directions for future research, including the integration of domain-specific knowledge into the reinforcement learning process and its validation in real-world environments. The results underscore the potential of artificial-intelligence-driven methodologies to revolutionize the design and deployment of intelligent mechatronic systems. 展开更多
关键词 Artificial Intelligence in Product Development Mechatronic Systems reinforcement learning for Control System Integration and Verification Adaptive Optimization Processes Knowledge-Based Engineering
在线阅读 下载PDF
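The PI baseline this study compares against can be sketched in a few lines: a proportional-integral control law driving a first-order DC motor speed model, stepped with Euler integration. The motor parameters, gains, and setpoint below are illustrative assumptions, not values from the paper.

```python
# First-order DC motor speed model: d(omega)/dt = (K*u - omega) / tau,
# discretized with forward Euler and closed by a PI speed controller.
def simulate_pi(kp, ki, setpoint=100.0, tau=0.5, gain=2.0, dt=0.01, steps=2000):
    omega, integral = 0.0, 0.0
    for _ in range(steps):
        error = setpoint - omega
        integral += error * dt          # integral action removes steady-state error
        u = kp * error + ki * integral  # PI control law
        omega += dt * (gain * u - omega) / tau
    return omega

final = simulate_pi(kp=2.0, ki=1.0)
print(final)  # settles close to the 100 rad/s setpoint
```

An RL controller for the same plant would replace the fixed `kp`/`ki` law with a learned policy mapping (error, speed) to voltage, which is where the paper's adaptability claims under disturbances come in.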
A Survey on Reinforcement Learning for Optimal Decision-Making and Control of Intelligent Vehicles
20
作者 Yixing Lan Xin Xu +3 位作者 Jiahang Liu Xinglong Zhang Yang Lu Long Cheng 《CAAI Transactions on Intelligence Technology》 2025年第6期1593-1615,共23页
Reinforcement learning (RL) has been widely studied as an efficient class of machine learning methods for adaptive optimal control under uncertainties. In recent years, applications of RL to optimised decision-making and motion control of intelligent vehicles have received increasing attention. Because intelligent vehicles operate in complex and dynamic environments, the learning efficiency and generalisation ability of RL-based decision and control algorithms must be improved across different conditions. This survey systematically examines the theoretical foundations, algorithmic advances, and practical challenges of applying RL to intelligent vehicle systems operating in such environments. The major algorithmic frameworks of RL are first introduced, and recent advances in RL-based decision-making and control of intelligent vehicles are reviewed. In addition to self-learning decision and control approaches that use state measurements, developments in deep RL (DRL) methods for end-to-end driving control of intelligent vehicles are summarised. Open problems and directions for further research are also discussed. 展开更多
关键词 adaptive dynamic programming intelligent vehicles learning control optimal decision-making reinforcement learning
在线阅读 下载PDF