Trajectory tracking for nonlinear robotic systems remains a fundamental yet challenging problem in control engineering, particularly when both precision and efficiency must be ensured. Conventional control methods are often effective for stabilization but may not directly optimize long-term performance. To address this limitation, this study develops an integrated framework that combines optimal control principles with reinforcement learning for a single-link robotic manipulator. The proposed scheme adopts an actor–critic structure, where the critic network approximates the value function associated with the Hamilton–Jacobi–Bellman equation, and the actor network generates near-optimal control signals in real time. This dual adaptation enables the controller to refine its policy online without explicit system knowledge. Stability of the closed-loop system is analyzed through Lyapunov theory, ensuring boundedness of the tracking error. Numerical simulations on the single-link manipulator demonstrate that the method achieves accurate trajectory following while maintaining low control effort. The results further show that the actor–critic learning mechanism accelerates convergence of the control policy compared with conventional optimization-based strategies. This work highlights the potential of reinforcement learning integrated with optimal control for robotic manipulators and provides a foundation for future extensions to more complex multi-degree-of-freedom systems. The proposed controller is further validated in a physics-based virtual Gazebo environment, demonstrating stable adaptation and real-time feasibility.
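To make the actor–critic structure in this abstract concrete, the following is a minimal sketch, not the authors' controller: a toy single-link pendulum model, a critic with quadratic value features, and a linear PD-style actor with exploration noise. All dynamics parameters, features, gains, and learning rates are illustrative assumptions.

```python
import numpy as np

# Illustrative actor-critic sketch on a toy single-link manipulator:
#   m*l^2 * th'' + b * th' + m*g*l * sin(th) = u
# Critic: V(e) ~ Wc . [e^2, e*de, de^2]; actor: u = -Wa . [e, de] + noise.
# All parameters, features, and learning rates are invented for this example.

def train_tracking(T=10.0, dt=0.01, seed=0):
    rng = np.random.default_rng(seed)
    m = l = 1.0; g = 9.81; b = 1.0
    th, dth = 0.5, 0.0                      # initial joint angle / velocity
    Wa = np.array([25.0, 8.0])              # stabilizing PD-style actor init
    Wc = np.zeros(3)                        # critic weights
    lr_c, lr_a, gamma = 1e-2, 1e-5, 0.99
    prev = None                             # (phi, psi, cost, noise) last step
    errs = []
    for k in range(int(T / dt)):
        t = k * dt
        thd, dthd = np.sin(t), np.cos(t)    # reference trajectory
        e, de = th - thd, dth - dthd
        phi = np.array([e, de])
        psi = np.array([e * e, e * de, de * de])
        noise = 0.05 * rng.standard_normal()
        u = -Wa @ phi + noise               # actor output + exploration
        cost = e * e + 0.01 * u * u         # quadratic stage cost
        if prev is not None:
            phi_p, psi_p, cost_p, noise_p = prev
            # Temporal-difference error on the discounted cost-to-go.
            delta = cost_p + gamma * (Wc @ psi) - Wc @ psi_p
            Wc += lr_c * delta * psi_p             # critic update
            Wa += lr_a * delta * noise_p * phi_p   # policy-gradient step
        prev = (phi, psi, cost, noise)
        ddth = (u - b * dth - m * g * l * np.sin(th)) / (m * l * l)
        th, dth = th + dt * dth, dth + dt * ddth   # Euler integration
        errs.append(e)
    return np.array(errs), Wa, Wc
```

The stabilizing initialization of the actor keeps the tracking error bounded while both networks adapt online, mirroring the abstract's claim of boundedness under learning; the actual paper additionally proves this via Lyapunov analysis.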
This paper describes a real-time beam tuning method with an improved asynchronous advantage actor–critic (A3C) algorithm for accelerator systems. The operating parameters of devices are usually inconsistent with the predictions of physical designs because of errors in mechanical matching and installation. Therefore, parameter optimization methods such as pointwise scanning, evolutionary algorithms (EAs), and robust conjugate direction search are widely used in beam tuning to compensate for this inconsistency. However, these methods struggle with large numbers of discrete local optima. The A3C algorithm, which has been applied in the automated control field, provides an approach for improving multi-dimensional optimization. The A3C algorithm is introduced and improved for the real-time beam tuning code for accelerators. Experiments in which optimization is achieved using pointwise scanning, the genetic algorithm (a type of EA), and the A3C algorithm are conducted and compared to optimize the currents of four steering magnets and two solenoids in the low-energy beam transport (LEBT) section of the Xi'an Proton Application Facility. Optimal currents are determined when the highest transmission of a radio-frequency quadrupole (RFQ) accelerator downstream of the LEBT is achieved. The optimal work points of the tuned accelerator were obtained with currents of 0 A, 0 A, 0 A, and 0.1 A for the four steering magnets, and 107 A and 96 A for the two solenoids. The highest transmission of the RFQ was 91.2%. Moreover, the reduced optimization time of the A3C algorithm was verified: optimization with the A3C algorithm consumed 42% and 78% less time than pointwise scanning with random initialization and pre-trained initialization of weights, respectively.
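As a rough illustration of the advantage actor–critic idea behind such a tuning loop (not the paper's improved A3C, which runs asynchronous neural-network workers against the real machine), the sketch below tunes two solenoid-like currents against an invented transmission model using a Gaussian policy and a scalar baseline, where advantage = reward − baseline. The peak location, ripple term, and all hyperparameters are assumptions made for the example.

```python
import numpy as np

# Toy "beam tuning": maximize a synthetic transmission T(x) over two
# solenoid-like currents. Gaussian policy over currents; a running scalar
# baseline plays the critic's role. Synchronous, single-worker
# simplification of A3C; the transmission model is invented.

def transmission(x):
    # Smooth peak near x = (107, 96) plus ripples creating local optima.
    peak = np.exp(-np.sum((x - np.array([107.0, 96.0])) ** 2) / 200.0)
    ripple = 0.05 * np.cos(x[0]) * np.cos(x[1])
    return 0.912 * peak + ripple

def tune(steps=2000, seed=1):
    rng = np.random.default_rng(seed)
    mu = np.array([100.0, 100.0])   # policy mean (currents, in A)
    sigma = 3.0                     # fixed exploration width
    baseline = 0.0                  # critic: running value estimate
    lr_mu, lr_b = 0.5, 0.05
    for _ in range(steps):
        x = mu + sigma * rng.standard_normal(2)   # sample a setting
        r = transmission(x)                       # measure "transmission"
        adv = r - baseline                        # advantage estimate
        baseline += lr_b * adv                    # critic update
        mu += lr_mu * adv * (x - mu) / sigma**2   # policy-gradient step
    return mu, transmission(mu)
```

Because the advantage weighting averages out the small ripples while following the broad peak, this kind of learner can cope with many shallow local optima that defeat pointwise scanning, which is the property the abstract highlights.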
A new static task scheduling algorithm named edge-zeroing based on dynamic critical paths is proposed. The main ideas of the algorithm are as follows: first, assume that all tasks are in different clusters; second, select one of the critical paths of the partially clustered directed acyclic graph; third, try to zero one of the graph's communication edges; fourth, repeat the above three steps until all edges are zeroed; finally, check the generated clusters to see whether some of them can be further merged without increasing the parallel time. Comparisons with previous algorithms show that edge-zeroing based on dynamic critical paths has not only low complexity but also performance that is on average comparable or even superior to that of much higher-complexity heuristic algorithms.
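The steps above can be sketched as follows. This simplified version greedily tries to zero edges in decreasing order of communication cost and keeps a merge only if the parallel time does not increase, rather than recomputing the dynamic critical path at every iteration as the proposed algorithm does; the graph, weights, and helper names are illustrative, and the sketch ignores serialization of independent same-cluster tasks.

```python
from collections import defaultdict

# Toy task DAG: node weights are computation costs, edge weights are
# communication costs. "Zeroing" an edge merges its endpoints' clusters,
# which removes the communication cost between them.

def topo_order(nodes, edges):
    indeg = {v: 0 for v in nodes}
    succ = defaultdict(list)
    for u, v, _ in edges:
        succ[u].append(v)
        indeg[v] += 1
    stack = [v for v in nodes if indeg[v] == 0]
    order = []
    while stack:
        v = stack.pop()
        order.append(v)
        for w in succ[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                stack.append(w)
    return order

def parallel_time(nodes, edges, cluster):
    """Critical-path length when intra-cluster edges cost zero."""
    finish = {}
    for v in topo_order(nodes, edges):
        ready = 0.0
        for u, w, c in edges:
            if w == v:
                comm = 0 if cluster[u] == cluster[v] else c
                ready = max(ready, finish[u] + comm)
        finish[v] = ready + nodes[v]
    return max(finish.values())

def edge_zeroing(nodes, edges):
    """Start with singleton clusters; try zeroing edges (heaviest first)
    and keep each merge only if the parallel time does not increase."""
    cluster = {v: v for v in nodes}
    for u, v, _ in sorted(edges, key=lambda e: -e[2]):
        if cluster[u] == cluster[v]:
            continue
        before = parallel_time(nodes, edges, cluster)
        trial = {x: cluster[u] if cl == cluster[v] else cl
                 for x, cl in cluster.items()}
        if parallel_time(nodes, edges, trial) <= before:
            cluster = trial
    return cluster
```

On a three-task chain with a heavy 5-unit communication edge, zeroing that edge first collapses the schedule from 13 to 8 time units, and a second merge brings it to the sequential lower bound of 6, showing why targeting costly edges on the critical path pays off.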
Funding: Supported in part by the National Science and Technology Council under Grant NSTC 114-2221-E-027-104.