In this paper, a distributed adaptive dynamic programming (ADP) framework based on value iteration is proposed for multi-player differential games. In the game setting, players have no access to the information of others' system parameters or control laws. Each player adopts an on-policy value iteration algorithm as the basic learning framework. To deal with the incomplete information structure, players collect a period of system trajectory data to compensate for the lack of information. The policy updating step is implemented by a nonlinear optimization problem aiming to search for the proximal admissible policy. Theoretical analysis shows that by adopting proximal policy searching rules, the approximated policies can converge to a neighborhood of equilibrium policies. The efficacy of our method is illustrated by three examples, which also demonstrate that the proposed method can accelerate the learning process compared with the centralized learning framework.
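As a point of reference for the value-iteration backbone this framework builds on, here is a minimal single-agent sketch on a tabular MDP. The paper's distributed, continuous-time game setting is far more involved; the transition tensor `P` and reward matrix `R` below are made-up toy data, not anything from the paper.

```python
import numpy as np

# Hypothetical toy MDP: 3 states, 2 actions; P[a, s, s'] transition probabilities, R[s, a] rewards.
P = np.array([[[0.9, 0.1, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]],
              [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.0, 0.0, 1.0]]])
R = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
gamma = 0.9

def value_iteration(P, R, gamma, tol=1e-8):
    """Classic value iteration: V <- max_a [R(s,a) + gamma * sum_s' P(s'|s,a) V(s')]."""
    V = np.zeros(R.shape[0])
    while True:
        Q = R + gamma * np.einsum('ast,t->sa', P, V)   # Q[s, a]
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

V, policy = value_iteration(P, R, gamma)
```

The fixed point of this backup is the optimal value function; the paper replaces the known model with trajectory data collected by each player.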
Learning-based methods have become mainstream for solving residential energy scheduling problems. In order to improve the learning efficiency of existing methods and increase the utilization of renewable energy, we propose the Dyna action-dependent heuristic dynamic programming (Dyna-ADHDP) method, which incorporates the ideas of learning and planning from the Dyna framework in action-dependent heuristic dynamic programming. This method defines a continuous action space for precise control of an energy storage system and allows online optimization of algorithm performance during the real-time operation of the residential energy model. Meanwhile, the target network is introduced during the training process to make the training smoother and more efficient. We conducted experimental comparisons with the benchmark method using simulated and real data to verify its applicability and performance. The results confirm the method's excellent performance and generalization capabilities, as well as its excellence in increasing renewable energy utilization and extending equipment life.
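The learning-plus-planning idea borrowed from the Dyna framework can be seen in its simplest tabular form, classic Dyna-Q: each real transition both updates the value estimates directly and trains a model that generates extra simulated updates. This is only a stand-in sketch on a toy chain environment; Dyna-ADHDP itself works in a continuous action space with neural approximators and a target network.

```python
import random
import numpy as np

# Toy deterministic 5-state chain; action 0 = left, 1 = right; reward 1 at the right end.
N_STATES, N_ACTIONS, GOAL = 5, 2, 4

def step(s, a):
    s2 = min(s + 1, GOAL) if a == 1 else max(s - 1, 0)
    return s2, float(s2 == GOAL)

def dyna_q(episodes=200, planning_steps=10, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = np.zeros((N_STATES, N_ACTIONS))
    model = {}                          # (s, a) -> (s', r): learned deterministic model
    for _ in range(episodes):
        s = 0
        while s != GOAL:
            a = rng.randrange(N_ACTIONS) if rng.random() < eps else int(Q[s].argmax())
            s2, r = step(s, a)
            Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])   # direct RL step
            model[(s, a)] = (s2, r)
            for _ in range(planning_steps):                          # planning from the model
                ps, pa = rng.choice(list(model))
                ps2, pr = model[(ps, pa)]
                Q[ps, pa] += alpha * (pr + gamma * Q[ps2].max() - Q[ps, pa])
            s = s2
    return Q

Q = dyna_q()
```

The planning loop replays model-generated transitions, so far fewer real interactions are needed, which is the efficiency argument the paper makes for residential scheduling.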
In many research disciplines, hypothesis tests are applied to evaluate whether findings are statistically significant or could be explained by chance. The Wilcoxon-Mann-Whitney (WMW) test is among the most popular hypothesis tests in medicine and life science to analyze whether two groups of samples are equally distributed. This nonparametric statistical homogeneity test is commonly applied in molecular diagnosis. Generally, the solution of the WMW test takes a high combinatorial effort for large sample cohorts containing a significant number of ties. Hence, the P value is frequently approximated by a normal distribution. We developed EDISON-WMW, a new approach to calculate the exact permutation of the two-tailed unpaired WMW test without any corrections required and allowing for ties. The method relies on dynamic programming to solve the combinatorial problem of the WMW test efficiently. Beyond a straightforward implementation of the algorithm, we presented different optimization strategies and developed a parallel solution. Using our program, the exact P value for large cohorts containing more than 1000 samples with ties can be calculated within minutes. We demonstrate the performance of this novel approach on randomly generated data, benchmark it against 13 other commonly applied approaches, and moreover evaluate molecular biomarkers for lung carcinoma and chronic obstructive pulmonary disease (COPD). We found that approximated P values were generally higher than the exact solution provided by EDISON-WMW. Importantly, the algorithm can also be applied to high-throughput omics datasets, where hundreds or thousands of features are included. To provide easy access to the multi-threaded version of EDISON-WMW, a web-based solution of our algorithm is freely available at http://www.ccb.uni-saarland.de/software/wtest/.
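The dynamic-programming idea behind exact rank-sum P values can be illustrated with a minimal sketch for the tie-free case: count, for every possible rank sum, how many subsets of ranks produce it, then read the two-tailed tail mass off that distribution. EDISON-WMW additionally handles ties and is heavily optimized; the function names here are hypothetical.

```python
from math import comb

def ranksum_distribution(n1, n):
    """count[s] = number of ways to pick n1 ranks from 1..n with sum s (0/1-knapsack DP)."""
    max_sum = n * (n + 1) // 2
    count = [[0] * (max_sum + 1) for _ in range(n1 + 1)]
    count[0][0] = 1
    for rank in range(1, n + 1):
        for k in range(min(rank, n1), 0, -1):        # descending: each rank used at most once
            for s in range(max_sum, rank - 1, -1):
                count[k][s] += count[k - 1][s - rank]
    return count[n1]

def exact_wmw_pvalue(x, y):
    """Exact two-tailed p-value of the rank-sum test; assumes no ties across x and y."""
    n1, n = len(x), len(x) + len(y)
    ranks = {v: i + 1 for i, v in enumerate(sorted(x + y))}
    w = sum(ranks[v] for v in x)                     # observed rank sum of group x
    dist = ranksum_distribution(n1, n)
    total = comb(n, n1)
    mean = n1 * (n + 1) / 2                          # distribution is symmetric about this
    dev = abs(w - mean)
    return sum(c for s, c in enumerate(dist) if abs(s - mean) >= dev) / total
```

For example, `exact_wmw_pvalue([10, 20], [1, 2, 3])` enumerates all C(5,2) = 10 rank assignments and returns 0.2. The DP costs O(n1 * n * max_sum) instead of enumerating C(n, n1) subsets.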
Reinforcement learning (RL) has roots in dynamic programming, and it is called adaptive/approximate dynamic programming (ADP) within the control community. This paper reviews recent developments in ADP along with RL and its applications to various advanced control fields. First, the background of the development of ADP is described, emphasizing the significance of regulation and tracking control problems. Some effective offline and online algorithms for ADP/adaptive critic control are displayed, where the main results towards discrete-time systems and continuous-time systems are surveyed, respectively. Then, the research progress on adaptive critic control based on the event-triggered framework and under uncertain environments is discussed, where event-based design, robust stabilization, and game design are reviewed. Moreover, the extensions of ADP for addressing control problems under complex environments attract enormous attention. The ADP architecture is revisited under the perspective of data-driven and RL frameworks, showing how they promote the ADP formulation significantly. Finally, several typical control applications with respect to RL and ADP are summarized, particularly in the fields of wastewater treatment processes and power systems, followed by some general prospects for future research. Overall, this comprehensive survey on ADP and RL for advanced control applications demonstrates their remarkable potential within the artificial intelligence era; they also play a vital role in promoting environmental protection and industrial intelligence.
In this paper, a novel adaptive Fault-Tolerant Control (FTC) strategy is proposed for non-minimum phase Hypersonic Vehicles (HSVs) that are affected by actuator faults and parameter uncertainties. The strategy is based on the output redefinition method and Adaptive Dynamic Programming (ADP). The intelligent FTC scheme consists of two main parts: a basic fault-tolerant and stable controller and an ADP-based supplementary controller. In the basic FTC part, an output redefinition approach is designed to make the zero dynamics stable with respect to the new output. Then, the Ideal Internal Dynamics (IID) are obtained using an optimal bounded inversion approach, and a tracking controller is designed for the new output to realize output tracking of the non-minimum phase HSV system. For the ADP-based compensation control part, an Action-Dependent Heuristic Dynamic Programming (ADHDP) scheme adopting an actor-critic learning structure is utilized to further optimize the tracking performance of the HSV control system. Finally, simulation results are provided to verify the effectiveness and efficiency of the proposed FTC algorithm.
In order to address the output feedback issue for linear discrete-time systems, this work proposes a novel adaptive dynamic programming (ADP) technique based on the internal model principle (IMP). The proposed method, termed IMP-ADP, does not require complete state feedback, but merely the measurement of input and output data. More specifically, based on the IMP, the output control problem can first be converted into a stabilization problem. We then design an observer to reproduce the full state of the system by measuring the inputs and outputs. Moreover, this technique includes both a policy iteration algorithm and a value iteration algorithm to determine the optimal feedback gain without using a dynamic system model. Importantly, with this approach one does not need to solve the regulator equation. Finally, the control method was tested on a grid-connected LCL inverter system to demonstrate that the proposed method provides the desired performance in terms of both tracking and disturbance rejection.
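For intuition, the value-iteration half of such a scheme reduces, in the idealized linear-quadratic case with a known model, to iterating the Riccati recursion until the feedback gain converges. This is only a hedged stand-in: IMP-ADP itself is data-driven and model-free, and the double-integrator matrices below are illustrative, not from the paper.

```python
import numpy as np

# Illustrative discrete-time double integrator with quadratic costs.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Qc = np.eye(2)
Rc = np.array([[1.0]])

def lqr_value_iteration(A, B, Q, R, iters=500):
    """Value iteration on the Riccati recursion:
    P_{k+1} = Q + A'P_k A - A'P_k B (R + B'P_k B)^{-1} B'P_k A."""
    P = np.zeros_like(Q)
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # feedback gain, u = -K x
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
    return P, K

P, K = lqr_value_iteration(A, B, Qc, Rc)
```

The gain `K` stabilizes the closed loop `A - B K`; data-driven variants like IMP-ADP estimate the same gain from measured inputs and outputs rather than from `(A, B)`.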
The use of dynamic programming (DP) algorithms to learn Bayesian network structures is limited by their high space complexity and difficulty in learning the structure of large-scale networks. Therefore, this study proposes a DP algorithm based on node block sequence constraints. The proposed algorithm constrains the traversal process of the parent graph by using the M-sequence matrix, and considerably reduces time consumption and space complexity by pruning the traversal process of the order graph using the node block sequence. Experimental results show that, compared with existing DP algorithms, the proposed algorithm can obtain learning results more efficiently with less than 1% loss of accuracy, and can be used for learning larger-scale networks.
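The parent-graph/order-graph machinery that such DP algorithms prune can be seen in miniature below: an unconstrained exact-structure DP over variable subsets, where `best[S]` is the best score of a network over the variables in `S` and each variable in turn is tried as the last one in the ordering. The local scores are made-up numbers (in practice they come from BIC/BDeu on data), and the paper's contribution, pruning these traversals with node block sequences, is deliberately omitted.

```python
from itertools import combinations

# Hypothetical local scores score[v][parent_set] for 3 variables (higher is better).
V = [0, 1, 2]
score = {
    0: {frozenset(): -10.0, frozenset({1}): -8.0, frozenset({2}): -9.5, frozenset({1, 2}): -8.5},
    1: {frozenset(): -7.0, frozenset({0}): -7.5, frozenset({2}): -6.0, frozenset({0, 2}): -6.5},
    2: {frozenset(): -9.0, frozenset({0}): -8.8, frozenset({1}): -9.2, frozenset({0, 1}): -8.9},
}

def best_parents(v, allowed):
    """Best parent set for v drawn from 'allowed' (the parent-graph step)."""
    cands = [frozenset(c) for r in range(len(allowed) + 1) for c in combinations(allowed, r)]
    return max(cands, key=lambda ps: score[v][ps])

def exact_structure(V):
    """DP over the order graph: best[S] = best-scoring network over variable set S."""
    best = {frozenset(): (0.0, [])}
    for size in range(1, len(V) + 1):
        for S in map(frozenset, combinations(V, size)):
            options = []
            for v in S:                       # try v as the last variable in the order
                prev = S - {v}
                ps = best_parents(v, prev)
                options.append((best[prev][0] + score[v][ps], best[prev][1] + [(v, ps)]))
            best[S] = max(options, key=lambda t: t[0])
    return best[frozenset(V)]

total, network = exact_structure(V)
```

Both graphs grow exponentially in the number of variables, which is exactly the space/time bottleneck the node-block-sequence constraints attack.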
This paper presents an optimized shared control algorithm for human–AI interaction, implemented through a digital twin framework in which the physical system and human operator act as the real agent while an AI-driven digital system functions as the virtual agent. In this digital twin architecture, the real agent acquires an optimal control strategy through observed actions, while the AI virtual agent mirrors the real agent to establish a digital replica system and corresponding control policy. Both the real and virtual optimal controllers are approximated using reinforcement learning (RL) techniques. Specifically, critic neural networks (NNs) are employed to learn the virtual and real optimal value functions, while actor NNs are trained to derive their respective optimal controllers. A novel shared mechanism is introduced to integrate both virtual and real value functions into a unified learning framework, yielding an optimal shared controller. This controller adaptively adjusts the confidence ratio between virtual and real agents, enhancing the system's efficiency and flexibility in handling complex control tasks. The stability of the closed-loop system is rigorously analyzed using the Lyapunov method. The effectiveness of the proposed AI–human interactive system is validated through two numerical examples: a representative nonlinear system and an unmanned aerial vehicle (UAV) control system.
A mixed adaptive dynamic programming (ADP) scheme based on zero-sum game theory is developed to address optimal control problems of autonomous underwater vehicle (AUV) systems subject to disturbances and safety constraints. By combining prior dynamic knowledge and actual sampled data, the proposed approach effectively mitigates the defect caused by an inaccurate dynamic model and significantly improves the training speed of the ADP algorithm. Initially, the dataset is enriched with sufficient reference data collected from a nominal model without considering modelling bias. Also, the control object interacts with the real environment and continuously gathers adequate sampled data into the dataset. To comprehensively leverage the advantages of model-based and model-free methods during training, an adaptive tuning factor is introduced based on the dataset, which possesses model-referenced information and conforms to the distribution of the real-world environment; this factor balances the influence of the model-based control law and the data-driven policy gradient on the direction of policy improvement. As a result, the proposed approach accelerates learning compared to data-driven methods, while also enhancing tracking performance in comparison to model-based control methods. Moreover, the optimal control problem under disturbances is formulated as a zero-sum game, and an actor-critic-disturbance framework is introduced to approximate the optimal control input, cost function, and disturbance policy, respectively. Furthermore, the convergence property of the proposed algorithm based on the value iteration method is analysed. Finally, an example of AUV path following based on improved line-of-sight guidance is presented to demonstrate the effectiveness of the proposed method.
Integral reinforcement learning (IRL) is an effective tool for solving optimal control problems of nonlinear systems, and it has been widely utilized in optimal controller design for discrete-time nonlinear systems. However, solving the Hamilton-Jacobi-Bellman (HJB) equations for nonlinear systems requires precise and complicated dynamics. Moreover, the research and application of IRL in continuous-time (CT) systems must be further improved. To develop the IRL of a CT nonlinear system, a data-based adaptive neural dynamic programming (ANDP) method is proposed to investigate the optimal control problem of uncertain CT multi-input systems, such that knowledge of the dynamics in the HJB equation is unnecessary. First, the multi-input model is approximated using a neural network (NN), which can be utilized to design an integral reinforcement signal. Subsequently, two critic networks and one action network are constructed based on the integral reinforcement signal. A nonzero-sum Nash equilibrium can be reached by learning the optimal strategies of the multi-input model. In this scheme, the NN weights are constantly updated using an adaptive algorithm. The weight convergence and system stability are analyzed in detail. The optimal control problem of a multi-input nonlinear CT system is effectively solved using the ANDP scheme, and the results are verified by a simulation study.
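The integral reinforcement signal at the heart of IRL can be shown on the simplest possible example: a scalar linear plant with a fixed policy, where the interval Bellman identity V(x(t)) = ∫ r dτ + V(x(t+T)) over a window [t, t+T] is solved for the single value-function coefficient by least squares, with no model used in the identification step. This is a hedged sketch with made-up constants, not the paper's multi-input neural scheme.

```python
import numpy as np

# Hypothetical scalar plant xdot = a*x + b*u with a fixed policy u = -k*x;
# cost rate r(x,u) = q*x^2 + ru*u^2, value ansatz V(x) = p*x^2.
a, b, k, q, ru = 1.0, 1.0, 3.0, 1.0, 1.0

def simulate(x0, T, dt=1e-4):
    """Euler-integrate the closed loop and the running cost over a window of length T."""
    x, cost = x0, 0.0
    for _ in range(int(T / dt)):
        u = -k * x
        cost += (q * x * x + ru * u * u) * dt
        x += (a * x + b * u) * dt
    return x, cost

# IRL Bellman identity over a window: p*x(t)^2 = integral_cost + p*x(t+T)^2,
# i.e. p * (x(t)^2 - x(t+T)^2) = integral_cost  ->  least squares for p.
rows, rhs = [], []
for x0 in [0.5, 1.0, 1.5, 2.0]:
    xT, c = simulate(x0, T=0.2)
    rows.append(x0 ** 2 - xT ** 2)
    rhs.append(c)
p_irl = float(np.dot(rows, rhs) / np.dot(rows, rows))
p_true = (q + ru * k * k) / (2.0 * (b * k - a))   # analytic value for this policy
```

Only measured trajectory segments and accumulated cost enter the regression, which is exactly why the integral form avoids needing the drift dynamics in the HJB equation.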
This paper develops a robust control approach for nonaffine nonlinear continuous systems with input constraints and unknown uncertainties. Firstly, an affine augmented system (AAS) is constructed within a pre-compensation technique for converting the original nonaffine dynamics into affine dynamics. Secondly, a stability criterion linking the original nonaffine system and the auxiliary system is derived, demonstrating that the optimal policies obtained from the auxiliary system can achieve robust control of the nonaffine system. Thirdly, an online adaptive dynamic programming (ADP) algorithm is designed for approximating the optimal solution of the Hamilton–Jacobi–Bellman (HJB) equation. Moreover, the gradient descent approach and projection approach are employed for updating the actor-critic neural network (NN) weights, and the algorithm's convergence is proven. Then, the uniformly ultimately bounded stability of the state is guaranteed. Finally, simulation examples are offered to validate the effectiveness of the presented approach.
This paper studies motor joint control of a 4-degree-of-freedom (DoF) robotic manipulator using a learning-based Adaptive Dynamic Programming (ADP) approach. The manipulator's dynamics are modelled as an open-loop 4-link serial kinematic chain. Decentralised optimal controllers are designed for each link using the ADP approach, based on a set of cost matrices and data collected from exploration trajectories. The proposed control strategy employs an off-line, off-policy iterative approach to derive four optimal control policies, one for each joint, under exploration strategies. The objective of the controller is to control the position of each joint. Simulation and experimental results show that four independent optimal controllers are found, each under similar exploration strategies, and that the proposed ADP approach successfully yields optimal linear control policies despite significant dynamic nonlinearities. The experimental results conducted on the Quanser QArm robotic platform demonstrate the effectiveness of the proposed ADP controllers in handling such nonlinearities, including actuation limitations, output saturation, and filter delays.
This paper highlights the utilization of parallel control and adaptive dynamic programming (ADP) for event-triggered robust parallel optimal consensus control (ETRPOC) of uncertain nonlinear continuous-time multiagent systems (MASs). First, the parallel control system, which consists of a virtual control variable and a specific auxiliary variable obtained from the coupled Hamiltonian, allows general systems to be transformed into affine systems. Notably, the introduction of the parallel control technique provides an unprecedented perspective on eliminating the negative effects of disturbance. Then, an event-triggered mechanism is adopted to save communication resources while ensuring the system's stability. The coupled Hamilton-Jacobi (HJ) equation's solution is approximated using a critic neural network (NN), whose weights are updated in response to events. Furthermore, theoretical analysis reveals that the weight estimation error is uniformly ultimately bounded (UUB). Finally, numerical simulations demonstrate the effectiveness of the developed ETRPOC method.
Optimal impulse control and impulse games provide cutting-edge frameworks for modeling systems where control actions occur at discrete time points and for optimizing objectives under discontinuous interventions. This review synthesizes the theoretical advancements, computational approaches, emerging challenges, and possible research directions in the field. Firstly, we briefly review the fundamental theory of continuous-time optimal control, including Pontryagin's maximum principle (PMP) and the dynamic programming principle (DPP). Secondly, we present the foundational results in optimal impulse control, including necessary conditions and sufficient conditions. Thirdly, we systematize impulse game methodologies, from Nash equilibrium existence theory to the connection between Nash equilibria and system stability. Fourthly, we summarize numerical algorithms, including intelligent computation approaches. Finally, we examine the new trends and challenges in theory and applications, as well as computational considerations.
Complex multi-area collaborative coverage path planning in dynamic environments poses a significant challenge for multiple fixed-wing UAVs (multi-UAV). This study establishes a comprehensive framework that incorporates UAV capabilities, terrain, complex areas, and mission dynamics. A novel dynamic collaborative path planning algorithm is introduced, designed to ensure complete coverage of designated areas. This algorithm meticulously optimizes the operation, entry, and transition paths for each UAV, while also establishing evaluation metrics to refine coverage sequences for each area. Additionally, a three-dimensional path is computed utilizing an altitude descent method, effectively integrating two-dimensional coverage paths with altitude constraints. The efficacy of the proposed approach is validated through digital simulations and mixed-reality semi-physical experiments across a variety of dynamic scenarios, including both single-area and multi-area coverage by multi-UAV. Results show that the coverage paths generated by this method significantly reduce both computation time and path length, providing a reliable solution for dynamic multi-UAV mission planning in semi-physical environments.
This paper investigates an international optimal investment-consumption problem under a random time horizon. The investor may allocate wealth between a domestic bond and an international real project with production output, whose price may exhibit discontinuities. The model incorporates the effects of taxation and exchange rate dynamics, where the exchange rate follows a stochastic differential equation with jump-diffusion. The investor's objective is to maximize the utility of consumption and terminal wealth over an uncertain investment horizon. It is worth noting that, under our framework, the exit time is not assumed to be a stopping time. In particular, for the case of constant relative risk aversion (CRRA), we derive the optimal investment and consumption strategies by applying the separation method to solve the associated Hamilton-Jacobi-Bellman (HJB) equation. Moreover, several numerical examples are provided to illustrate the practical applicability of the proposed results.
The residential energy scheduling of solar energy is an important research area of the smart grid. On the demand side, factors such as household loads, storage batteries, the outside public utility grid, and renewable energy resources are combined together as a nonlinear, time-varying, indefinite, and complex system, which is difficult to manage or optimize. Many nations have already applied residential real-time pricing to balance the burden on their grids. In order to enhance the electricity efficiency of the residential microgrid, this paper presents an action-dependent heuristic dynamic programming (ADHDP) method to solve the residential energy scheduling problem. The highlights of this paper are listed below. First, weather-type classification is adopted to establish three types of programming models based on the features of solar energy. In addition, the priorities of different energy resources are set to reduce the loss of electrical energy transmissions. Second, three ADHDP-based neural networks, which can update themselves during applications, are designed to manage the flows of electricity. Third, simulation results show that the proposed scheduling method effectively reduces the total electricity cost and improves the load balancing process. The comparison with a particle swarm optimization algorithm further proves that the present method has a promising effect on energy management to save cost.
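The underlying scheduling problem can be illustrated with a deliberately tiny dynamic program over battery state of charge: buy and store energy when the price is low, discharge when it is high. The prices, load, and 1 kWh step are made-up; the paper's ADHDP method instead learns a neural policy online under real-time pricing and weather-dependent solar supply.

```python
from functools import lru_cache

# Toy battery-scheduling DP (illustrative data, not the paper's ADHDP controller).
prices = [0.10, 0.05, 0.30, 0.40]   # grid price per kWh for each hour (made-up)
load = 1.0                          # household demand per hour, kWh
cap, step = 2, 1                    # battery capacity and charge/discharge step, kWh

@lru_cache(maxsize=None)
def schedule(hour, soc):
    """Minimum grid cost from 'hour' onward given battery state of charge 'soc'."""
    if hour == len(prices):
        return 0.0, ()
    best = None
    for delta in (-step, 0, step):       # discharge, idle, charge
        soc2 = soc + delta
        grid = load + delta              # grid purchase = demand + battery flow
        if not 0 <= soc2 <= cap or grid < 0:
            continue
        tail_cost, tail_plan = schedule(hour + 1, soc2)
        cost = prices[hour] * grid + tail_cost
        if best is None or cost < best[0]:
            best = (cost, (delta,) + tail_plan)
    return best

cost, plan = schedule(0, 0)   # optimal: charge in the two cheap hours, then discharge
```

With these numbers the optimal plan buys 2 kWh extra during the cheap hours and covers the expensive hours entirely from the battery, cutting the cost from 0.85 to 0.30.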
This paper studies the problem of optimal parallel tracking control for continuous-time general nonlinear systems. Unlike existing optimal state feedback control, the control input of the optimal parallel control is introduced into the feedback system. However, due to this introduction of the control input into the feedback system, optimal state feedback control methods cannot be applied directly. To address this problem, an augmented system and an augmented performance index function are proposed first. Thus, the general nonlinear system is transformed into an affine nonlinear system. The difference between the optimal parallel control and the optimal state feedback control is analyzed theoretically. It is proven that the optimal parallel control with the augmented performance index function can be seen as the suboptimal state feedback control with the traditional performance index function. Moreover, an adaptive dynamic programming (ADP) technique is utilized to implement the optimal parallel tracking control, using a critic neural network (NN) to approximate the value function online. The stability analysis of the closed-loop system is performed using Lyapunov theory, and the tracking error and NN weight errors are uniformly ultimately bounded (UUB). Also, the optimal parallel controller guarantees the continuity of the control input under the circumstance that there are finite jump discontinuities in the reference signals. Finally, the effectiveness of the developed optimal parallel control method is verified in two cases.
Funding (multi-player differential games paper): supported by the Aeronautical Science Foundation of China (20220001057001) and an Open Project of the National Key Laboratory of Air-based Information Perception and Fusion (202437).
Funding (Dyna-ADHDP paper): supported in part by the National Key Research and Development Program of China (2024YFB4709100, 2021YFE0206100), the National Natural Science Foundation of China (62073321), the National Defense Basic Scientific Research Program (JCKY2019203C029), and the Science and Technology Development Fund, Macao SAR, China (0015/2020/AMJ).
Funding (ADP/RL survey): supported in part by the National Natural Science Foundation of China (62222301, 62073085, 62073158, 61890930-5, 62021003), the National Key Research and Development Program of China (2021ZD0112302, 2021ZD0112301, 2018YFC1900800-5), and the Beijing Natural Science Foundation (JQ19013).
Funding: Supported in part by the Science Center Program of the National Natural Science Foundation of China (62373189, 62188101, 62020106003) and the Research Fund of the State Key Laboratory of Mechanics and Control for Aerospace Structures, China.
Abstract: In this paper, a novel adaptive Fault-Tolerant Control (FTC) strategy is proposed for non-minimum phase Hypersonic Vehicles (HSVs) that are affected by actuator faults and parameter uncertainties. The strategy is based on the output redefinition method and Adaptive Dynamic Programming (ADP). The intelligent FTC scheme consists of two main parts: a basic fault-tolerant and stable controller and an ADP-based supplementary controller. In the basic FTC part, an output redefinition approach is designed to make the zero-dynamics stable with respect to the new output. Then, the Ideal Internal Dynamic (IID) is obtained using an optimal bounded inversion approach, and a tracking controller is designed for the new output to realize output tracking of the non-minimum phase HSV system. For the ADP-based compensation control part, Action-Dependent Heuristic Dynamic Programming (ADHDP) adopting an actor-critic learning structure is utilized to further optimize the tracking performance of the HSV control system. Finally, simulation results are provided to verify the effectiveness and efficiency of the proposed FTC algorithm.
Funding: Supported by the National Science Fund for Distinguished Young Scholars (62225303), the Fundamental Research Funds for the Central Universities (buctrc202201), the China Scholarship Council, and the High Performance Computing Platform, College of Information Science and Technology, Beijing University of Chemical Technology.
Abstract: To address the output feedback issue for linear discrete-time systems, this work proposes a novel adaptive dynamic programming (ADP) technique based on the internal model principle (IMP). The proposed method, termed IMP-ADP, does not require complete state feedback; it requires only measured input and output data. More specifically, based on the IMP, the output control problem can first be converted into a stabilization problem. We then design an observer to reproduce the full state of the system by measuring the inputs and outputs. Moreover, this technique includes both a policy iteration algorithm and a value iteration algorithm to determine the optimal feedback gain without using a dynamic system model. Importantly, with this concept one does not need to solve the regulator equation. Finally, the control method was tested on a grid-connected LCL inverter system to demonstrate that it provides the desired performance in terms of both tracking and disturbance rejection.
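As background for the value iteration half of such schemes, the model-based recursion that data-driven VI approximates is the discrete-time Riccati iteration. The sketch below shows the fixed point in closed form; the plant is an illustrative discretized double integrator of our own choosing, not the paper's LCL inverter, and the paper's method reaches the same gain from input/output data without the model.

```python
import numpy as np

def lqr_value_iteration(A, B, Q, R, iters=500):
    """Discrete-time Riccati value iteration: the model-based fixed point
    that data-driven VI schemes approximate from measurements alone."""
    P = np.zeros_like(A, dtype=float)
    for _ in range(iters):
        G = R + B.T @ P @ B
        P = Q + A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(G, B.T @ P @ A)
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    return P, K

# illustrative plant: x_{k+1} = A x_k + B u_k (double integrator, step 0.1)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
P, K = lqr_value_iteration(A, B, np.eye(2), np.eye(1))
```

The resulting closed-loop matrix `A - B @ K` is Schur stable, which is what the learned feedback gain must achieve.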
Funding: Supported by the Shaanxi Science Fund for Distinguished Young Scholars (2024JC-JCQN-57), the Xi'an Science and Technology Plan Project (2023JH-QCYJQ-0086), the Scientific Research Program Funded by the Education Department of Shaanxi Provincial Government (P23JP071), the Engineering Technology Research Center of Shaanxi Province for Intelligent Testing and Reliability Evaluation of Electronic Equipments (2023-ZC-GCZX-0047), and the 2022 Shaanxi University Youth Innovation Team Project.
Abstract: The use of dynamic programming (DP) algorithms to learn Bayesian network structures is limited by their high space complexity and difficulty in learning the structure of large-scale networks. Therefore, this study proposes a DP algorithm based on node block sequence constraints. The proposed algorithm constrains the traversal of the parent graph by using the M-sequence matrix and considerably reduces time consumption and space complexity by pruning the traversal of the order graph using the node block sequence. Experimental results show that, compared with existing DP algorithms, the proposed algorithm obtains learning results more efficiently with less than 1% loss of accuracy and can be used to learn larger-scale networks.
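The parent-graph/order-graph DP that the paper prunes can be illustrated on a toy instance: for every node, find the best parent set within each candidate subset (the parent graph), then recover the best total score by repeatedly choosing a sink over subsets of nodes (the order graph). This is a Silander-Myllymäki-style sketch with no pruning; `toy` is a made-up stand-in for a real decomposable score such as BIC or BDeu.

```python
def exact_bn_score(n, local_score):
    """Exact DP over node subsets: bps[v][mask] is the best local score of
    node v with parents drawn from mask; dp[mask] is the best total score
    of a network over the nodes in mask, built by choosing sinks."""
    bps = [[float("-inf")] * (1 << n) for _ in range(n)]
    for v in range(n):
        for mask in range(1 << n):
            if mask & (1 << v):
                continue  # a node cannot be its own parent
            parents = tuple(u for u in range(n) if mask & (1 << u))
            best = local_score(v, parents)
            rest = mask
            while rest:                  # or the best over any strict subset
                low = rest & -rest
                best = max(best, bps[v][mask ^ low])
                rest ^= low
            bps[v][mask] = best
    dp = [float("-inf")] * (1 << n)
    dp[0] = 0.0
    for mask in range(1, 1 << n):
        rest = mask
        while rest:                      # try every node in mask as the sink
            low = rest & -rest
            v = low.bit_length() - 1
            dp[mask] = max(dp[mask], dp[mask ^ low] + bps[v][mask ^ low])
            rest ^= low
    return dp[(1 << n) - 1]

# made-up decomposable score: reward the chain 0 -> 1 -> 2, penalize parents
toy = lambda v, ps: 1.0 * ((v - 1) in ps) - 0.1 * len(ps)
best = exact_bn_score(3, toy)
```

The exponential `2^n` tables here are precisely the space bottleneck that motivates the node-block-sequence pruning in the paper.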
Funding: Supported by the China Postdoctoral Science Foundation (2024M762602), the National Natural Science Foundation of China (62306232), and the Natural Science Basic Research Program of Shaanxi Province (2023-JC-QN-0662).
Abstract: This paper presents an optimized shared control algorithm for human-AI interaction, implemented through a digital twin framework in which the physical system and human operator act as the real agent while an AI-driven digital system functions as the virtual agent. In this digital twin architecture, the real agent acquires an optimal control strategy through observed actions, while the AI virtual agent mirrors the real agent to establish a digital replica system and a corresponding control policy. Both the real and virtual optimal controllers are approximated using reinforcement learning (RL) techniques. Specifically, critic neural networks (NNs) are employed to learn the virtual and real optimal value functions, while actor NNs are trained to derive their respective optimal controllers. A novel shared mechanism is introduced to integrate both virtual and real value functions into a unified learning framework, yielding an optimal shared controller. This controller adaptively adjusts the confidence ratio between the virtual and real agents, enhancing the system's efficiency and flexibility in handling complex control tasks. The stability of the closed-loop system is rigorously analyzed using the Lyapunov method. The effectiveness of the proposed AI-human interactive system is validated through two numerical examples: a representative nonlinear system and an unmanned aerial vehicle (UAV) control system.
Funding: Supported by the National Key Research and Development Program of China (2021YFC2801700); the Defense Industrial Technology Development Program (JCKY2021110B024, JCKY2022110C072); the Science and Technology Innovation 2030 "New Generation Artificial Intelligence" Major Project (2022ZD0116305); the Natural Science Foundation of Hefei, China (202321); the National Natural Science Foundation of China (U2013601, U20A20225); the Yangtze River Delta S&T Innovation Community Joint Research Project (2022CSJGG0900); the Anhui Province Natural Science Funds for Distinguished Young Scholar (2308085J02); the State Key Laboratory of Intelligent Green Vehicle and Mobility (KFY2417); and the State Key Laboratory of Advanced Design and Manufacturing Technology for Vehicle (32215010).
Abstract: A mixed adaptive dynamic programming (ADP) scheme based on zero-sum game theory is developed to address optimal control problems of autonomous underwater vehicle (AUV) systems subject to disturbances and safety constraints. By combining prior dynamic knowledge and actual sampled data, the proposed approach effectively mitigates the degradation caused by an inaccurate dynamic model and significantly improves the training speed of the ADP algorithm. Initially, the dataset is enriched with sufficient reference data collected from a nominal model without considering modelling bias. The controlled plant also interacts with the real environment and continuously gathers adequate sampled data into the dataset. To comprehensively leverage the advantages of model-based and model-free methods during training, an adaptive tuning factor is introduced based on the dataset, which possesses model-referenced information and conforms to the distribution of the real-world environment; this factor balances the influence of the model-based control law and the data-driven policy gradient on the direction of policy improvement. As a result, the proposed approach accelerates learning compared to data-driven methods while also enhancing tracking performance compared to model-based control methods. Moreover, the optimal control problem under disturbances is formulated as a zero-sum game, and an actor-critic-disturbance framework is introduced to approximate the optimal control input, cost function, and disturbance policy, respectively. Furthermore, the convergence of the proposed algorithm based on the value iteration method is analysed. Finally, an example of AUV path following based on improved line-of-sight guidance is presented to demonstrate the effectiveness of the proposed method.
Abstract: Integral reinforcement learning (IRL) is an effective tool for solving optimal control problems of nonlinear systems, and it has been widely utilized in optimal controller design for discrete-time nonlinear systems. However, solving the Hamilton-Jacobi-Bellman (HJB) equations for nonlinear systems requires precise and complicated dynamics. Moreover, the research and application of IRL in continuous-time (CT) systems remain to be further developed. To develop IRL for CT nonlinear systems, a data-based adaptive neural dynamic programming (ANDP) method is proposed to investigate the optimal control problem of uncertain CT multi-input systems such that knowledge of the dynamics in the HJB equation is unnecessary. First, the multi-input model is approximated using a neural network (NN), which can be utilized to design an integral reinforcement signal. Subsequently, two critic networks and one action network are constructed based on the integral reinforcement signal. A nonzero-sum Nash equilibrium can be reached by learning the optimal strategies of the multi-input model. In this scheme, the NN weights are constantly updated using an adaptive algorithm. The weight convergence and the system stability are analyzed in detail. The optimal control problem of a multi-input nonlinear CT system is effectively solved using the ANDP scheme, and the results are verified by a simulation study.
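A one-dimensional sketch of the integral reinforcement idea: policy evaluation uses only the measured cost integral and two state snapshots over an interval, never the drift coefficient of the plant, and a policy-improvement step follows. This toy replaces the paper's multi-input neural setting with a scalar linear plant (our own choice) so that the learned gain can be checked against the known Riccati solution.

```python
import numpy as np

# scalar CT plant x' = a x + b u, running cost q x^2 + r u^2
a, b, q, r, T, x0 = 1.0, 1.0, 1.0, 1.0, 0.5, 1.0

def rollout(k):
    """Measurements under u = -k x: the state after time T and the cost
    integral over [0, T] (closed form here; on-line data in practice)."""
    c = a - b * k                      # closed-loop pole, unknown to the learner
    xT = x0 * np.exp(c * T)
    cost = (q + r * k * k) * x0 * x0 * (np.exp(2 * c * T) - 1) / (2 * c)
    return xT, cost

k = 2.0                                # assumed initial stabilizing gain
for _ in range(12):
    xT, cost = rollout(k)
    P = cost / (x0 * x0 - xT * xT)     # IRL identity: V(x0) - V(xT) = cost integral
    k = b * P / r                      # policy improvement u = -(1/r) b P x
```

Because the evaluation step uses only trajectory data, the unknown drift `a` never enters the update, which is precisely the appeal of the integral reinforcement signal.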
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 62103408), the Beijing Nova Program (Grant No. 20240484516), and the Fundamental Research Funds for the Central Universities (Grant No. KG16314701).
Abstract: This paper develops a robust control approach for nonaffine nonlinear continuous-time systems with input constraints and unknown uncertainties. First, an affine augmented system (AAS) is constructed via a pre-compensation technique to convert the original nonaffine dynamics into affine dynamics. Second, a stability criterion linking the original nonaffine system and the auxiliary system is derived, demonstrating that the optimal policies obtained from the auxiliary system yield a robust controller for the nonaffine system. Third, an online adaptive dynamic programming (ADP) algorithm is designed to approximate the optimal solution of the Hamilton-Jacobi-Bellman (HJB) equation. Moreover, the gradient descent approach and the projection approach are employed to update the actor-critic neural network (NN) weights, and the algorithm's convergence is proven. The uniformly ultimately bounded (UUB) stability of the system state is then guaranteed. Finally, simulation examples validate the effectiveness of the presented approach.
Funding: Supported by the DEEPCOBOT project under Grant 306640/O70, funded by the Research Council of Norway.
Abstract: This paper studies motor joint control of a 4-degree-of-freedom (DoF) robotic manipulator using a learning-based Adaptive Dynamic Programming (ADP) approach. The manipulator's dynamics are modelled as an open-loop four-link serial kinematic chain. Decentralised optimal controllers are designed for each link using the ADP approach, based on a set of cost matrices and data collected from exploration trajectories. The proposed control strategy employs an offline, off-policy iterative approach to derive four optimal control policies, one for each joint, under exploration strategies. The objective of the controller is to regulate the position of each joint. Simulation and experimental results show that four independent optimal controllers are obtained under similar exploration strategies and that the proposed ADP approach successfully yields optimal linear control policies. The experimental results on the Quanser QArm robotic platform demonstrate the effectiveness of the proposed ADP controllers in handling significant dynamic nonlinearities, such as actuation limitations, output saturation, and filter delays.
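The offline iteration underlying such decentralised designs can be sketched in its model-based form: Kleinman policy iteration on a per-joint linearised model, one independent controller per joint. The double-integrator joint below is a hypothetical stand-in, not the QArm's identified dynamics, and the paper's off-policy ADP reaches the equivalent policy from trajectory data rather than from the model.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def kleinman_pi(A, B, Q, R, K0, iters=20):
    """Model-based Kleinman policy iteration for continuous-time LQR:
    evaluate the current gain via a Lyapunov equation, then improve."""
    K = K0
    for _ in range(iters):
        Ak = A - B @ K
        # policy evaluation: Ak' P + P Ak = -(Q + K' R K)
        P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
        # policy improvement
        K = np.linalg.solve(R, B.T @ P)
    return P, K

# one hypothetical joint modelled as a double integrator
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K0 = np.array([[1.0, 1.0]])            # assumed stabilizing initial gain
P, K = kleinman_pi(A, B, np.eye(2), np.eye(1), K0)
```

Each joint runs its own copy of this loop with its own cost matrices, which is what makes the design decentralised.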
Funding: Supported in part by the National Key Research and Development Program of China (2021YFE0206100); the National Natural Science Foundation of China (62425310, 62073321); the National Defense Basic Scientific Research Program (JCKY2019203C029, JCKY2020130C025); and the Science and Technology Development Fund, Macao SAR (FDCT-22-009-MISE, 0060/2021/A2, 0015/2020/AMJ).
Abstract: This paper highlights the utilization of parallel control and adaptive dynamic programming (ADP) for event-triggered robust parallel optimal consensus control (ETRPOC) of uncertain nonlinear continuous-time multiagent systems (MASs). First, the parallel control system, which consists of a virtual control variable and a specific auxiliary variable obtained from the coupled Hamiltonian, allows general systems to be transformed into affine systems. Notably, the introduction of the parallel control technique provides a new perspective on eliminating the negative effects of disturbances. Then, an event-triggered mechanism is adopted to save communication resources while ensuring the system's stability. The solution of the coupled Hamilton-Jacobi (HJ) equation is approximated using a critic neural network (NN), whose weights are updated in response to events. Furthermore, theoretical analysis reveals that the weight estimation error is uniformly ultimately bounded (UUB). Finally, numerical simulations demonstrate the effectiveness of the developed ETRPOC method.
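The communication-saving logic of event triggering can be illustrated in a few lines: the controller holds the last broadcast state, and a new transmission is triggered only when the gap between the true and held states exceeds a threshold. All numbers below, including the plant, gain, and trigger constants, are illustrative choices of ours, not the paper's MAS setup or its stability-certified trigger condition.

```python
import numpy as np

A = np.array([[0.5, 0.1], [0.0, 0.4]])  # illustrative discrete-time plant
B = np.array([[0.0], [1.0]])
K = np.array([[0.1, 0.2]])              # assumed stabilizing feedback gain

x = np.array([1.0, -1.0])
x_hat = x.copy()                        # state last broadcast to the controller
updates, steps = 0, 200
for _ in range(steps):
    # event condition: relative threshold plus a small absolute dead-band
    if np.linalg.norm(x - x_hat) > 0.1 * np.linalg.norm(x) + 1e-3:
        x_hat = x.copy()                # event: broadcast the fresh state
        updates += 1
    u = -K @ x_hat                      # control always uses the held state
    x = A @ x + (B @ u).ravel()
```

Counting `updates` against `steps` shows the communication saving; in the paper the trigger threshold is designed jointly with the critic so that stability is preserved.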
Abstract: Optimal impulse control and impulse games provide cutting-edge frameworks for modeling systems where control actions occur at discrete time points and objectives are optimized under discontinuous interventions. This review synthesizes the theoretical advancements, computational approaches, emerging challenges, and possible research directions in the field. Firstly, we briefly review the fundamental theory of continuous-time optimal control, including Pontryagin's maximum principle (PMP) and the dynamic programming principle (DPP). Secondly, we present the foundational results in optimal impulse control, including necessary conditions and sufficient conditions. Thirdly, we systematize impulse game methodologies, from Nash equilibrium existence theory to the connection between Nash equilibria and system stability. Fourthly, we summarize numerical algorithms, including intelligent computation approaches. Finally, we examine new trends and challenges in theory and applications, as well as computational considerations.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 52472417), which provided funding for the experiments.
Abstract: Complex multi-area collaborative coverage path planning in dynamic environments poses a significant challenge for multiple fixed-wing UAVs (multi-UAV). This study establishes a comprehensive framework that incorporates UAV capabilities, terrain, complex areas, and mission dynamics. A novel dynamic collaborative path planning algorithm is introduced, designed to ensure complete coverage of designated areas. This algorithm meticulously optimizes the operation, entry, and transition paths for each UAV, while also establishing evaluation metrics to refine the coverage sequence for each area. Additionally, a three-dimensional path is computed utilizing an altitude descent method, effectively integrating two-dimensional coverage paths with altitude constraints. The efficacy of the proposed approach is validated through digital simulations and mixed-reality semi-physical experiments across a variety of dynamic scenarios, including both single-area and multi-area coverage by multi-UAV. Results show that the coverage paths generated by this method significantly reduce both computation time and path length, providing a reliable solution for dynamic multi-UAV mission planning in semi-physical environments.
Funding: Supported by the Shandong Provincial Natural Science Foundation (ZR2024MA095), the Natural Science Foundation of China (12401583), and the Basic Research Program of Jiangsu (BK20240416).
Abstract: This paper investigates an international optimal investment-consumption problem under a random time horizon. The investor may allocate wealth between a domestic bond and an international real project with production output, whose price may exhibit discontinuities. The model incorporates the effects of taxation and exchange rate dynamics, where the exchange rate follows a stochastic differential equation with jump-diffusion. The investor's objective is to maximize the utility of consumption and terminal wealth over an uncertain investment horizon. It is worth noting that, under our framework, the exit time is not assumed to be a stopping time. In particular, for the case of constant relative risk aversion (CRRA), we derive the optimal investment and consumption strategies by applying the separation method to solve the associated Hamilton-Jacobi-Bellman (HJB) equation. Moreover, several numerical examples are provided to illustrate the practical applicability of the proposed results.
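For readers unfamiliar with the separation method, the standard CRRA ansatz is sketched below in a generic, jump-free form for illustration only; the paper's HJB equation carries additional taxation, exchange-rate, and jump terms not reproduced here.

```latex
U(c) = \frac{c^{1-\gamma}}{1-\gamma}, \quad \gamma > 0,\ \gamma \neq 1,
\qquad \text{with the ansatz} \qquad
V(t,x) = f(t)\,\frac{x^{1-\gamma}}{1-\gamma}.
```

Substituting this form into the HJB equation cancels the wealth variable, leaving an ordinary differential equation for $f(t)$; the first-order condition then gives a consumption rule proportional to wealth, $c^*(t) = f(t)^{-1/\gamma}\,x$.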
Funding: Supported in part by the National Natural Science Foundation of China (61533017, U1501251, 61374105, 61722312).
Abstract: The residential energy scheduling of solar energy is an important research area of the smart grid. On the demand side, factors such as household loads, storage batteries, the outside public utility grid, and renewable energy resources combine into a nonlinear, time-varying, uncertain, and complex system, which is difficult to manage or optimize. Many nations have already applied residential real-time pricing to balance the burden on their grids. In order to enhance the electricity efficiency of the residential microgrid, this paper presents an action-dependent heuristic dynamic programming (ADHDP) method to solve the residential energy scheduling problem. The highlights of this paper are listed below. First, weather-type classification is adopted to establish three types of programming models based on the features of solar energy. In addition, the priorities of different energy resources are set to reduce the loss in electrical energy transmission. Second, three ADHDP-based neural networks, which can update themselves during applications, are designed to manage the flows of electricity. Third, simulation results show that the proposed scheduling method effectively reduces the total electricity cost and improves load balancing. The comparison with the particle swarm optimization algorithm further proves that the present method has a promising effect on energy management to save cost.
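The action-dependent critic idea behind ADHDP can be sketched with a batch fitted Q-iteration on a toy scalar plant (our own stand-in, not the residential energy model): the critic learns Q(x, u) directly, so the greedy action can be extracted from the critic alone without a plant model, which is the defining feature of the action-dependent formulation.

```python
import numpy as np

# toy scalar plant x+ = a x + b u with cost q x^2 + r u^2; the true
# action-dependent value Q(x, u) is quadratic, so a 3-feature critic is exact
a, b, q, r, gamma = 0.9, 0.5, 1.0, 0.5, 0.95

xs = np.linspace(-2, 2, 21)
us = np.linspace(-2, 2, 21)
X, U = [v.ravel() for v in np.meshgrid(xs, us)]
Phi = np.stack([X * X, X * U, U * U], axis=1)   # critic features

w = np.zeros(3)
for _ in range(200):                            # batch fitted Q-iteration
    Xn = a * X + b * U
    # greedy action of the current critic: argmin_u  w0 xn^2 + w1 xn u + w2 u^2
    Un = -w[1] * Xn / (2 * w[2]) if w[2] > 1e-9 else np.zeros_like(Xn)
    target = (q * X * X + r * U * U
              + gamma * (w[0] * Xn * Xn + w[1] * Xn * Un + w[2] * Un * Un))
    w, *_ = np.linalg.lstsq(Phi, target, rcond=None)

k_actor = -w[1] / (2 * w[2])                    # greedy (actor) gain u = k x
```

The paper's online ADHDP replaces this batch least-squares fit with incremental neural network updates, but the critic target and the greedy-actor extraction have the same structure.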
Funding: Supported in part by the National Key Research and Development Program of China (2018AAA0101502, 2018YFB1702300); in part by the National Natural Science Foundation of China (61722312, 61533019, U1811463, 61533017); and in part by the Intel Collaborative Research Institute for Intelligent and Automated Connected Vehicles.
Abstract: This paper studies the problem of optimal parallel tracking control for continuous-time general nonlinear systems. Unlike existing optimal state feedback control, the control input of optimal parallel control is introduced into the feedback system. However, because the control input is introduced into the feedback system, optimal state feedback control methods cannot be applied directly. To address this problem, an augmented system and an augmented performance index function are first proposed, transforming the general nonlinear system into an affine nonlinear system. The difference between optimal parallel control and optimal state feedback control is analyzed theoretically. It is proven that optimal parallel control with the augmented performance index function can be seen as suboptimal state feedback control with the traditional performance index function. Moreover, an adaptive dynamic programming (ADP) technique is utilized to implement optimal parallel tracking control, using a critic neural network (NN) to approximate the value function online. The stability of the closed-loop system is analyzed using Lyapunov theory, and the tracking error and NN weight errors are shown to be uniformly ultimately bounded (UUB). In addition, the optimal parallel controller guarantees the continuity of the control input when there are finite jump discontinuities in the reference signals. Finally, the effectiveness of the developed optimal parallel control method is verified in two cases.