Abstract: This paper presents a new three-level hierarchical parallel control algorithm for large-scale systems based on spatial and time decomposition. The parallel variable metric (PVM) method is found to be a promising third-level algorithm. In the second-level subproblems, the constraints require that the initial state of each smaller subproblem equal the terminal state of the preceding subproblem. The coordinating variables are updated using a modified Newton method, and the low-level subproblems are solved in parallel using extended differential dynamic programming (DDP). Numerical results show that, compared with one-level DDP, the PVM/DDP algorithm achieves significant speed-ups.
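The coordination idea in this abstract can be sketched in a toy form. The sketch below is an illustrative simplification, not the paper's algorithm: it assumes a scalar state and a "minimum effort" segment whose optimal cost is quadratic in its boundary states, so each low-level subproblem reduces to the function `segment_cost`. The horizon is split into K segments, the coordinating variables are the interior boundary states, each segment is solvable independently (hence in parallel), and the boundary states are updated by a Newton step, mirroring the modified-Newton coordination described above.

```python
import numpy as np

def segment_cost(a, b):
    """Optimal cost of a toy subproblem steering state a to state b."""
    return (b - a) ** 2

def coordinate(s0, sK, K, iters=5):
    """Update interior boundary states by Newton's method on the total cost.

    Total cost is sum_k (s_k - s_{k-1})^2 with s_0, s_K fixed; since it is
    quadratic in the interior states, one Newton step is exact.
    """
    s = np.linspace(s0, sK, K + 1)
    s[1:-1] = s0 + 0.9 * np.random.rand(K - 1)   # perturbed initial guess
    for _ in range(iters):
        # gradient and Hessian of the total cost w.r.t. interior states
        g = 2.0 * (2.0 * s[1:-1] - s[:-2] - s[2:])
        H = 2.0 * (2.0 * np.eye(K - 1)
                   - np.eye(K - 1, k=1) - np.eye(K - 1, k=-1))
        s[1:-1] -= np.linalg.solve(H, g)          # Newton step
    return s

s = coordinate(0.0, 1.0, K=4)
total = sum(segment_cost(a, b) for a, b in zip(s[:-1], s[1:]))
print(s)      # equally spaced boundary states [0, 0.25, 0.5, 0.75, 1]
print(total)  # 4 * 0.25^2 = 0.25
```

In the full algorithm each `segment_cost` evaluation would itself be an optimal control subproblem solved by DDP; here it is collapsed to a closed form so the Newton coordination step is visible in isolation.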
Funding: Co-supported by the National Natural Science Foundation of China (No. 62103014).
Abstract: An optimal feedback guidance law with a disturbance rejection objective is proposed for endoatmospheric powered descent. This guidance law, which has an affine form, is derived by solving a novel problem called Endoatmospheric Powered Descent Guidance with Disturbance Rejection (Endo-PDG-DR). The key idea in formulating the Endo-PDG-DR problem is dividing disturbances into two parts, modeled and unmodeled: the modeled disturbance is proactively exploited by augmenting it as a new state of the dynamics model; the unmodeled disturbance is reactively attenuated, in terms of its effect on the guidance performance, by adjoining a parameterized time-varying quadratic performance index to the proposed optimal guidance problem. A Pseudospectral Differential Dynamic Programming (PDDP) method is developed to solve the Endo-PDG-DR problem, and correspondingly a robust neighboring optimal state feedback law is obtained, which has two synergistic functionalities. One is adaptive optimal steering to accommodate the modeled disturbance, and the other is disturbance attenuation to compensate for the state perturbation effect induced by the unmodeled disturbance. Using the derived feedback guidance law, a disturbance rejection level is quantified and then optimized by designing a tuning law for the quadratic weighting parameter. The numerical computations are performed within a pseudospectral setting, ensuring an analytical polynomial solution, high computational efficiency, and reliable convergence.
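The paper's PDDP method is not reproduced here, but the pseudospectral machinery it relies on can be illustrated with a standard building block: the Chebyshev-Gauss-Lobatto differentiation matrix (in the form popularized by Trefethen), which turns differentiation of a polynomial interpolant into a matrix-vector product and underlies the "analytical polynomial solution" property mentioned in the abstract. The code below is a generic sketch, not the paper's implementation.

```python
import numpy as np

def cheb(N):
    """Chebyshev-Gauss-Lobatto nodes on [-1, 1] and the differentiation
    matrix D such that (D @ f) approximates f' at the nodes; the
    approximation is exact for polynomials of degree <= N."""
    if N == 0:
        return np.array([1.0]), np.zeros((1, 1))
    x = np.cos(np.pi * np.arange(N + 1) / N)       # nodes, from 1 down to -1
    c = np.ones(N + 1)
    c[0] = c[-1] = 2.0
    c = c * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))  # off-diagonal entries
    D = D - np.diag(D.sum(axis=1))                   # negative row sums on diagonal
    return x, D

x, D = cheb(8)
f = x ** 3                # polynomial test function
df = D @ f                # spectral derivative
err = np.max(np.abs(df - 3.0 * x ** 2))
print(err)                # machine-precision error, since deg(f) <= N
```

Within a pseudospectral optimal control transcription, the same matrix D replaces the time derivative in the dynamics constraints, which is what allows the whole trajectory to be handled as one polynomial object.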
Funding: Supported by the Agence Nationale de la Recherche (France), reference ANR-10-BLAN 0112; the Marie Curie ITN "Controlled Systems", call FP7-PEOPLE-2007-1-1-ITN, no. 213841-2; the National Natural Science Foundation of China (Nos. 10701050, 11071144); the National Basic Research Program of China (973 Program) (No. 2007CB814904); Shandong Province (No. Q2007A04); the Independent Innovation Foundation of Shandong University; and the Project sponsored by SRF for ROCS, SEM.
Abstract: In this paper we first investigate zero-sum two-player stochastic differential games with reflection, with the help of the theory of Reflected Backward Stochastic Differential Equations (RBSDEs). We establish the dynamic programming principle for the upper and lower value functions of this kind of stochastic differential game with reflection in a straightforward way. The upper and lower value functions are then proved to be the unique viscosity solutions of the associated upper and lower Hamilton-Jacobi-Bellman-Isaacs equations with obstacles, respectively. The method differs significantly from those used for control problems with reflection, and the new techniques developed are of interest in their own right. Further, we prove a new estimate for RBSDEs that is sharper than the one in the paper of El Karoui, Kapoudjian, Pardoux, Peng and Quenez (1997); it turns out to be very useful because it allows us to estimate the L^p-distance of the solutions of two different RBSDEs by the p-th power of the distance of the initial values of the driving forward equations. We also show that the unique viscosity solution of the approximating Isaacs equation constructed by the penalization method converges to the viscosity solution of the Isaacs equation with obstacle.
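The penalization idea mentioned at the end of this abstract can be seen in a much simpler deterministic analogue (an assumption of this sketch: a 1-D elliptic obstacle problem stands in for the Isaacs equation with obstacle, and the stochastic game structure is dropped entirely). We seek u with -u'' >= 0, u >= psi, and (-u'')(u - psi) = 0 on (0, 1), u(0) = u(1) = 0; the penalized equation -u'' = n*(psi - u)^+ pushes u above the obstacle as the penalty parameter n grows, and the penalized solutions converge to the solution of the obstacle problem.

```python
import numpy as np

def penalized_obstacle(psi, n_pen, m):
    """Solve the penalized obstacle problem  A u = n*(psi - u)^+  on a grid
    of m interior points, where A is the finite-difference Laplacian, using a
    primal-dual active-set loop (linearize the penalty on the current set
    where the obstacle is violated, re-solve, repeat until the set is stable)."""
    h = 1.0 / (m + 1)
    A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h ** 2
    u = np.zeros(m)
    active = psi > u
    for _ in range(50):
        D = np.diag(n_pen * active.astype(float))
        u = np.linalg.solve(A + D, D @ psi)
        new_active = psi > u
        if np.array_equal(new_active, active):
            break
        active = new_active
    return u

m = 99
x = np.linspace(0.0, 1.0, m + 2)[1:-1]
psi = 0.5 - 4.0 * (x - 0.5) ** 2       # parabolic obstacle, positive mid-span
u = penalized_obstacle(psi, n_pen=1e8, m=m)
print(np.max(psi - u))                 # obstacle violation shrinks like 1/n
```

The residual violation max(psi - u) is of order 1/n_pen, which is the deterministic counterpart of the convergence statement in the abstract: the penalized solutions approach the solution of the problem with obstacle as the penalty grows.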