Fund: Supported by the National Natural Science Foundation of China (Grant No. 19971002).
Abstract: In this paper, the nonlinear complementarity problem is transformed into a least-squares problem with nonnegative constraints, and an SQP algorithm for this reformulation, based on a damped Gauss-Newton type method, is presented. It is shown that the algorithm is globally convergent and locally superlinearly (quadratically) convergent without any monotonicity assumption.
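The abstract's reformulation casts the complementarity problem x >= 0, F(x) >= 0, x^T F(x) = 0 as a nonnegatively constrained least-squares problem. The sketch below only illustrates a damped Gauss-Newton iteration on a generic problem of that shape, min_{x>=0} 0.5*||r(x)||^2, with SciPy's lsq_linear solving the bound-constrained Gauss-Newton subproblem; the residual, the Armijo parameters, and the toy example are invented for illustration and are not taken from the paper.

```python
import numpy as np
from scipy.optimize import lsq_linear

def damped_gauss_newton_nnls(r, J, x0, tol=1e-8, max_iter=100):
    """Minimal damped Gauss-Newton sketch for min_{x >= 0} 0.5*||r(x)||^2."""
    x = np.maximum(np.asarray(x0, dtype=float), 0.0)
    for _ in range(max_iter):
        rx, Jx = r(x), J(x)
        # Gauss-Newton subproblem: min_d ||r(x) + J(x) d||^2  s.t.  x + d >= 0
        d = lsq_linear(Jx, -rx, bounds=(-x, np.inf)).x
        if np.linalg.norm(d) <= tol:
            break
        # Damping: Armijo backtracking on the least-squares merit 0.5*||r||^2
        f0, slope = 0.5 * rx @ rx, (Jx.T @ rx) @ d
        t = 1.0
        while 0.5 * np.linalg.norm(r(x + t * d)) ** 2 > f0 + 1e-4 * t * slope:
            t *= 0.5
            if t < 1e-12:
                break
        x = x + t * d          # stays feasible: x >= 0 and x + d >= 0
    return x

# Toy residual (illustrative only), solved from the origin
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
r = lambda x: A @ x - b + 0.1 * np.sin(x)       # mildly nonlinear residual
J = lambda x: A + 0.1 * np.diag(np.cos(x))      # its Jacobian
print(damped_gauss_newton_nnls(r, J, np.zeros(2)))
```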
Fund: Supported by the National Natural Science Foundation of China (Grant No. 72394365).
Abstract: In this paper, we present an SQP-type proximal gradient method (SQP-PG) for composite optimization problems with equality constraints. At each iteration, SQP-PG solves a subproblem to obtain the search direction and uses an exact penalty function as the merit function to decide whether the trial step is accepted. The global convergence of the SQP-PG method is proved, and the iteration complexity for obtaining an ε-stationary point is analyzed. We also establish a local linear convergence result for the SQP-PG method under the second-order sufficient condition. Numerical results demonstrate that, compared with state-of-the-art algorithms, SQP-PG is an effective method for equality-constrained composite optimization problems.
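As a rough illustration of the kind of subproblem such a step solves, the hypothetical sketch below linearizes the equality constraint and keeps the nonsmooth term intact, with the nonsmooth part assumed (for the example only) to be lam*||.||_1; cvxpy is used merely to express the direction-finding subproblem, and none of the names or data are claimed to match the paper's actual subproblem or code.

```python
import numpy as np
import cvxpy as cp

def sqp_pg_direction(x, grad_f, c_val, Jc, lam=0.1, alpha=1.0):
    """Hypothetical direction-finding subproblem of an SQP-type proximal
    gradient step for  min f(x) + lam*||x||_1  s.t.  c(x) = 0:
        min_d  grad_f'd + (1/(2*alpha))*||d||^2 + lam*||x + d||_1
        s.t.   c(x) + Jc(x) d = 0            (linearized equality constraint)
    A trial step x + t*d would then be tested against an exact penalty
    merit function such as  f(x) + lam*||x||_1 + rho*||c(x)||_1.
    """
    d = cp.Variable(x.size)
    obj = grad_f @ d + (1.0 / (2.0 * alpha)) * cp.sum_squares(d) + lam * cp.norm1(x + d)
    cp.Problem(cp.Minimize(obj), [c_val + Jc @ d == 0]).solve()
    return d.value

# Toy data (illustrative only): f(x) = 0.5*||x||^2, c(x) = sum(x) - 1
x = np.array([0.5, -0.2, 0.7])
d = sqp_pg_direction(x, grad_f=x, c_val=np.array([x.sum() - 1.0]),
                     Jc=np.ones((1, 3)))
print(d)
```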
Fund: This work is supported in part by the National Natural Science Foundation of China (Grant No. 10171055).
Abstract: A new algorithm for inequality constrained optimization is presented, which solves a linear programming subproblem and a quadratic programming subproblem at each iteration. The algorithm can circumvent the difficulties associated with the possible inconsistency of the QP subproblem in the original SQP method. Moreover, the algorithm can converge to a point that satisfies a certain first-order necessary condition even if the original problem is itself infeasible. Under certain conditions, global convergence results are proved, and local superlinear convergence results are also obtained. Preliminary numerical results are reported.
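As a rough sketch of how such two-phase iterations are usually arranged (a generic scheme written down only to fix ideas, not necessarily the exact subproblems of this paper), consider min f(x) subject to g_i(x) <= 0, i = 1,...,m. At the iterate x_k one can first solve an LP that measures the smallest achievable linearized infeasibility, and then relax the QP constraints by that amount so the QP is always consistent:

```latex
% Phase 1 (LP): smallest achievable linearized infeasibility within a box of radius \Delta_k
\[
  \theta_k \;=\; \min_{d,\; s \ge 0} \; s
  \quad \text{s.t.} \quad
  g_i(x_k) + \nabla g_i(x_k)^{\mathsf T} d \le s, \;\; i = 1,\dots,m,
  \qquad \|d\|_\infty \le \Delta_k .
\]

% Phase 2 (QP): the linearization is relaxed by \theta_k, so this subproblem is always feasible
\[
  \min_{d} \; \nabla f(x_k)^{\mathsf T} d + \tfrac{1}{2}\, d^{\mathsf T} B_k d
  \quad \text{s.t.} \quad
  g_i(x_k) + \nabla g_i(x_k)^{\mathsf T} d \le \theta_k, \;\; i = 1,\dots,m .
\]
```

When theta_k = 0 the relaxed QP reduces to the usual SQP subproblem, while a persistently positive theta_k signals (local) infeasibility, which is consistent with the behaviour described above, where the algorithm may still converge to a point satisfying a first-order necessary condition.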
Fund: This research is supported in part by the National Natural Science Foundation of China (Grant No. 39830070).
Abstract: A robust SQP method, analogous to Facchinei's algorithm, is introduced. The algorithm is globally convergent, uses automatic rules for choosing the penalty parameter, and can efficiently cope with possible inconsistency of the quadratic search subproblem. In addition, the algorithm employs a differentiable approximate exact penalty function as its merit function. Unlike the merit function in Facchinei's algorithm, which is quite complicated and not easy to implement in practice, this new merit function is very simple. As a result, Facchinei's idea can be used to construct an algorithm that is easy to implement in practice.
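To make the term concrete: one classical example of a differentiable exact penalty function is Fletcher's, written below for equality constraints c(x) = 0 only, with J_c the Jacobian of c and lambda(x) the least-squares multiplier estimate. It is shown purely to fix ideas; it is not the (inequality-constrained, approximate) merit function used in this paper.

```latex
% Fletcher's differentiable exact penalty for  min f(x)  s.t.  c(x) = 0  (illustration only)
\[
  \varphi_\sigma(x) \;=\; f(x) \;-\; \lambda(x)^{\mathsf T} c(x)
                     \;+\; \frac{\sigma}{2}\,\|c(x)\|^2,
  \qquad
  \lambda(x) \;=\; \bigl(J_c(x)\,J_c(x)^{\mathsf T}\bigr)^{-1} J_c(x)\,\nabla f(x).
\]
```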
Fund: Supported by the National Natural Science Foundation of China (Grant Nos. 39830070 and 10171055).
Abstract: In this paper, a new SQP method for inequality constrained optimization is proposed, and global convergence is established under very mild conditions.
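For readers unfamiliar with the basic scheme these papers refine, the sketch below shows one plain, textbook SQP step for min f(x) s.t. g(x) <= 0 with an l1 merit line search; it is a generic illustration only, not the method proposed in this paper, and all names in it are invented for the example.

```python
import numpy as np
import cvxpy as cp

def basic_sqp_step(x, f, grad_f, g, Jg, B, rho=10.0):
    """One generic, textbook SQP step (illustration only) for
       min f(x)  s.t.  g(x) <= 0  (componentwise),
    with B a positive-definite Hessian approximation and rho an l1 penalty weight."""
    # QP subproblem:  min  grad_f(x)'d + 0.5 d'Bd   s.t.  g(x) + Jg(x) d <= 0.
    # Note: this plain QP can be infeasible -- exactly the difficulty the
    # modified SQP methods in the abstracts above are designed to cope with.
    d = cp.Variable(x.size)
    cp.Problem(cp.Minimize(grad_f(x) @ d + 0.5 * cp.quad_form(d, B)),
               [g(x) + Jg(x) @ d <= 0]).solve()
    d = d.value
    # Backtracking line search on the l1 merit  f(x) + rho * sum(max(g(x), 0)),
    # with a simplified sufficient-decrease test based on the QP model.
    merit = lambda z: f(z) + rho * np.maximum(g(z), 0.0).sum()
    t, m0, pred = 1.0, merit(x), float(d @ (B @ d))
    while merit(x + t * d) > m0 - 1e-4 * t * pred and t > 1e-12:
        t *= 0.5
    return x + t * d
```

In a full method this step would sit inside an outer loop that updates B (for example by a damped BFGS formula) and adjusts rho; several of the papers listed above modify the subproblem or the merit function precisely to handle the case where the plain QP subproblem is inconsistent.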