Funding: Supported by the National Natural Science Foundation of China (Nos. 10501009, 10771040), the Natural Science Foundation of Guangxi Province of China (Nos. 0728206, 0640001), and the China Postdoctoral Science Foundation (No. 20070410228).
Abstract: In this paper, we describe a successive approximation and smooth sequential quadratic programming (SQP) method for mathematical programs with nonlinear complementarity constraints (MPCC). We introduce a class of smooth programs to approximate the MPCC. Using an ℓ1 penalty function, the line search ensures global convergence, while the superlinear convergence rate is established under the strict complementarity and second-order sufficiency conditions. Moreover, we prove that the current iterate is an exact stationary point of the mathematical program with equilibrium constraints (MPEC) when the algorithm terminates finitely.
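As a hedged sketch of the two ingredients named above (the paper's specific smoothing family and penalty parameters are not reproduced; the function names here are hypothetical), the complementarity condition a ≥ 0, b ≥ 0, ab = 0 can be smoothed by a perturbed Fischer-Burmeister function, while an ℓ1 penalty serves as the line-search merit function:

```python
import math

def smoothed_fb(a, b, mu):
    # Smoothed Fischer-Burmeister function (an illustrative choice; the
    # paper's own smoothing class may differ).  At mu = 0 its zeros
    # characterize exactly a >= 0, b >= 0, a*b = 0; for mu > 0 it is smooth.
    return a + b - math.sqrt(a * a + b * b + 2.0 * mu * mu)

def l1_merit(f, c, x, rho):
    # l1 penalty merit function P(x; rho) = f(x) + rho * ||c(x)||_1,
    # used to globalize the SQP line search.
    return f(x) + rho * sum(abs(ci) for ci in c(x))
```

Driving mu down along the iterations recovers the original complementarity system in the limit, which is the sense in which the smooth programs approximate the MPCC.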
Abstract: This paper presents a variant of Goldfarb's method for linearly constrained optimization problems. In the variant algorithm, we introduce a concept called conjugate projection, which differs from orthogonal projection. The variant algorithm has global convergence and a superlinear convergence rate.
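The distinction between the two projections can be illustrated numerically (a hedged sketch: the paper's formal definition of conjugate projection is not reproduced here; this is one standard construction of a projection onto the constraint null space that is conjugate with respect to an assumed symmetric positive definite matrix G):

```python
import numpy as np

def conjugate_projection(A, G):
    # Projection onto null(A) conjugate with respect to SPD matrix G:
    # P = I - G^{-1} A^T (A G^{-1} A^T)^{-1} A.
    # It satisfies A @ P = 0 and P @ P = P; with G = I it reduces to the
    # ordinary orthogonal projection.
    Ginv_At = np.linalg.solve(G, A.T)
    M = A @ Ginv_At                                  # A G^{-1} A^T
    return np.eye(G.shape[0]) - Ginv_At @ np.linalg.solve(M, A)
```

The orthogonal case (G = I) yields a symmetric projector; a genuinely conjugate projection is in general non-symmetric, which is exactly how it differs.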
Funding: Supported by the National Natural Science Foundation of China (No. 11901368).
Abstract: In this paper, we propose a variable metric extrapolation proximal iterative hard thresholding (VMEPIHT) method for the nonconvex ℓ0-norm sparsity regularization problem, which has wide applications in signal and image processing, machine learning, and so on. The VMEPIHT method is based on the forward-backward splitting (FBS) method, and a variable metric strategy is employed in the extrapolation step to speed up the algorithm. The proposed method's convergence, linear convergence rate, and superlinear convergence rate are established under appropriate assumptions. Finally, we conduct numerical experiments on a compressed sensing problem and a CT image reconstruction problem to confirm the efficiency of the proposed method compared with other state-of-the-art methods.
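The core iteration can be sketched in a simplified form (a stand-in, not the paper's algorithm: the variable metric is replaced by the fixed scalar metric (1/L)I, the extrapolation weight is a hypothetical constant, and a least-squares data term is assumed):

```python
import numpy as np

def hard_threshold(x, k):
    # Keep the k largest-magnitude entries of x, zero the rest.
    z = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    z[idx] = x[idx]
    return z

def extrapolated_iht(A, b, k, steps=200, beta=0.5):
    # Extrapolated IHT for min ||Ax - b||^2 s.t. ||x||_0 <= k:
    # extrapolate, take a forward (gradient) step on the smooth part,
    # then a backward (hard-thresholding) step.
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = x_old = np.zeros(A.shape[1])
    for _ in range(steps):
        y = x + beta * (x - x_old)           # extrapolation step
        grad = A.T @ (A @ y - b)             # gradient of 0.5||Ay - b||^2 (x2)
        x, x_old = hard_threshold(y - grad / L, k), x
    return x
```

On a well-conditioned compressed sensing instance this recovers the sparse signal; the paper's variable metric replaces the scalar 1/L to accelerate exactly this loop.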
Abstract: A quasi-Newton method in infinite-dimensional spaces (QNIS) for solving operator equations is presented, and the convergence of a sequence generated by QNIS is also proved in the paper. Next, we suggest a finite-dimensional implementation of QNIS and prove that the sequence defined by the finite-dimensional algorithm converges to the root of the original operator equation, provided that the latter exists and that the Fréchet derivative of the governing operator is invertible. Finally, we apply QNIS to an inverse problem for a parabolic differential equation to illustrate the efficiency of the finite-dimensional algorithm.
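In finite dimensions, the flavor of such a quasi-Newton iteration for an operator equation F(x) = 0 can be illustrated with the classical Broyden rank-one update (a generic stand-in, not the paper's operator-space construction):

```python
import numpy as np

def broyden(F, x0, J0, tol=1e-10, max_iter=50):
    # Broyden's method for F(x) = 0: solve with the current Jacobian
    # approximation J, then correct J by a rank-one secant update so that
    # the new J maps the step s onto the observed change y in F.
    x, J = np.asarray(x0, float), np.asarray(J0, float)
    Fx = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            break
        s = np.linalg.solve(J, -Fx)              # quasi-Newton step
        x = x + s
        Fx_new = F(x)
        y = Fx_new - Fx
        J = J + np.outer(y - J @ s, s) / (s @ s) # rank-one secant update
        Fx = Fx_new
    return x
```

The update needs only evaluations of F itself, which is what makes the finite-dimensional implementation attractive when the Fréchet derivative is expensive.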
Funding: This research is supported by the Ministry of Education, P.R.C., and the Asia-Pacific Operations Research Center (APORC).
Abstract: Presents information on a study that proposed a class of globally convergent inexact generalized Newton methods for unconstrained optimization problems. Theorems on the inexact generalized Newton algorithm with decreasing gradient norms; discussion of the stated assumptions; applications of the algorithms and numerical tests.
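The inexact-Newton idea behind such methods (think F = ∇f for unconstrained optimization) is that the Newton system is solved only approximately, to a relative residual set by a forcing term. A hedged sketch with hypothetical names (the inner solver and forcing-term choice here are illustrative, not the study's):

```python
import numpy as np

def inexact_newton(F, J, x0, eta=0.1, tol=1e-10, max_iter=100):
    # Inexact Newton: accept any step s with ||F(x) + J(x) s|| <= eta ||F(x)||.
    # Here the inner system is solved by a truncated Landweber/gradient
    # iteration, stopped exactly when the forcing condition holds.
    x = np.asarray(x0, float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        A, b = J(x), -Fx
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # safe inner step size
        s = np.zeros_like(x)
        while np.linalg.norm(b - A @ s) > eta * np.linalg.norm(b):
            s = s + step * (A.T @ (b - A @ s))   # inner gradient iteration
        x = x + s
    return x
```

Driving eta toward zero as the iterates converge is the standard route to superlinear rates; a fixed eta, as above, still gives local linear convergence.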
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 11061011 and 71061002) and the Guangxi Fund for Distinguished Young Scholars (2012GXSFFA060003).
Abstract: In this paper, an improved feasible QP-free method is proposed to solve nonlinear inequality constrained optimization problems. A new modified scheme is presented to obtain the revised feasible descent direction. In view of the computational cost, the most attractive feature of the new algorithm is that only one system of linear equations is required to obtain the revised feasible descent direction; thus, per iteration, it is only necessary to solve three systems of linear equations with the same coefficient matrix. In particular, the proposed algorithm remains globally convergent without a positive definiteness assumption on the Hessian estimate. Under some suitable conditions, the superlinear convergence rate is obtained.
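The cost saving from a shared coefficient matrix can be shown in a few lines (hypothetical data; the point is only that the three right-hand sides reuse a single factorization rather than triggering three independent solves):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
# Stand-in for the shared per-iteration coefficient matrix (diagonally
# shifted to keep it comfortably nonsingular for the demo).
M = rng.standard_normal((n, n)) + n * np.eye(n)
# Three right-hand sides, one per subsystem of the iteration.
rhs = rng.standard_normal((n, 3))
# One factorization serves all three solves: stack the RHS as columns.
directions = np.linalg.solve(M, rhs)
```

With an explicit LU factorization (e.g. `scipy.linalg.lu_factor` / `lu_solve`) the reuse is even more direct: factor M once, then back-substitute three times.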
Funding: This research is supported by the Research and Development Foundation of the Shanghai Education Commission and the Asia-Pacific Operations Research Center (APORC).
Abstract: The quasi-Newton (QN) equation plays a core role in contemporary nonlinear optimization. The traditional QN equation employs only the gradients but ignores function value information, which seems unreasonable. In this paper, we consider a class of DFP methods with new QN equations that use both gradient and function value information and require very little additional computation. We give conditions for convergence and superlinear convergence of these methods. We also prove that, under some line search conditions, the DFP method with the new QN equations is convergent and superlinearly convergent.
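One modified secant condition from the literature (an illustrative variant; the paper's exact equations may differ) augments the usual y_k with a correction built from function values, and then plugs into the standard DFP inverse-Hessian update:

```python
import numpy as np

def modified_y(f0, f1, g0, g1, s, y):
    # Modified secant vector using function values:
    #   y~ = y + (theta / ||s||^2) s,  theta = 2 (f0 - f1) + (g0 + g1)^T s.
    # For an exactly quadratic f, theta = 0 and y~ reduces to the usual y,
    # so the correction only acts where the quadratic model is inaccurate.
    theta = 2.0 * (f0 - f1) + (g0 + g1) @ s
    return y + (theta / (s @ s)) * s

def dfp_update(H, s, y):
    # Standard DFP update of the inverse-Hessian approximation H;
    # it enforces the secant equation H_new @ y = s.
    Hy = H @ y
    return H - np.outer(Hy, Hy) / (y @ Hy) + np.outer(s, s) / (s @ y)
```

Replacing y by y~ in the update is the whole mechanism: the extra cost is two function values already available from the line search, which is why the added computation is negligible.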