Abstract: In this paper, we present a successive quadratic programming (SQP) method for minimizing a class of nonsmooth functions, which are the sum of a convex function and a nonsmooth composite function. The method generates new iterates by using an Armijo-type line search technique after the search directions have been found. Global convergence is established under mild assumptions. Numerical results are also reported.
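The Armijo-type line search mentioned above can be sketched as a standard backtracking loop. This is a generic illustration, not the paper's exact procedure; the function names, parameter values, and the quadratic test problem are our own choices:

```python
import numpy as np

def armijo_step(f, grad_f, x, d, alpha0=1.0, beta=0.5, sigma=1e-4):
    """Backtracking Armijo line search: shrink alpha until
    f(x + alpha*d) <= f(x) + sigma * alpha * grad_f(x)^T d."""
    fx = f(x)
    slope = grad_f(x) @ d          # negative for a descent direction
    alpha = alpha0
    while f(x + alpha * d) > fx + sigma * alpha * slope:
        alpha *= beta
    return alpha

# Example: one step on f(x) = ||x||^2 from x = (1, 1) along d = -grad f(x).
f = lambda x: float(x @ x)
g = lambda x: 2.0 * x
x = np.array([1.0, 1.0])
d = -g(x)
alpha = armijo_step(f, g, x, d)
```

Here the full step is rejected once and the halved step lands exactly at the minimizer, so the loop stops at alpha = 0.5.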
Abstract: In this paper, a modified variant of the Limited SQP method is presented for constrained optimization. This method uses not only gradient information but also function-value information. Moreover, the proposed method requires no additional function or derivative evaluations and hardly any extra storage or arithmetic operations. Under suitable conditions, global convergence is established.
Funding: Supported by CERG: CityU 101005 of the Government of Hong Kong SAR, China; the National Natural Science Foundation of China; the Specialized Research Fund of the Doctoral Program of Higher Education of China (Grant No. 20040319003); and the Natural Science Fund of Jiangsu Province of China (Grant No. BK2006214).
Abstract: In this paper we present a filter-trust-region algorithm for solving LC1 unconstrained optimization problems which uses the second Dini upper directional derivative. We establish the global convergence of the algorithm under reasonable assumptions.
Funding: Supported by the NNSF of China (10231060) and the Soft Science Foundation of Henan Province (082400430820).
Abstract: In this paper, a new SQP feasible descent algorithm for nonlinear constrained optimization problems is presented. Under relatively weaker conditions, we prove that the new method still possesses global convergence as well as strong convergence. Numerical results illustrate that the new method is valid.
Abstract: A class of trust region methods for solving linear inequality constrained problems is proposed in this paper. It is shown that the algorithm is globally convergent. The algorithm uses a version of the two-sided projection and the strategy of unconstrained trust region methods. It keeps the good convergence properties of the unconstrained case and has the merits of the projection method. In some sense, our algorithm can be regarded as an extension and improvement of projected-type algorithms.
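Trust region methods of this kind adjust the region radius according to the ratio of actual to predicted reduction of the model. The following sketch shows the standard radius update rule; the thresholds and growth factors are commonly used illustrative values, not those of the paper:

```python
import numpy as np

def update_radius(rho, delta, step_norm, delta_max=10.0,
                  eta1=0.25, eta2=0.75):
    """Standard trust-region radius update, where
    rho = (actual reduction) / (predicted reduction)."""
    if rho < eta1:                               # poor model agreement: shrink
        return 0.25 * delta
    if rho > eta2 and np.isclose(step_norm, delta):
        return min(2.0 * delta, delta_max)       # good agreement on the boundary: expand
    return delta                                 # otherwise keep the radius
```

The step is then accepted when rho exceeds a small positive threshold and rejected otherwise, with the radius update applied in either case.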
Abstract: In this paper we propose a new family of curve search methods for unconstrained optimization problems, which are based on searching for a new iterate along a curve through the current iterate at each iteration, whereas line search methods find a new iterate on a line starting from the current iterate at each iteration. The global convergence and linear convergence rate of these curve search methods are investigated under some mild conditions. Numerical results show that some curve search methods are stable and effective in solving some large-scale minimization problems.
Funding: This work is supported in part by the National Natural Science Foundation of China (Grant No. 10171055).
Abstract: A new algorithm for inequality constrained optimization is presented, which solves a linear programming subproblem and a quadratic subproblem at each iteration. The algorithm can circumvent the difficulties associated with the possible inconsistency of the QP subproblem of the original SQP method. Moreover, the algorithm can converge to a point which satisfies a certain first-order necessary condition even if the original problem is itself infeasible. Under certain conditions, some global convergence results are proved, and local superlinear convergence results are also obtained. Preliminary numerical results are reported.
Funding: This research is supported in part by the National Natural Science Foundation of China (Grant No. 39830070).
Abstract: A robust SQP method, which is analogous to Facchinei's algorithm, is introduced. The algorithm is globally convergent. It uses automatic rules for choosing the penalty parameter, and can efficiently cope with the possible inconsistency of the quadratic search subproblem. In addition, the algorithm employs a differentiable approximate exact penalty function as a merit function. Unlike the merit function in Facchinei's algorithm, which is quite complicated and not easy to implement in practice, this new merit function is very simple. As a result, we can use Facchinei's idea to construct an algorithm which is easy to implement in practice.
Abstract: This paper explores the convergence of a class of optimally conditioned self-scaling variable metric (OCSSVM) methods for unconstrained optimization. We show that this class of methods with Wolfe line search is globally convergent for general convex functions.
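The Wolfe line search referred to above accepts a step length only if it satisfies both a sufficient-decrease condition and a curvature condition. A minimal check, with illustrative constants c1 and c2 and a quadratic test problem of our own choosing, might look like:

```python
import numpy as np

def satisfies_wolfe(f, grad_f, x, d, alpha, c1=1e-4, c2=0.9):
    """Check the (weak) Wolfe conditions for a candidate step alpha:
    (i) sufficient decrease, (ii) curvature condition."""
    gx_d = grad_f(x) @ d           # directional derivative at x
    x_new = x + alpha * d
    sufficient_decrease = f(x_new) <= f(x) + c1 * alpha * gx_d
    curvature = grad_f(x_new) @ d >= c2 * gx_d
    return sufficient_decrease and curvature

# On f(x) = 0.5 ||x||^2 from x = (2, 0) along d = -grad f(x):
f = lambda x: 0.5 * float(x @ x)
g = lambda x: x
x = np.array([2.0, 0.0])
d = -g(x)
```

On this example the full step alpha = 1 passes both conditions, while a tiny step such as alpha = 0.01 fails the curvature condition, which is exactly what that condition is designed to rule out.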
Abstract: This paper presents a trust region two-phase model algorithm for solving the equality and bound constrained nonlinear optimization problem. A concept of substationary point is given. Under suitable assumptions, the global convergence of this algorithm is proved without assuming linear independence of the gradients of the active constraints. A numerical example is also presented.
Funding: Supported by the National Natural Science Foundation of China (Nos. 39830070 and 10171055).
Abstract: In this paper, a new SQP method for inequality constrained optimization is proposed, and global convergence is obtained under very mild conditions.
Funding: This research is supported in part by the National Natural Science Foundation of China (No. 39830070).
Abstract: In this paper, we use the smoothing penalty function proposed in [1] as the merit function of an SQP method for nonlinear optimization with inequality constraints. The global convergence of the method is obtained.
Abstract: In this paper, the non-quasi-Newton family with inexact line search applied to unconstrained optimization problems is studied. A new update formula for the non-quasi-Newton family is proposed. It is proved that the resulting algorithm with either Wolfe-type or Armijo-type line search converges globally and Q-superlinearly if the function to be minimized has a Lipschitz continuous gradient.
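The abstract does not state the new update formula itself. For orientation, the classical BFGS update, the quasi-Newton baseline that non-quasi-Newton families modify, can be sketched as follows (the example matrices are our own):

```python
import numpy as np

def bfgs_update(B, s, y):
    """Classical BFGS update of a Hessian approximation B.
    s = x_{k+1} - x_k, y = g_{k+1} - g_k; requires s^T y > 0."""
    Bs = B @ s
    return (B - np.outer(Bs, Bs) / (s @ Bs)
              + np.outer(y, y) / (s @ y))

# The updated matrix satisfies the secant equation B_new s = y.
s = np.array([1.0, 0.0])
y = np.array([2.0, 0.0])
B_new = bfgs_update(np.eye(2), s, y)
```

Non-quasi-Newton updates replace the secant condition with a modified one that also uses function values, but the algebraic shape of the rank-two correction is analogous.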
Funding: Supported by the 2023 General Scientific Research Project for Universities Directly under Inner Mongolia, China, at Inner Mongolia University of Finance and Economics (NCYWT23026), and the 2024 High-quality Research Achievements Cultivation Fund Project of Inner Mongolia University of Finance and Economics, China (GZCG2479).
Abstract: This paper puts forward a two-parameter family of nonlinear conjugate gradient (CG) methods without line search for solving unconstrained optimization problems. The main feature of this family is that it does not rely on any line search and only requires a simple step size formula to always generate a sufficient descent direction. Under certain assumptions, the proposed method is proved to possess global convergence. Finally, our method is compared with other potential methods. A large number of numerical experiments show that our method is more competitive and effective.
Funding: Supported by the National Natural Science Foundation of P.R. China (19971002) and the Subject of Beijing Educational Committee.
Abstract: A new trust region algorithm for solving the convex LC1 optimization problem is presented. It is proved that the algorithm is globally convergent and that the rate of convergence is superlinear under some reasonable assumptions.
Abstract: In this paper, we extend a descent algorithm without line search for solving unconstrained optimization problems. Under mild conditions, its global convergence is established. Further, we generalize the search direction to a more general form, and also obtain the global convergence of the corresponding algorithm. The numerical results illustrate that the new algorithm is effective.
Abstract: Nonlinear conjugate gradient methods have played an important role in solving large-scale unconstrained optimization problems; they are characterized by the simplicity of their iteration and their low memory requirements. It is well known that the direction generated by a conjugate gradient method may not be a descent direction. In this paper, a new class of nonlinear conjugate gradient methods is presented whose search direction is a descent direction for the objective function. If the objective function is differentiable, its gradient is Lipschitz continuous, and the line search satisfies the strong Wolfe condition, a global convergence result is established.
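The paper's specific descent direction is not given in the abstract. As a generic illustration of how a conjugate gradient direction can be safeguarded to guarantee descent, one can combine the well-known PRP+ parameter with a steepest-descent restart; all names and the restart rule here are illustrative, not the paper's construction:

```python
import numpy as np

def cg_direction(g_new, g_old, d_old):
    """Illustrative PRP+ conjugate gradient direction with a safeguard
    that falls back to steepest descent whenever the computed direction
    fails to be a descent direction."""
    beta = max(g_new @ (g_new - g_old) / (g_old @ g_old), 0.0)  # PRP+
    d = -g_new + beta * d_old
    if g_new @ d >= 0.0:        # not a descent direction: restart
        d = -g_new
    return d
```

By construction the returned direction always satisfies g_new^T d < 0 away from a stationary point, which is the descent property the abstract refers to.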
Funding: Supported by the Science and Technology Project of Guangxi (Guike AD23023002).
Abstract: In this paper, we propose a three-term conjugate gradient method for solving unconstrained optimization problems based on the Hestenes-Stiefel (HS) and Polak-Ribiere-Polyak (PRP) conjugate gradient methods. Under the standard Wolfe line search condition, the proposed search direction is a descent direction. For general nonlinear functions, the method is globally convergent. Finally, numerical results show that the proposed method is efficient.
Funding: Supported by the Youth Project Foundation of Chongqing Three Gorges University (13QN17) and the Fund of Scientific Research in Southeast University (the Support Project of Fundamental Research).
Abstract: Y. Liu and C. Storey (1992) proposed the famous LS conjugate gradient method, which has good numerical results. However, the LS method has very weak convergence under the Wolfe-type line search. In this paper, we give a new descent gradient method based on the LS method. It guarantees the sufficient descent property at each iteration and global convergence under the strong Wolfe line search. Finally, we present extensive preliminary numerical experiments that show the efficiency of the proposed method by comparing it with the famous PRP+ method.
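The Liu-Storey (LS) parameter has a simple closed form; the following sketch states it alongside the PRP parameter for comparison (the helper names are ours, and the paper's modified method is not reproduced here):

```python
import numpy as np

def beta_ls(g_new, g_old, d_old):
    """Liu-Storey (LS) conjugate gradient parameter:
    beta_LS = g_k^T (g_k - g_{k-1}) / (-d_{k-1}^T g_{k-1})."""
    return (g_new @ (g_new - g_old)) / (-(d_old @ g_old))

def beta_prp(g_new, g_old):
    """Polak-Ribiere-Polyak parameter, for comparison."""
    return (g_new @ (g_new - g_old)) / (g_old @ g_old)
```

When the previous direction is the steepest-descent direction, d_old = -g_old, the LS denominator reduces to g_old^T g_old and the two parameters coincide.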
Funding: Sponsored by the Natural Science Foundation of Beijing Municipal Commission of Education (Grant No. KM200510028019).
Abstract: The non-quasi-Newton methods for unconstrained optimization are investigated. A non-monotone line search procedure is introduced and combined with the non-quasi-Newton family. Under the uniform convexity assumption on the objective function, the global convergence of the non-quasi-Newton family is proved. Numerical experiments show that the non-monotone line search is more effective.
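A non-monotone line search in the spirit of Grippo, Lampariello and Lucidi compares the trial value against the maximum of a window of recent function values rather than against f(x_k) alone, so occasional increases are tolerated. A minimal sketch (window handling, constants, and the test problem are illustrative, not the paper's exact procedure):

```python
import numpy as np

def nonmonotone_armijo(f, grad_f, x, d, recent_f,
                       sigma=1e-4, beta=0.5, alpha0=1.0):
    """Non-monotone Armijo backtracking: accept alpha once
    f(x + alpha*d) is sufficiently below the MAX of the last few
    function values, rather than below f(x) itself."""
    f_ref = max(recent_f)          # reference over a sliding window of past values
    slope = grad_f(x) @ d          # negative for a descent direction
    alpha = alpha0
    while f(x + alpha * d) > f_ref + sigma * alpha * slope:
        alpha *= beta
    return alpha

# On f(x) = ||x||^2 from x = (1, 1): the full step alpha = 1 is accepted
# because an earlier iterate had the larger value 2.5, while the plain
# (monotone) Armijo rule with reference f(x) = 2 would reject it.
f = lambda x: float(x @ x)
g = lambda x: 2.0 * x
x = np.array([1.0, 1.0])
alpha = nonmonotone_armijo(f, g, x, -g(x), recent_f=[2.0, 2.5])
```

Allowing such steps is what makes the non-monotone search effective in practice, as the experiments summarized above indicate.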