Journal Articles
3 articles found
1. AN ADAPTIVE TRUST REGION METHOD FOR EQUALITY CONSTRAINED OPTIMIZATION (Citations: 1)
Authors: ZHANG Juliang, ZHANG Xiangsun, ZHUO Xinjian. Journal of Systems Science & Complexity (SCIE, EI, CSCD), 2003, No. 4, pp. 494-505 (12 pages)
Abstract: In this paper, a trust region method for equality constrained optimization based on a nondifferentiable exact penalty function is proposed. In this algorithm, the computation of the trial step's normal component is separated from the computation of its tangential component; that is, only the tangential component of the trial step is constrained by the trust radius, while the normal component and the trial step itself are unconstrained. The other main characteristic of the algorithm is the choice of the trust region radius, which uses information from the gradient of the objective function and the reduced Hessian. However, the Maratos effect can occur when the nondifferentiable exact penalty function is used as the merit function. To obtain superlinear convergence, a second-order correction technique is employed. Because of the special nature of the adaptive trust region method, the second-order correction is applied only when p = 0 (defined as in Section 2), which differs from traditional trust region methods for equality constrained optimization, so the computational cost of the algorithm is reduced. Moreover, the algorithm is proved to be globally and superlinearly convergent.
Keywords: equality constrained optimization, global convergence, trust region method, superlinear convergence, nondifferentiable exact penalty function, Maratos effect
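The adaptive radius rule described above ties the trust radius to the current gradient (the paper's rule also uses the reduced Hessian and handles equality constraints). A minimal unconstrained sketch under the illustrative rule Delta = ||g|| / c**p, with p counting rejected trial steps and a Cauchy-point model solver; all names and parameters here are hypothetical, not taken from the paper:

```python
import numpy as np

def cauchy_step(g, B, delta):
    """Minimize the quadratic model along -g, clipped to the trust radius."""
    t = delta / np.linalg.norm(g)
    gBg = g @ B @ g
    if gBg > 0:
        t = min(t, (g @ g) / gBg)
    return -t * g

def adaptive_trust_region(f, grad, hess, x0, c=2.0, tol=1e-8, max_iter=200):
    """Sketch of an adaptive trust region iteration: the radius is
    Delta = ||g|| / c**p, shrunk (p += 1) each time a trial step is rejected.
    This only mimics the flavor of a gradient-driven adaptive radius; the
    paper's constrained algorithm and exact rule are omitted here."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g, B = grad(x), hess(x)
        if np.linalg.norm(g) < tol:
            break
        for p in range(60):                      # shrink until a step is accepted
            delta = np.linalg.norm(g) / c**p     # adaptive radius
            s = cauchy_step(g, B, delta)
            pred = -(g @ s + 0.5 * s @ B @ s)    # predicted (model) reduction
            ared = f(x) - f(x + s)               # actual reduction
            if pred > 0 and ared >= 0.1 * pred:  # sufficient agreement: accept
                x = x + s
                break
    return x

# Tiny convex example: the minimizer is the origin.
A = np.array([[1.0, 0.0], [0.0, 10.0]])
f = lambda x: 0.5 * x @ A @ x
x_star = adaptive_trust_region(f, grad=lambda x: A @ x, hess=lambda x: A,
                               x0=np.array([1.0, 1.0]))
```

On this exact quadratic the model agrees with the objective, so every trial step is accepted with p = 0 and the iteration reduces to steepest descent with an exact line search inside the radius.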
2. Predictor-Corrector Smoothing Methods for Monotone LCP
Authors: Ju-liang Zhang, Xiang-sun Zhang, Yong-mei Su. Acta Mathematicae Applicatae Sinica (SCIE, CSCD), 2004, No. 4, pp. 557-572 (16 pages)
Abstract: In this paper, we analyze the global and local convergence properties of two predictor-corrector smoothing methods, based on the framework of the method in [1], for monotone linear complementarity problems (LCPs). The difference between the algorithm in [1] and our algorithms is that the neighborhood of the smoothing central path used in this paper differs from that in [1]. In addition, Algorithm 2.1 differs from the algorithm in [1] in the calculation of the predictor step. Compared with the results in [1], the global and local convergence of the two methods can be obtained under very mild conditions; in particular, global convergence does not require boundedness of the inverse of the Jacobian. The superlinear convergence of Algorithm 2.1′ is obtained under the assumption of nonsingularity of the generalized Jacobian of φ(x, y) at the limit point, and Algorithm 2.1 attains superlinear convergence under the assumption of strict complementarity at the solution. The efficiency of the two methods is tested by numerical experiments.
Keywords: monotone LCP, predictor-corrector method, smoothing methods, global convergence, quadratic convergence
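The smoothing idea can be illustrated with the Chen-Harker-Kanzow-Smale function φ_μ(a, b) = a + b − sqrt((a − b)² + 4μ²), whose zero set relaxes complementarity to a·b = μ² with a, b > 0. The sketch below follows the smoothed central path with plain Newton steps and a crude μ-reduction schedule; this is not the paper's predictor-corrector scheme, and the test problem and all names are illustrative:

```python
import numpy as np

def smoothed_newton_lcp(M, q, mu0=1.0, mu_min=1e-9):
    """Solve the monotone LCP  y = Mx + q, x >= 0, y >= 0, x.y = 0  by
    driving mu -> 0 and applying Newton's method to the smoothed system
        F(x, y) = [ M x + q - y ;  phi_mu(x_i, y_i) ] = 0,
    where phi_mu(a, b) = a + b - sqrt((a - b)**2 + 4 mu**2)  (CHKS).
    A plain path-following sketch, far simpler than the paper's algorithms."""
    n = len(q)
    x, y = np.ones(n), np.ones(n)
    mu = mu0
    while mu >= mu_min:
        for _ in range(50):                      # Newton loop at fixed mu
            s = np.sqrt((x - y)**2 + 4 * mu**2)
            F = np.concatenate([M @ x + q - y, x + y - s])
            if np.linalg.norm(F) < 0.1 * mu:
                break
            Da = np.diag(1 - (x - y) / s)        # d phi / d x
            Db = np.diag(1 + (x - y) / s)        # d phi / d y
            J = np.block([[M, -np.eye(n)], [Da, Db]])
            d = np.linalg.solve(J, -F)
            x += d[:n]; y += d[n:]
        mu /= 10.0
    return x, y

M = np.array([[2.0, 1.0], [1.0, 2.0]])           # positive definite => monotone
q = np.array([-1.0, -1.0])
x, y = smoothed_newton_lcp(M, q)                 # exact solution: x = (1/3, 1/3)
```

For μ > 0 the square root never vanishes, so F is smooth and the Jacobian is nonsingular for monotone M, which is what makes Newton steps well defined along the path.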
3. A CONSTRAINED OPTIMIZATION APPROACH FOR LCP
Authors: Ju-liang Zhang, Jian Chen, Xin-jian Zhuo. Journal of Computational Mathematics (SCIE, CSCD), 2004, No. 4, pp. 509-522 (14 pages)
Abstract: In this paper, the LCP is converted to an equivalent nonsmooth nonlinear equation system H(x, y) = 0 by using the well-known NCP function, the Fischer-Burmeister function. Note that some equations in H(x, y) = 0 are nonsmooth and nonlinear, hence difficult to solve, while the others are linear, hence easy to solve. We then further convert the nonlinear equation system H(x, y) = 0 to an optimization problem with linear equality constraints. After that, we study the conditions under which the K-T points of the optimization problem are solutions of the original LCP, and propose a method to solve the optimization problem. In this algorithm, the search direction is obtained by solving a strictly convex program at each iterate; however, our algorithm is essentially different from the traditional SQP method. The global convergence of the method is proved under mild conditions. In addition, the algorithm is proved to be superlinearly convergent under the conditions that M is a P0 matrix and the limit point is a strictly complementary solution of the LCP. Preliminary numerical experiments with this method are reported.
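The Fischer-Burmeister function mentioned above is φ(a, b) = sqrt(a² + b²) − a − b, which vanishes exactly when a ≥ 0, b ≥ 0, and ab = 0. A minimal sketch of the reformulation H(x, y) = 0, showing the mix of easy linear rows and hard nonsmooth rows; the LCP data M, q below are illustrative, not from the paper:

```python
import numpy as np

def fischer_burmeister(a, b):
    """phi(a, b) = sqrt(a^2 + b^2) - a - b; phi = 0  <=>  a >= 0, b >= 0, a*b = 0."""
    return np.sqrt(a**2 + b**2) - a - b

def H(x, y, M, q):
    """Equation system equivalent to the LCP: the first block of rows
    (M x + q - y) is linear and easy; the Fischer-Burmeister rows are
    nonsmooth (nondifferentiable where x_i = y_i = 0) and hard."""
    return np.concatenate([M @ x + q - y, fischer_burmeister(x, y)])

# Illustrative LCP data; (x, y) = ((1/3, 1/3), (0, 0)) solves it.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
x, y = np.array([1/3, 1/3]), np.zeros(2)
residual = np.linalg.norm(H(x, y, M, q))         # ~0 at the LCP solution
```

Note that φ(1, 1) = sqrt(2) − 2 < 0: the function is negative whenever both arguments are positive, so its zeros encode complementarity, not mere nonnegativity.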