Journal Articles
187 articles found
1. Almost Sure Convergence of Proximal Stochastic Accelerated Gradient Methods
Authors: Xin Xiang, Haoming Xia — 《Journal of Applied Mathematics and Physics》, 2024, Issue 4, pp. 1321-1336 (16 pages)
Proximal gradient descent and its accelerated version are effective methods for solving the sum of smooth and non-smooth problems. When the smooth function can be represented as a sum of multiple functions, the stochastic proximal gradient method performs well. However, research on its accelerated version remains limited. This paper proposes a proximal stochastic accelerated gradient (PSAG) method to address problems involving a combination of smooth and non-smooth components, where the smooth part corresponds to the average of multiple block sums. Moreover, most existing convergence analyses hold only in expectation. To this end, under some mild conditions, we present an almost sure convergence result for unbiased gradient estimation in the non-smooth setting. We also establish that the minimum of the squared gradient-mapping norm converges to zero with probability one.
Keywords: proximal stochastic accelerated method; almost sure convergence; composite optimization; non-smooth optimization; stochastic optimization; accelerated gradient method
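The PSAG-type update described in the abstract above can be sketched for an ℓ1-regularized least-squares problem. This is an illustrative reconstruction, not the authors' exact algorithm: the function `psag_sketch`, the per-epoch momentum restart, and the stepsize rule are all assumptions made for the sketch.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def psag_sketch(A, b, lam=0.1, epochs=100, rng=None):
    """Minimize (1/2n)||Ax - b||^2 + lam*||x||_1 with single-sample
    proximal gradient steps plus Nesterov-style momentum (restarted
    at each epoch for stability)."""
    rng = np.random.default_rng(rng)
    n, d = A.shape
    # conservative stepsize ~ 0.5 / max_i L_i for the per-row gradients
    step = 0.5 / np.sum(A * A, axis=1).max()
    x = np.zeros(d)
    for _ in range(epochs):
        y, t = x.copy(), 1.0
        for i in rng.permutation(n):
            gi = A[i] * (A[i] @ y - b[i])            # unbiased gradient estimate
            x_new = soft_threshold(y - step * gi, step * lam)
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # extrapolation
            x, t = x_new, t_new
    return x
```

The inner step pairs an unbiased single-row gradient with the soft-thresholding prox; the Nesterov extrapolation supplies the acceleration.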
2. Online Gradient Methods with a Punishing Term for Neural Networks (Cited 2)
Authors: Kong Jun, Wu Wei — 《Northeastern Mathematical Journal》, CSCD, 2001, Issue 3, pp. 371-378 (8 pages)
Online gradient methods are widely used for training the weights of neural networks and for other engineering computations. In certain cases, the resulting weights may become very large, causing difficulties in the implementation of the network by electronic circuits. In this paper we introduce a punishing term into the error function of the training procedure to prevent this situation. The convergence of the corresponding iterative training procedure and the boundedness of the weight sequence are proved. A supporting numerical example is also provided.
Keywords: feedforward neural network; online gradient method; convergence; boundedness; punishing term
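A minimal sketch of an online gradient rule with a penalty ("punishing") term, assuming a single linear unit and a squared-error loss; the function name and the λ‖w‖² form of the penalty are illustrative choices, not taken from the paper.

```python
import numpy as np

def online_gradient_penalty(samples, lr=0.05, lam=0.01, rng=None):
    """Online gradient descent on E(w) = 0.5*(w.x - y)^2 + 0.5*lam*||w||^2.
    The penalty ('punishing') term shrinks w at each step, which keeps the
    weight sequence bounded."""
    rng = np.random.default_rng(rng)
    d = samples[0][0].shape[0]
    w = rng.normal(size=d)
    history = []
    for x, y in samples:
        err = w @ x - y
        w -= lr * (err * x + lam * w)   # gradient of loss plus penalty
        history.append(float(np.linalg.norm(w)))
    return w, history
```

The returned `history` of weight norms makes the boundedness claim easy to inspect empirically.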
3. A New Class of Nonlinear Conjugate Gradient Methods with Global Convergence Properties (Cited 1)
Authors: CHEN Zhong — 《长江大学学报(自科版)(上旬)》, CAS, 2014, Issue 3, pp. I0001-I0003 (3 pages)
Nonlinear conjugate gradient methods have played an important role in solving large-scale unconstrained optimization problems; they are characterized by the simplicity of their iteration and their low memory requirements. It is well known that the direction generated by a conjugate gradient method may not be a descent direction. In this paper, a new class of nonlinear conjugate gradient methods is presented whose search direction is a descent direction for the objective function. If the objective function is differentiable with a Lipschitz continuous gradient and the line search satisfies the strong Wolfe condition, the global convergence result is established.
Keywords: conjugate gradient method; line search; global convergence; unconstrained optimization
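A descent-safeguarded nonlinear CG of the general kind discussed above can be sketched as follows. This uses the classical Fletcher-Reeves coefficient with Armijo backtracking, standing in for the paper's new direction formula and strong Wolfe line search; all names are illustrative.

```python
import numpy as np

def cg_fr(f, grad, x0, iters=500, tol=1e-8):
    """Fletcher-Reeves nonlinear CG with Armijo backtracking.
    A restart to steepest descent stands in as the descent safeguard,
    which is a simplification of the strong Wolfe machinery."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for k in range(iters):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                 # not a descent direction: restart
            d = -g
        a, fx = 1.0, f(x)
        while f(x + a * d) > fx + 1e-4 * a * (g @ d):   # Armijo condition
            a *= 0.5
            if a < 1e-12:
                break
        x_new = x + a * d
        g_new = grad(x_new)
        beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        if (k + 1) % x.size == 0:          # periodic restart
            d = -g_new
        x, g = x_new, g_new
    return x
```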
4. A note on a family of proximal gradient methods for quasi-static incremental problems in elastoplastic analysis
Authors: Yoshihiro Kanno — 《Theoretical & Applied Mechanics Letters》, CAS, CSCD, 2020, Issue 5, pp. 315-320 (6 pages)
Accelerated proximal gradient methods have recently been developed for solving quasi-static incremental problems of elastoplastic analysis under several different yield criteria. It has been demonstrated through numerical experiments that these methods can outperform conventional optimization-based approaches in computational plasticity. However, in the literature these algorithms are described individually for specific yield criteria, and hence no guide exists for applying them to other yield criteria. This short paper presents a general form of algorithm design, independent of specific forms of yield criteria, that unifies the existing proximal gradient methods. A clear interpretation is also given for each step of the presented general algorithm, so that each update rule is linked to the underlying physical laws in terms of mechanical quantities.
Keywords: elastoplastic analysis; incremental problem; nonsmooth convex optimization; first-order optimization method; proximal gradient method
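The unified proximal gradient form described above reduces to one update, x⁺ = prox_{αg}(x − α∇f(x)), with all problem dependence inside the prox operator. A minimal sketch, using projection onto a ball as a stand-in prox (loosely analogous to a return-mapping step; not the paper's algorithm — `proximal_gradient` and `prox_ball` are illustrative names):

```python
import numpy as np

def proximal_gradient(grad_f, prox_g, x0, step, iters=200):
    """Generic proximal gradient iteration x+ = prox_{step*g}(x - step*grad_f(x)).
    The unified algorithm has this shape; the yield-criterion dependence
    enters only through the choice of prox operator."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = prox_g(x - step * grad_f(x), step)
    return x

def prox_ball(v, _step, radius=1.0):
    # Prox of the indicator of a ball = Euclidean projection onto it
    n = np.linalg.norm(v)
    return v if n <= radius else v * (radius / n)
```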
5. GLOBAL CONVERGENCE OF THE GENERAL THREE TERM CONJUGATE GRADIENT METHODS WITH THE RELAXED STRONG WOLFE LINE SEARCH
Authors: Xu Zeshui, Yue Zhenjun (Institute of Sciences, PLA University of Science and Technology, Nanjing 210016) — 《Applied Mathematics (A Journal of Chinese Universities)》, SCIE, CSCD, 2001, Issue 1, pp. 58-62 (5 pages)
The global convergence of the general three-term conjugate gradient methods with the relaxed strong Wolfe line search is proved.
Keywords: conjugate gradient method; inexact line search; global convergence
6. Global Convergence of Conjugate Gradient Methods without Line Search
Authors: Cuiling CHEN, Yu CHEN — 《Journal of Mathematical Research with Applications》, CSCD, 2018, Issue 5, pp. 541-550 (10 pages)
In this paper, a new steplength formula is proposed for unconstrained optimization, which determines the step size in a single step and avoids the line search. Global convergence of five well-known conjugate gradient methods with this formula is analyzed, with the following results: (1) the DY method globally converges for a strongly convex LC^1 objective function; (2) the CD method, the FR method, the PRP method and the LS method globally converge for a general, not necessarily convex, LC^1 objective function.
Keywords: unconstrained optimization; conjugate gradient method; line search; global convergence
7. A CLASS OF NONMONOTONE CONJUGATE GRADIENT METHODS FOR NONCONVEX FUNCTIONS
Authors: Liu Yun, Wei Zengxin — 《Applied Mathematics (A Journal of Chinese Universities)》, SCIE, CSCD, 2002, Issue 2, pp. 208-214 (7 pages)
This paper discusses the global convergence of a class of nonmonotone conjugate gradient methods (NM methods) for nonconvex objective functions. This class of methods includes the nonmonotone counterparts of the modified Polak-Ribière method and the modified Hestenes-Stiefel method as special cases.
Keywords: nonmonotone conjugate gradient method; nonmonotone line search; global convergence; unconstrained optimization
8. A New Class of Nonlinear Conjugate Gradient Methods with Global Convergence Properties
Authors: CHEN Zhong — 《长江大学学报(自科版)(上旬)》, 2014, Issue 3, pp. 1-4 (4 pages)
Nonlinear conjugate gradient methods have played an important role in solving large-scale unconstrained optimization problems; they are characterized by the simplicity of their iteration and their low memory requirements. It is well known that the direction generated by a conjugate gradient method may not be a descent direction. In this paper, a new class of nonlinear conjugate gradient methods is presented whose search direction is a descent direction for the objective function. If the objective function is differentiable with a Lipschitz continuous gradient and the line search satisfies the strong Wolfe condition, the global convergence result is established.
Keywords: conjugate gradient method; line search; global convergence; unconstrained optimization
9. CONVERGENCE ANALYSIS ON A CLASS OF CONJUGATE GRADIENT METHODS WITHOUT SUFFICIENT DECREASE CONDITION (Cited 1)
Authors: Liu Guanghui, Han Jiye, Qi Houduo, Xu Zhongling — 《Acta Mathematica Scientia》, SCIE, CSCD, 1998, Issue 1, pp. 11-16 (6 pages)
Recently, Gilbert and Nocedal [3] investigated the global convergence of conjugate gradient methods related to the Polak-Ribière formula, restricting beta(k) to non-negative values. [5] discussed the same problem as [3] and relaxed beta(k) to be negative when the objective function is convex. This paper allows beta(k) to be selected from a wider range than [5]. In particular, the global convergence of the corresponding algorithm without the sufficient decrease condition is proved.
Keywords: Polak-Ribière conjugate gradient method; strong Wolfe line search; global convergence
10. A Note on R-Linear Convergence of Nonmonotone Gradient Methods
Authors: Xin-Rui Li, Ya-Kui Huang — 《Journal of the Operations Research Society of China》, 2025, Issue 1, pp. 313-325 (13 pages)
Nonmonotone gradient methods generally perform better than their monotone counterparts, especially on unconstrained quadratic optimization. However, the known convergence rate of a monotone method is often much better than that of its nonmonotone variant. With the aim of shrinking the gap between the theory and practice of nonmonotone gradient methods, we introduce a property for the convergence analysis of a large collection of gradient methods. We prove that any gradient method using stepsizes satisfying the property will converge R-linearly at a rate of 1 − λ₁/M₁, where λ₁ is the smallest eigenvalue of the Hessian matrix and M₁ is the upper bound of the inverse stepsize. Our results indicate that the existing convergence rates of many nonmonotone methods can be improved to 1 − 1/κ, with κ being the associated condition number.
Keywords: gradient methods; R-linear convergence; nonmonotone; quadratic optimization
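The rate 1 − λ₁/M₁ can be checked numerically for the simplest member of the family, fixed-step gradient descent on a diagonal quadratic; the function below is an illustrative experiment, not code from the paper.

```python
import numpy as np

def gradient_descent_rates(eigs, steps=200):
    """Fixed-step gradient descent on f(x) = 0.5 * x' diag(eigs) x with
    stepsize 1/M1, M1 = max eigenvalue. Returns the per-iteration
    contraction factors of ||x_k||, which the theory bounds above by
    1 - eigs.min()/M1."""
    eigs = np.asarray(eigs, dtype=float)
    M1 = eigs.max()
    x = np.ones_like(eigs)
    ratios = []
    for _ in range(steps):
        x_new = x - (1.0 / M1) * (eigs * x)   # gradient step
        ratios.append(np.linalg.norm(x_new) / np.linalg.norm(x))
        x = x_new
    return ratios, 1.0 - eigs.min() / M1
```

Every observed ratio stays below the theoretical bound, and the ratios approach it as the slowest eigen-direction comes to dominate.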
11. Cyclic Gradient Methods for Unconstrained Optimization
Authors: Ya Zhang, Cong Sun — 《Journal of the Operations Research Society of China》, EI, CSCD, 2024, Issue 3, pp. 809-828 (20 pages)
The gradient method is popular for solving large-scale problems. In this work, the cyclic gradient methods for quadratic function minimization are extended to general smooth unconstrained optimization problems. Combined with a nonmonotonic line search, the methods are proved globally convergent. Furthermore, the proposed algorithms have a sublinear convergence rate for general convex functions, and an R-linear convergence rate for strongly convex problems. Numerical experiments show that the proposed methods are effective compared to the state of the art.
Keywords: gradient method; unconstrained optimization; nonmonotonic line search; global convergence
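A cyclic gradient method in the sense above reuses one stepsize for several consecutive iterations. The sketch below cycles a Barzilai-Borwein stepsize; the cycle length and the BB1 formula are illustrative assumptions, and the nonmonotonic line-search globalization of the paper is omitted.

```python
import numpy as np

def cyclic_bb(grad, x0, cycle=2, iters=300, tol=1e-10, alpha0=1e-3):
    """Cyclic gradient method: the BB1 stepsize s's / s'y is recomputed
    only every `cycle` iterations and reused in between."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    alpha = alpha0
    for k in range(iters):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - alpha * g
        g_new = grad(x_new)
        if k % cycle == cycle - 1:
            s, y = x_new - x, g_new - g
            if s @ y > 0:
                alpha = (s @ s) / (s @ y)   # BB1 stepsize
        x, g = x_new, g_new
    return x
```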
12. A Modified PRP-HS Hybrid Conjugate Gradient Algorithm for Solving Unconstrained Optimization Problems
Authors: LI Xiangli, WANG Zhiling, LI Binglan — 《应用数学》, PKU Core, 2025, Issue 2, pp. 553-564 (12 pages)
In this paper, we propose a three-term conjugate gradient method for solving unconstrained optimization problems, based on the Hestenes-Stiefel (HS) and Polak-Ribière-Polyak (PRP) conjugate gradient methods. Under the standard Wolfe line search, the proposed search direction is a descent direction. For general nonlinear functions, the method is globally convergent. Finally, numerical results show that the proposed method is efficient.
Keywords: conjugate gradient method; unconstrained optimization; sufficient descent condition; global convergence
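A three-term direction of the kind described can be built so that sufficient descent holds by construction. The sketch below uses a known three-term PRP-type formula (with θ = g⁺ᵀd/‖g‖², which forces g⁺ᵀd⁺ = −‖g⁺‖²) as a hypothetical stand-in for the authors' PRP-HS hybrid, and Armijo backtracking instead of the Wolfe search.

```python
import numpy as np

def three_term_prp(f, grad, x0, iters=300, tol=1e-8):
    """Three-term PRP-type CG: d+ = -g+ + beta*d - theta*y, with theta
    chosen so g+.d+ = -||g+||^2 (sufficient descent) holds by
    construction, independent of the line search."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        a, fx = 1.0, f(x)   # Armijo backtracking (Wolfe in the paper)
        while f(x + a * d) > fx + 1e-4 * a * (g @ d):
            a *= 0.5
            if a < 1e-12:
                break
        x_new = x + a * d
        g_new = grad(x_new)
        y = g_new - g
        gg = g @ g
        beta = (g_new @ y) / gg          # PRP coefficient
        theta = (g_new @ d) / gg
        d = -g_new + beta * d - theta * y
        x, g = x_new, g_new
    return x
```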
13. Achieving detector-grade CdTe(Cl) single crystals through vapor-pressure-controlled vertical gradient freeze growth
Authors: Zi-Ang Yin, Ya-Ru Zhang, Zhe Kang, Xiang-Gang Zhang, Jin-Bo Liu, Ke-Jin Liu, Zheng-Yi Sun, Wan-Qi Jie, Qing-Hua Zhao, Tao Wang — 《Nuclear Science and Techniques》, 2025, Issue 7, pp. 213-221 (9 pages)
Cadmium telluride (CdTe), which has a high average atomic number and a unique band structure, is a leading material for room-temperature X/γ-ray detectors. Resistivity and mobility are the two most important properties of detector-grade CdTe single crystals. However, despite decades of research, the fabrication of high-resistivity, high-mobility CdTe single crystals faces persistent challenges, primarily because the stoichiometric composition cannot be well controlled owing to the high volatility of Cd at high temperatures. This volatility introduces Te inclusions and cadmium vacancies (V_Cd) into the as-grown CdTe ingot, which significantly degrade device performance. In this study, we obtained detector-grade CdTe single crystals by simultaneously employing a Cd reservoir and chlorine (Cl) dopants via a vertical gradient freeze (VGF) method. By installing a Cd reservoir, we can maintain the Cd pressure under crystal growth conditions, thereby preventing the accumulation of Te in the CdTe ingot. Additionally, the Cl dopant helps improve the CdTe resistivity by minimizing the V_Cd density through the formation of an acceptor complex (Cl_Te-V_Cd)^-1. The crystalline quality of the obtained CdTe(Cl) was evidenced by a reduction in large Te inclusions, high optical transmission (60%), and a sharp absorption edge (1.456 eV). The presence of substitutional Cl dopants, known as Cl_Te^+, confirmed by photoluminescence spectroscopy, supports a record-high resistivity of 1.5×10^10 Ω·cm together with a remarkable electron mobility of 1075±88 cm^2·V^-1·s^-1. Moreover, using our crystals, we fabricated a planar detector with μτ_e of (1.11±0.04)×10^-4 cm^2/V, which exhibited decent radiation-detection performance. This study demonstrates that the vapor-pressure-controlled VGF method is a viable technical route for fabricating detector-grade CdTe crystals.
Keywords: CdTe; semiconductor detector; alpha detector; vertical gradient freeze method
14. TESTING DIFFERENT CONJUGATE GRADIENT METHODS FOR LARGE-SCALE UNCONSTRAINED OPTIMIZATION (Cited 10)
Authors: Yu-hong Dai, Qin Ni — 《Journal of Computational Mathematics》, SCIE, CSCD, 2003, Issue 3, pp. 311-320 (10 pages)
In this paper we test different conjugate gradient (CG) methods for solving large-scale unconstrained optimization problems. The methods are divided into two groups: the first group includes five basic CG methods and the second includes five hybrid CG methods. A collection of medium-scale and large-scale test problems is drawn from a standard code of test problems, CUTE. The conjugate gradient methods are ranked according to the numerical results, and some remarks are given.
Keywords: conjugate gradient methods; large-scale; unconstrained optimization; numerical tests
15. Effect of nitrogen introduction methods on the microstructure and properties of gradient cemented carbides (Cited 2)
Authors: Tian-en Yang, Ji Xiong, Lan Sun, Zhi-xing Guo, Ding Cao — 《International Journal of Minerals, Metallurgy and Materials》, SCIE, EI, CAS, CSCD, 2011, Issue 6, pp. 709-716 (8 pages)
Gradient cemented carbides with a surface zone depleted in cubic phases were prepared following normal powder-metallurgical procedures. Gradient zone formation and the influence of nitrogen introduction methods on the microstructure and performance of the alloys were investigated. The results show that a simple one-step vacuum sintering technique is feasible for producing gradient cemented carbides. Gradient structure formation is attributed to the gradient in nitrogen activity during sintering, and is independent of the nitrogen introduction method. A uniform carbon distribution is found throughout the materials. Moreover, the transverse rupture strength of the cemented carbides can be increased by a gradient layer. Different nitrogen carriers give the alloys distinct microstructures and mechanical properties, and a gradient alloy with ultrafine TiC0.5N0.5 is found to be optimal.
Keywords: gradient cemented carbide; gradient methods; nitrogen; microstructure; mechanical properties; sintering
16. Convergence of On-Line Gradient Methods for Two-Layer Feedforward Neural Networks
Authors: Li Zhengxue, Wu Wei, Zhang Hongwei — 《Journal of Mathematical Research and Exposition》, CSCD, PKU Core, 2001, Issue 2, p. 12 (1 page)
A discussion is given of the convergence of on-line gradient methods for two-layer feedforward neural networks in general cases. The theories are applied to some common activation functions and energy functions.
Keywords: on-line gradient method; feedforward neural network; convergence
17. TWO NOVEL GRADIENT METHODS WITH OPTIMAL STEP SIZES
Authors: Harry Oviedo, Oscar Dalmau, Rafael Herrera — 《Journal of Computational Mathematics》, SCIE, CSCD, 2021, Issue 3, pp. 375-391 (17 pages)
In this work we introduce two new Barzilai-Borwein-like step sizes for the classical gradient method for strictly convex quadratic optimization problems. The proposed step sizes employ second-order information in order to obtain faster gradient-type methods. Both step sizes are derived from two unconstrained optimization models that involve approximate information about the Hessian of the objective function. A convergence analysis of the proposed algorithm is provided. Some numerical experiments are performed to compare the efficiency and effectiveness of the proposed methods with similar methods in the literature. Experimentally, it is observed that our proposals accelerate the gradient method at nearly no extra computational cost, which makes them a good alternative for solving large-scale problems.
Keywords: gradient methods; convex quadratic optimization; Hessian spectral properties; steplength selection
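For context, the classical Barzilai-Borwein stepsize that these two new steps refine can be sketched on a strictly convex quadratic; the code below is a textbook BB1 iteration, not the paper's proposals.

```python
import numpy as np

def bb_gradient(Q, b, x0, iters=500, tol=1e-10):
    """Gradient method for f(x) = 0.5 x'Qx - b'x with the Barzilai-Borwein
    (BB1) stepsize s's / s'y, a Rayleigh-quotient estimate of inverse
    curvature -- the family the two new step sizes extend."""
    x = np.asarray(x0, dtype=float)
    g = Q @ x - b
    alpha = 1.0 / np.linalg.norm(Q, 2)   # safe first step
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - alpha * g
        g_new = Q @ x_new - b
        s, y = x_new - x, g_new - g
        if s @ y > 0:
            alpha = (s @ s) / (s @ y)    # BB1 stepsize
        x, g = x_new, g_new
    return x
```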
18. PRECONDITIONED CONJUGATE GRADIENT METHODS FOR INTEGRAL EQUATIONS OF THE SECOND KIND DEFINED ON THE HALF-LINE
Authors: Chan, R.H., Lin, F.R. — 《Journal of Computational Mathematics》, SCIE, CSCD, 1996, Issue 3, pp. 223-236 (14 pages)
We consider solving integral equations of the second kind defined on the half-line [0, ∞) by the preconditioned conjugate gradient method. Convergence is known to be slow due to the non-compactness of the associated integral operator. In this paper, we construct two different circulant integral operators to be used as preconditioners for the method to speed up its convergence rate. We prove that if the given integral operator is close to a convolution-type integral operator, then the preconditioned systems will have spectra clustered around 1 and hence the preconditioned conjugate gradient method will converge superlinearly. Numerical examples are given to illustrate the fast convergence.
Keywords: preconditioned conjugate gradient methods; integral equations of the second kind; half-line
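The preconditioned CG iteration itself is standard; the paper's contribution is the circulant choice of preconditioner. A minimal PCG sketch, where `M_solve` applies the preconditioner's inverse (a Jacobi stand-in is used in the test below; a circulant operator would instead be applied via FFT):

```python
import numpy as np

def pcg(A, b, M_solve, x0=None, tol=1e-10, iters=500):
    """Preconditioned conjugate gradient for a symmetric positive
    definite A. M_solve(r) must apply the inverse of the preconditioner."""
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    r = b - A @ x
    z = M_solve(r)
    p = z.copy()
    rz = r @ z
    for _ in range(iters):
        if np.linalg.norm(r) < tol:
            break
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        z = M_solve(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p   # update the search direction
        rz = rz_new
    return x
```

A good preconditioner clusters the spectrum of the preconditioned system around 1, which is exactly the mechanism behind the superlinear convergence claimed in the abstract.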
19. THE RESTRICTIVELY PRECONDITIONED CONJUGATE GRADIENT METHODS ON NORMAL RESIDUAL FOR BLOCK TWO-BY-TWO LINEAR SYSTEMS (Cited 4)
Authors: Junfeng Yin, Zhongzhi Bai — 《Journal of Computational Mathematics》, SCIE, EI, CSCD, 2008, Issue 2, pp. 240-249 (10 pages)
The restrictively preconditioned conjugate gradient (RPCG) method is further developed to solve large sparse systems of linear equations with a block two-by-two structure. The basic idea of this new approach is to apply the RPCG method to the normal-residual equation of the block two-by-two linear system and to construct each required approximate matrix by making use of the incomplete orthogonal factorization of the involved matrix blocks. Numerical experiments show that the new method, called the restrictively preconditioned conjugate gradient on normal residual (RPCGNR), is more robust and effective than either the known RPCG method or the standard conjugate gradient on normal residual (CGNR) method when used for solving large sparse saddle point problems.
Keywords: block two-by-two linear system; saddle point problem; restrictively preconditioned conjugate gradient method; normal-residual equation; incomplete orthogonal factorization
20. A Framework of Convergence Analysis of Mini-batch Stochastic Projected Gradient Methods (Cited 1)
Authors: Jian Gu, Xian-Tao Xiao — 《Journal of the Operations Research Society of China》, EI, CSCD, 2023, Issue 2, pp. 347-369 (23 pages)
In this paper, we establish a unified framework to study the almost sure global convergence and the expected convergence rates of a class of mini-batch stochastic (projected) gradient (SG) methods, including two popular types of SG: stepsize-diminished SG and batch-size-increased SG. We also show that the standard variance uniformly bounded assumption, which is frequently used in the literature to investigate the convergence of SG, is actually not required when the gradient of the objective function is Lipschitz continuous. Finally, we show that our framework can also be used for analyzing the convergence of a mini-batch stochastic extragradient method for stochastic variational inequality.
Keywords: stochastic projected gradient method; variance uniformly bounded; convergence analysis
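A stepsize-diminished mini-batch stochastic projected gradient iteration of the type covered by the framework can be sketched as follows; the ball constraint, batch size, and 1/k stepsize schedule are illustrative assumptions, not the paper's setting.

```python
import numpy as np

def minibatch_spg(A, b, radius=1.0, batch=8, steps=300, rng=None):
    """Mini-batch stochastic projected gradient for
    min 0.5/n ||Ax - b||^2 subject to ||x|| <= radius, using a
    diminishing stepsize schedule of order 1/k."""
    rng = np.random.default_rng(rng)
    n, d = A.shape
    x = np.zeros(d)
    for k in range(1, steps + 1):
        idx = rng.choice(n, size=batch, replace=False)
        g = A[idx].T @ (A[idx] @ x - b[idx]) / batch   # mini-batch gradient
        x -= g / (k + 10.0)                            # diminishing stepsize
        norm = np.linalg.norm(x)
        if norm > radius:
            x *= radius / norm                         # projection onto the ball
    return x
```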