Journal Articles
9 articles found
1. A note on a family of proximal gradient methods for quasi-static incremental problems in elastoplastic analysis
Authors: Yoshihiro Kanno. Theoretical & Applied Mechanics Letters (CAS, CSCD), 2020(5): 315-320.
Accelerated proximal gradient methods have recently been developed for solving quasi-static incremental problems of elastoplastic analysis with several different yield criteria. It has been demonstrated through numerical experiments that these methods can outperform conventional optimization-based approaches in computational plasticity. However, in the literature these algorithms are described individually for specific yield criteria, and hence no guide exists for applying them to other yield criteria. This short paper presents a general form of algorithm design, independent of the specific form of the yield criterion, that unifies the existing proximal gradient methods. A clear interpretation is also given for each step of the general algorithm, so that each update rule is linked to the underlying physical laws in terms of mechanical quantities.
Keywords: Elastoplastic analysis; Incremental problem; Nonsmooth convex optimization; First-order optimization method; Proximal gradient method
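The proximal gradient update underlying this family of methods can be sketched as follows. This is a generic illustration with a least-squares smooth term and an l1 regularizer, not the elastoplastic algorithm from the paper; all function names are illustrative:

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1: elementwise soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_gradient(A, b, lam, step, iters=500):
    # ISTA sketch: minimize 0.5*||Ax - b||^2 + lam*||x||_1.
    # A step size no larger than 1/||A^T A|| guarantees descent.
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                        # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)  # proximal (shrinkage) step
    return x
```

The accelerated variants discussed in the entries below add a momentum extrapolation between iterations but keep the same gradient-then-prox structure.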
2. AN SQP-TYPE PROXIMAL GRADIENT METHOD FOR COMPOSITE OPTIMIZATION PROBLEMS WITH EQUALITY CONSTRAINTS
Authors: Pinzheng Wei, Weihong Yang. Journal of Computational Mathematics, 2025(4): 1016-1044.
In this paper, we present an SQP-type proximal gradient method (SQP-PG) for composite optimization problems with equality constraints. At each iteration, SQP-PG solves a subproblem to obtain the search direction and uses an exact penalty function as the merit function to decide whether the trial step is accepted. The global convergence of the SQP-PG method is proved, and the iteration complexity for obtaining an ε-stationary point is analyzed. We also establish a local linear convergence result for the SQP-PG method under the second-order sufficient condition. Numerical results demonstrate that, compared to state-of-the-art algorithms, SQP-PG is an effective method for equality-constrained composite optimization problems.
Keywords: Composite optimization; Proximal gradient method; SQP method; Semismooth Newton method
3. On the Linear Convergence of a Proximal Gradient Method for a Class of Nonsmooth Convex Minimization Problems (Cited by 4)
Authors: Haibin Zhang, Jiaojiao Jiang, Zhi-Quan Luo. Journal of the Operations Research Society of China (EI), 2013(2): 163-186.
We consider a class of nonsmooth convex optimization problems in which the objective function is the composition of a strongly convex differentiable function with a linear mapping, regularized by the sum of both the l1-norm and the l2-norm of the optimization variables. This class of problems arises naturally from applications in sparse group Lasso, a popular technique for variable selection. An effective approach to solving such problems is the proximal gradient method (PGM). In this paper we prove a local error bound around the optimal solution set for this problem and use it to establish the linear convergence of the PGM without assuming strong convexity of the overall objective function.
Keywords: Proximal gradient method; Error bound; Linear convergence; Sparse group Lasso
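The proximal step for the sparse group Lasso regularizer (l1 plus groupwise l2) has a well-known closed form: soft-threshold elementwise, then shrink each group toward zero. A small sketch with illustrative names, not taken from the paper:

```python
import numpy as np

def sparse_group_prox(v, groups, lam1, lam2):
    # Prox of lam1*||x||_1 + lam2*sum_g ||x_g||_2 at point v.
    # Step 1: elementwise soft-thresholding (the l1 part).
    u = np.sign(v) * np.maximum(np.abs(v) - lam1, 0.0)
    out = np.zeros_like(u)
    # Step 2: groupwise shrinkage (the l2 part); small groups vanish.
    for g in groups:
        norm = np.linalg.norm(u[g])
        if norm > lam2:
            out[g] = (1.0 - lam2 / norm) * u[g]
    return out
```

Plugging this prox into the gradient-then-prox iteration gives the PGM the paper analyzes.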
4. PAPR Reduction in Massive MU-MIMO-OFDM Systems Using the Proximal Gradient Method (Cited by 1)
Authors: Davinder Singh, R.K. Sarin. Journal of Communications and Information Networks (CSCD), 2019(1): 88-94.
In this paper, we address the issue of peak-to-average power ratio (PAPR) reduction in large-scale multiuser multiple-input multiple-output (MU-MIMO) orthogonal frequency-division multiplexing (OFDM) systems. PAPR reduction and the multiuser interference (MUI) cancellation problem are jointly formulated as an l∞-norm-based composite convex optimization problem, which can be solved efficiently using the iterative proximal gradient method. The proximal operator associated with the l∞-norm is evaluated using a low-cost sorting algorithm. The proposed method adaptively chooses the step size to accelerate convergence. Simulation results reveal that the proximal gradient method converges swiftly while providing considerable PAPR reduction and lower out-of-band radiation.
Keywords: OFDM; MU-MIMO; PAPR reduction; Proximal operator; Proximal gradient method
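The l∞-norm prox mentioned in the abstract can be evaluated through the Moreau decomposition together with a sorting-based projection onto the l1-ball. The sketch below is a generic O(n log n) implementation of that idea, not necessarily the paper's exact algorithm:

```python
import numpy as np

def project_l1_ball(v, radius=1.0):
    # Euclidean projection onto {x : ||x||_1 <= radius} via sorting.
    if np.sum(np.abs(v)) <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]              # magnitudes, descending
    css = np.cumsum(u)
    # Largest index where the running threshold stays positive.
    rho = np.nonzero(u - (css - radius) / (np.arange(len(u)) + 1) > 0)[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def prox_linf(v, t):
    # Moreau decomposition: prox_{t*||.||_inf}(v) = v - t * P_B1(v / t),
    # where P_B1 projects onto the unit l1-ball (the dual-norm ball).
    return v - t * project_l1_ball(v / t, 1.0)
```

The effect is to clip the largest-magnitude entries of v, which is exactly what drives the peak reduction.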
5. A Modified Proximal Gradient Method for a Family of Nonsmooth Convex Optimization Problems
Authors: Ying-Yi Li, Hai-Bin Zhang, Fei Li. Journal of the Operations Research Society of China (EI, CSCD), 2017(3): 391-403.
In this paper, we propose a modified proximal gradient method for solving a class of nonsmooth convex optimization problems that arise in many contemporary statistical and signal processing applications. The proposed method adopts a new scheme to construct the descent direction based on the proximal gradient method. It is proven that the modified proximal gradient method is Q-linearly convergent without assuming strong convexity of the objective function. Finally, numerical experiments are conducted to evaluate the proposed method.
Keywords: Nonsmooth convex optimization; Modified proximal gradient method; Q-linear convergence
6. HYBRID REGULARIZED CONE-BEAM RECONSTRUCTION FOR AXIALLY SYMMETRIC OBJECT TOMOGRAPHY
Authors: Xinge Li, Suhua Wei, Haibo Xu, Chong Chen. Acta Mathematica Scientia (SCIE, CSCD), 2022(1): 403-419.
In this paper, we consider 3D tomographic reconstruction of axially symmetric objects from a single radiograph formed by cone-beam X-rays. All contemporary density reconstruction methods in high-energy X-ray radiography are based on the assumption that the cone beam can be treated as fan beams located at parallel planes perpendicular to the symmetry axis, so that the density of the whole object can be recovered layer by layer. By taking the relationship between different layers into account, we undertake cone-beam global reconstruction to resolve the ambiguity effect at the material interfaces of the reconstruction results. In view of the anisotropy of classical discrete total variations, a new discretization of total variation, which yields sharp edges and has better isotropy, is introduced in our reconstruction model. Furthermore, considering that the object density consists of continually varying parts and jumps, a high-order regularization term is introduced. The final hybrid regularization model is solved using the alternating proximal gradient method, which has recently been applied in image processing. Density reconstruction results are presented for simulated radiographs, showing that the proposed method improves the preservation of edge locations.
Keywords: High-energy X-ray radiography; Cone-beam global reconstruction; Inverse problem; Total variation; Alternating proximal gradient method
7. Convergence Rate Analysis of Modified BiG-SAM for Solving Bi-Level Optimization Problems Based on S-FISTA
Authors: Nishi Xiaoyin, Lin Yang. Journal of Applied Mathematics and Physics, 2025(4): 1555-1576.
In this paper, we consider a more general bi-level optimization problem in which the inner objective function consists of three convex functions: one smooth and two nonsmooth. The outer objective function is a classical strongly convex function that may not be smooth. Motivated by smoothing approaches, we modify the classical bi-level gradient sequential averaging method to solve this bi-level optimization problem. Under mild conditions, we obtain the convergence rate of the generated sequence, and then, based on the analysis framework of S-FISTA, we establish the global convergence rate of the proposed algorithm.
Keywords: Bi-level optimization; Convex problems; First-order methods; Proximal gradient method; Sequential averaging method; Moreau envelope
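The classical BiG-SAM scheme that this paper modifies averages an inner proximal-gradient step with an outer gradient step, weighting the outer step by a vanishing sequence. A simplified sketch under illustrative choices (the function names and the weight 2/(k+2) are assumptions, not the paper's modified algorithm):

```python
import numpy as np

def big_sam(grad_f, prox_g, grad_omega, x0, t, s, iters=2000):
    # BiG-SAM sketch: the inner problem is min f(x) + g(x), the outer
    # objective omega selects among the inner minimizers.
    x = np.asarray(x0, dtype=float)
    for k in range(1, iters + 1):
        inner = prox_g(x - t * grad_f(x), t)   # proximal-gradient step on f + g
        outer = x - s * grad_omega(x)          # gradient step on the outer objective
        alpha = 2.0 / (k + 2.0)                # vanishing averaging weight
        x = alpha * outer + (1.0 - alpha) * inner
    return x
```

As alpha shrinks, the iterates settle into the inner solution set while the residual outer steps steer them toward the omega-optimal point of that set.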
8. On Globally Q-Linear Convergence of a Splitting Method for Group Lasso
Authors: Yun-Da Dong, Hai-Bin Zhang, Huan Gao. Journal of the Operations Research Society of China (EI, CSCD), 2018(3): 445-454.
In this paper, we discuss a splitting method for group Lasso. Assuming that the sequence of step lengths has a positive lower bound and a positive upper bound (unrelated to the given problem data), we prove a Q-linear rate of convergence for the distance from the iterates to the solution set. Moreover, we make comparisons with the convergence of the proximal gradient method analyzed very recently.
Keywords: Group Lasso; Splitting method; Proximal gradient method; Q-linear rate of convergence
9. First-Order Algorithms for Convex Optimization with Nonseparable Objective and Coupled Constraints (Cited by 8)
Authors: Xiang Gao, Shu-Zhong Zhang. Journal of the Operations Research Society of China (EI, CSCD), 2017(2): 131-159.
In this paper, we consider a block-structured convex optimization model in which the block variables are nonseparable in the objective and are further linearly coupled in the constraint. For the 2-block case, we propose a number of first-order algorithms to solve this model. First, the alternating direction method of multipliers (ADMM) is extended, assuming that it is easy to optimize the augmented Lagrangian function with respect to one block of variables at a time while fixing the other block. We prove that an O(1/t) iteration complexity bound holds under suitable conditions, where t is the number of iterations. If the subroutines of the ADMM cannot be implemented, then we propose new alternative algorithms, called the alternating proximal gradient method of multipliers, the alternating gradient projection method of multipliers, and hybrids thereof. Under suitable conditions, the O(1/t) iteration complexity bound is shown to hold for all the newly proposed algorithms. Finally, we extend the analysis of the ADMM to the general multi-block case.
Keywords: First-order algorithms; ADMM; Proximal gradient method; Convex optimization; Iteration complexity
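For the familiar separable special case, the 2-block ADMM that this paper extends alternates an x-minimization, a z-minimization, and a dual update. A sketch on the lasso problem (illustrative only; the paper's setting has a nonseparable objective):

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    # Classical 2-block ADMM for min 0.5*||Ax - b||^2 + lam*||z||_1, s.t. x = z.
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)   # u is the scaled dual
    AtA = A.T @ A; Atb = A.T @ b
    M = np.linalg.inv(AtA + rho * np.eye(n))            # cache the x-update solve
    for _ in range(iters):
        x = M @ (Atb + rho * (z - u))                   # x-update: ridge-type solve
        w = x + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)  # z-update: soft-threshold
        u = u + x - z                                   # dual ascent on x - z = 0
    return z
```

When the augmented Lagrangian cannot be minimized exactly in one block, the paper replaces these subproblem solves with proximal-gradient or gradient-projection steps, giving the "method of multipliers" variants listed in the abstract.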