Journal Articles: 32 results found
1. A Primal-Dual SGD Algorithm for Distributed Nonconvex Optimization (Cited: 7)
Authors: Xinlei Yi, Shengjun Zhang, Tao Yang, Tianyou Chai, Karl Henrik Johansson. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2022, No. 5, pp. 812-833.
The distributed nonconvex optimization problem of minimizing a global cost function, formed by a sum of n local cost functions, by using only local information exchange is considered. This problem is an important component of many machine learning techniques with data parallelism, such as deep learning and federated learning. We propose a distributed primal-dual stochastic gradient descent (SGD) algorithm, suitable for arbitrarily connected communication networks and any smooth (possibly nonconvex) cost functions. We show that the proposed algorithm achieves the linear speedup convergence rate O(1/√(nT)) for general nonconvex cost functions and the linear speedup convergence rate O(1/(nT)) when the global cost function satisfies the Polyak-Lojasiewicz (P-L) condition, where T is the total number of iterations. We also show that the output of the proposed algorithm with constant parameters linearly converges to a neighborhood of a global optimum. Numerical experiments demonstrate the efficiency of our algorithm in comparison with the baseline centralized SGD and recently proposed distributed SGD algorithms.
Keywords: distributed nonconvex optimization; linear speedup; Polyak-Lojasiewicz (P-L) condition; primal-dual algorithm; stochastic gradient descent
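A minimal sketch of a primal-dual consensus SGD loop of the kind described above, assuming a doubly stochastic mixing matrix W and per-node stochastic gradient oracles. The function name, step sizes, and the Laplacian-style dual update are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def distributed_primal_dual_sgd(grad_oracles, W, x0, alpha=0.5, beta=0.5, eta=0.01, T=1000):
    """Illustrative primal-dual consensus SGD loop (not the paper's exact scheme).

    grad_oracles : list of callables; grad_oracles[i](x_i) returns a stochastic
                   gradient of the i-th local cost at the local iterate x_i.
    W            : (n, n) symmetric, doubly stochastic mixing matrix of the graph.
    x0           : (n, d) initial local iterates, one row per node.
    """
    x = np.asarray(x0, dtype=float).copy()
    n = x.shape[0]
    v = np.zeros_like(x)                 # dual variables for the consensus constraint
    L = np.eye(n) - W                    # Laplacian-style consensus operator
    for _ in range(T):
        g = np.vstack([grad_oracles[i](x[i]) for i in range(n)])
        x_next = x - eta * (g + alpha * (L @ x) + beta * v)   # primal descent step
        v = v + eta * (L @ x)                                 # dual ascent on L x = 0
        x = x_next
    return x.mean(axis=0)                # average of the local iterates
```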
2. Improved nonconvex optimization model for low-rank matrix recovery (Cited: 1)
Authors: 李玲芝, 邹北骥, 朱承璋. Journal of Central South University (SCIE, EI, CAS, CSCD), 2015, No. 3, pp. 984-991.
Low-rank matrix recovery is an important problem extensively studied in the machine learning, data mining, and computer vision communities. A novel method is proposed for low-rank matrix recovery, targeting higher recovery accuracy and a stronger theoretical guarantee. Specifically, the proposed method is based on a nonconvex optimization model under which the low-rank matrix is recovered from the noisy observation. To solve the model, an effective algorithm is derived by minimizing over the variables alternately. It is proved that this algorithm has a stronger theoretical guarantee than existing work. In natural image denoising experiments, the proposed method achieves lower recovery error than the two compared methods. The proposed low-rank matrix recovery method is also applied to two real-world problems, namely removing noise from verification codes and removing watermarks from images, in which the images recovered by the proposed method are less noisy than those of the two compared methods.
Keywords: machine learning; computer vision; matrix recovery; nonconvex optimization
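The "minimizing over the variables alternately" idea can be illustrated on a generic factored low-rank fit; the ridge-regularized alternating least squares below is a sketch under that simplified model, not the paper's specific nonconvex formulation, and the rank, lam, and initialization are assumptions.

```python
import numpy as np

def altmin_lowrank(Y, rank, lam=0.1, iters=50, seed=0):
    """Alternating least squares for a factored low-rank fit X = U V^T to a
    noisy observation Y (a generic sketch, not the paper's model)."""
    m, n = Y.shape
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    I = lam * np.eye(rank)
    for _ in range(iters):
        U = Y @ V @ np.linalg.inv(V.T @ V + I)    # fix V, solve the ridge LS for U
        V = Y.T @ U @ np.linalg.inv(U.T @ U + I)  # fix U, solve the ridge LS for V
    return U @ V.T

# Usage on a random noisy observation
X_hat = altmin_lowrank(np.random.default_rng(1).standard_normal((60, 40)), rank=3)
```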
3. Monotone Splitting SQP Algorithms for Two-block Nonconvex Optimization Problems with General Linear Constraints and Applications
Authors: Jin-Bao Jian, Guo-Dong Ma, Xiao Xu, Dao-Lan Han. Journal of the Operations Research Society of China, 2025, No. 1, pp. 114-141.
This work discusses a class of two-block nonconvex optimization problems with linear equality, inequality, and box constraints. Based on the ideas of the alternating direction method of multipliers (ADMM), sequential quadratic programming (SQP), and the Armijo line search technique, we propose a novel monotone splitting SQP algorithm. First, the problem is transformed into an optimization problem with only linear equality and box constraints by introducing slack variables. Second, the idea of ADMM is used to decompose the traditional quadratic programming (QP) subproblem. In particular, the QP subproblem corresponding to the introduced slack variable is simple and has an explicit optimal solution, without increasing the computational cost. Third, the search direction is generated from the optimal solutions of the subproblems, and the new iterate is obtained by an Armijo line search on the augmented Lagrangian function. Fourth, the multiplier is updated by a novel approach that differs from that of ADMM. Furthermore, the algorithm is extended to the associated optimization problem in which the box constraints are replaced by general nonempty closed convex sets. The global convergence of the two proposed algorithms is analyzed under weaker assumptions. Finally, preliminary numerical experiments and applications to mid- to large-scale economic dispatch problems for power systems are reported, showing that the proposed algorithms are promising.
Keywords: two-block nonconvex optimization; general linear constraints; splitting sequential quadratic programming; alternating direction method of multipliers; global convergence
4. Convex and Nonconvex Optimization Based on Neurodynamic Method with Zero-Sum Initial Constraint
Authors: Yiyang Ge, Zhanshan Wang, Bibo Zheng. The International Journal of Intelligent Control and Systems, 2024, No. 4, pp. 184-194.
A neurodynamic method (NdM) is proposed in this paper for convex optimization with an equality constraint. The method utilizes a neurodynamic system (NdS) that converges to the optimal solution of a convex optimization problem in a fixed time. Due to its mathematical simplicity, it can also be combined with reinforcement learning (RL) to solve a class of nonconvex optimization problems. To maintain the mathematical simplicity of the NdS, zero-sum initial constraints are introduced to reduce the number of auxiliary multipliers. First, the initial sum of the state variables must satisfy the equality constraint. Second, the sum of their derivatives is designed to remain zero. In order to apply the proposed convex optimization algorithm to nonconvex optimization with mixed constraints, the virtual actions in RL are redefined to avoid the use of inequality-constraint multipliers in the NdS. The proposed NdM serves as an effective search tool within constrained nonconvex optimization algorithms. Numerical examples demonstrate the effectiveness of the proposed algorithm.
Keywords: neurodynamic method (NdM); zero-sum initial constraint; distributed optimization; convex and nonconvex optimization; reinforcement learning (RL)
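The zero-sum idea (start feasible, then keep the sum of the state derivatives at zero) can be illustrated with a simple Euler-discretized gradient flow for a resource-allocation-type problem; the problem form, step size, and names below are assumptions for illustration, not the paper's NdS.

```python
import numpy as np

def neurodynamic_flow(grads, x0, dt=1e-2, steps=5000):
    """Euler sketch of a dynamic for  min sum_i f_i(x_i)  s.t.  sum_i x_i = d.
    x0 must already satisfy the equality constraint (the zero-sum initial
    constraint); removing the mean from the gradient keeps sum_i dx_i/dt = 0,
    so feasibility is preserved along the whole trajectory."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        g = np.array([grads[i](xi) for i, xi in enumerate(x)])
        x = x - dt * (g - g.mean())      # zero-sum direction: its entries sum to 0
    return x

# Toy usage: f_i(x) = 0.5 * a_i * (x - c_i)^2 with sum x_i fixed at 3.0
a, c = np.array([1.0, 2.0, 4.0]), np.array([1.0, 0.0, 2.0])
grads = [lambda x, i=i: a[i] * (x - c[i]) for i in range(3)]
print(neurodynamic_flow(grads, np.array([1.0, 1.0, 1.0])))  # converges near [1, 0, 2]
```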
5. Adaptive sparse and dense hybrid representation with nonconvex optimization
Authors: Xuejun WANG, Feilong CAO, Wenjian WANG. Frontiers of Computer Science (SCIE, EI, CSCD), 2020, No. 4, pp. 65-78.
Sparse representation has been widely used in signal processing, pattern recognition, computer vision, and other fields, with excellent achievements in both theory and practice. However, there are two limitations in its application to classification: sufficient training samples are required for each class, and the samples should be uncorrupted. To alleviate these problems, a sparse and dense hybrid representation (SDR) framework has been proposed, in which the training dictionary is decomposed into a class-specific dictionary and a non-class-specific dictionary. SDR puts an ℓ1 constraint on the coefficients of the class-specific dictionary. Nevertheless, it over-emphasizes sparsity and overlooks the correlation information in the class-specific dictionary, which may lead to poor classification results. To overcome this disadvantage, an adaptive sparse and dense hybrid representation with nonconvex optimization (ASDR-NO) is proposed in this paper. The trace norm is adopted for the class-specific dictionary, which differs from general approaches; the dictionary structure thereby becomes adaptive and the representation ability of the dictionary is improved. Meanwhile, a nonconvex surrogate is used to approximate the rank function in the dictionary decomposition, avoiding the suboptimal solutions of the original rank minimization; the resulting problem can be solved by the iteratively reweighted nuclear norm (IRNN) algorithm. Extensive experiments on benchmark data sets verify the effectiveness and advantages of the proposed algorithm compared with state-of-the-art sparse representation methods.
Keywords: sparse representation; trace norm; nonconvex optimization; low-rank matrix recovery; iteratively reweighted nuclear norm
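A rough sketch of the IRNN idea on a matrix denoising instance, min_X 0.5*||X - Y||_F^2 + sum_i g(sigma_i(X)): each iteration reweights the singular values using a supergradient of a nonconvex surrogate g and applies weighted singular value thresholding. The MCP-like surrogate and all parameter values are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def irnn_denoise(Y, lam=1.0, gamma=5.0, mu=2.0, iters=30):
    """IRNN-style sketch: (1) take weights from a supergradient of the surrogate
    at the current singular values, (2) take a gradient step on the smooth term,
    (3) apply weighted singular value thresholding."""
    X = Y.copy()
    for _ in range(iters):
        s_now = np.linalg.svd(X, compute_uv=False)
        w = lam * np.maximum(1.0 - s_now / (gamma * lam), 0.0)  # supergradient weights
        G = X - (X - Y) / mu                                    # gradient step on 0.5||X - Y||^2
        U, s, Vt = np.linalg.svd(G, full_matrices=False)
        X = (U * np.maximum(s - w / mu, 0.0)) @ Vt              # weighted thresholding
    return X
```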
6. Convergence of Bregman Alternating Direction Method of Multipliers for Nonseparable Nonconvex Objective with Linear Constraints
Authors: Xiaotong Zeng, Junping Yao, Haoming Xia. Journal of Applied Mathematics and Physics, 2024, No. 2, pp. 639-660.
In this paper, we address a two-block linearly constrained nonseparable nonconvex optimization problem with coupling terms. The classical algorithm for such problems, the alternating direction method of multipliers (ADMM), still requires gradient Lipschitz continuity of the objective function to ensure convergence, as far as is currently known. However, many practical applications do not satisfy this smoothness condition. In this study, we establish the convergence of a Bregman ADMM variant for the problem with coupling terms, circumventing the need for global Lipschitz continuity of the gradient. We demonstrate that the iterative sequence generated by our approach converges to a critical point of the problem when the corresponding function satisfies the Kurdyka-Lojasiewicz inequality and certain assumptions hold. In addition, we establish the convergence rate of the algorithm.
Keywords: nonseparable nonconvex optimization; Bregman ADMM; Kurdyka-Lojasiewicz inequality
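For readers unfamiliar with the Bregman machinery, the distance that replaces the quadratic penalty is easy to state. The sketch below computes it for a generic convex kernel and checks the standard fact that the negative-entropy kernel yields the Kullback-Leibler divergence; the kernel choice is illustrative, not the one used in the paper.

```python
import numpy as np

def bregman_distance(phi, grad_phi, x, y):
    """Bregman distance D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>.
    Bregman ADMM replaces the quadratic penalty with such a distance so that
    global Lipschitz continuity of the gradient is no longer required."""
    return phi(x) - phi(y) - np.dot(grad_phi(y), x - y)

# Example: phi(x) = sum x_i log x_i (negative entropy); on vectors with equal
# total mass the Bregman distance equals the Kullback-Leibler divergence.
phi = lambda x: np.sum(x * np.log(x))
grad_phi = lambda x: np.log(x) + 1.0
p, q = np.array([0.2, 0.3, 0.5]), np.array([0.4, 0.4, 0.2])
print(bregman_distance(phi, grad_phi, p, q))   # KL(p || q) since sum p = sum q = 1
```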
7. A splicing algorithm for best subset selection in sliced inverse regression
Authors: Borui Tang, Jin Zhu, Tingyin Wang, Junxian Zhu. 中国科学技术大学学报 (Journal of University of Science and Technology of China; PKU Core), 2025, No. 5, pp. 22-34, 21, I0001.
In this study, we examine sliced inverse regression (SIR), a widely used method for sufficient dimension reduction (SDR). It was designed to find reduced-dimensional versions of multivariate predictors by replacing them with a minimally adequate collection of their linear combinations, without loss of information. Recently, regularization methods have been proposed in SIR to incorporate a sparse structure of predictors for better interpretability. However, existing methods use convex relaxation to bypass the sparsity constraint, which may not lead to the best subset and, in particular, tends to include irrelevant variables when predictors are correlated. In this study, we approach sparse SIR as a nonconvex optimization problem and directly tackle the sparsity constraint by establishing the optimality conditions and iteratively solving them by means of the splicing technique. Without employing convex relaxation on the sparsity constraint or the orthogonality constraint, our algorithm exhibits superior empirical merits, as evidenced by extensive numerical studies. Computationally, our algorithm is much faster than the relaxed approach for the natural sparse SIR estimator. Statistically, it surpasses existing methods in accuracy for central subspace estimation and best subset selection, and sustains high performance even with correlated predictors.
Keywords: splicing technique; best subset selection; sliced inverse regression; nonconvex optimization; sparsity constraint; optimal conditions
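A toy illustration of the splicing idea on ordinary least squares rather than on SIR: keep an active set of size s and accept any single exchange between active and inactive variables that lowers the residual sum of squares. The correlation-screening initialization, the exhaustive single-swap rule, and all names are simplifying assumptions; the paper's splicing rule for sparse SIR is more refined.

```python
import numpy as np
from itertools import product

def single_swap_splicing(X, y, s, iters=20):
    """Toy splicing-style search for best-subset least squares (illustrative only)."""
    n, p = X.shape
    active = list(np.argsort(-np.abs(X.T @ y))[:s])        # screening-based start

    def rss(idx):
        beta, *_ = np.linalg.lstsq(X[:, idx], y, rcond=None)
        return float(np.sum((y - X[:, idx] @ beta) ** 2))

    best = rss(active)
    for _ in range(iters):
        improved = False
        inactive = [j for j in range(p) if j not in active]
        for j_out, j_in in product(list(active), inactive):  # try single exchanges
            trial = [j for j in active if j != j_out] + [j_in]
            val = rss(trial)
            if val < best - 1e-10:
                active, best, improved = trial, val, True
                break
        if not improved:
            break
    return sorted(active), best
```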
8. Convergence of Generalized Alternating Direction Method of Multipliers for Nonseparable Nonconvex Objective with Linear Constraints (Cited: 5)
Authors: Ke GUO, Xin WANG. Journal of Mathematical Research with Applications (CSCD), 2018, No. 5, pp. 523-540.
In this paper, we consider the convergence of the generalized alternating direction method of multipliers (GADMM) for solving a linearly constrained nonconvex minimization model whose objective contains coupled functions. Under the assumption that the augmented Lagrangian function satisfies the Kurdyka-Lojasiewicz inequality, we prove that the sequence generated by the GADMM converges to a critical point of the augmented Lagrangian function when the penalty parameter is sufficiently large. Moreover, we present some sufficient conditions guaranteeing the sublinear and linear convergence rates of the algorithm.
Keywords: generalized alternating direction method of multipliers; Kurdyka-Lojasiewicz inequality; nonconvex optimization
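The iteration scheme under discussion can be summarized abstractly. The sketch below assumes the two augmented-Lagrangian subproblems are available as user-supplied solvers (argmin_x and argmin_y are hypothetical callables), and the relaxed multiplier step with factor alpha is a simplified stand-in for the generalized update analyzed in the paper.

```python
import numpy as np

def admm_scheme(argmin_x, argmin_y, A, B, c, beta=10.0, alpha=1.0, iters=300):
    """Schematic two-block ADMM loop for  min f(x) + g(y)  s.t.  A x + B y = c.
    argmin_x(y, lam) and argmin_y(x, lam) solve the augmented-Lagrangian
    subproblems with penalty beta; alpha relaxes the multiplier update."""
    x = np.zeros(A.shape[1])
    y = np.zeros(B.shape[1])
    lam = np.zeros(A.shape[0])
    for _ in range(iters):
        x = argmin_x(y, lam)                              # x-subproblem
        y = argmin_y(x, lam)                              # y-subproblem
        lam = lam + alpha * beta * (A @ x + B @ y - c)    # (relaxed) multiplier update
    return x, y, lam
```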
9. Generalized Nonconvex Low-Rank Algorithm for Magnetic Resonance Imaging (MRI) Reconstruction
Authors: 吴新峰, 刘且根, 卢红阳, 龙承志, 王玉皞. Journal of Donghua University (English Edition) (EI, CAS), 2017, No. 2, pp. 316-321.
In recent years, using low-rank prior information to reconstruct a signal from a small number of measurements has attracted much attention. In this paper, a generalized nonconvex low-rank (GNLR) algorithm for magnetic resonance imaging (MRI) reconstruction is proposed, which reconstructs the image from highly under-sampled k-space data. In the algorithm, a nonconvex surrogate function replacing the conventional nuclear norm is utilized to enhance the low-rank property inherent in the reconstructed image. An alternating direction method of multipliers (ADMM) is applied to solve the resulting nonconvex model. Extensive experimental results demonstrate that the proposed method consistently recovers MR images efficiently and outperforms current state-of-the-art approaches in terms of higher peak signal-to-noise ratio (PSNR) and lower high-frequency error norm (HFEN) values.
Keywords: magnetic resonance imaging (MRI); low-rank approximation; nonconvex optimization; alternating direction method of multipliers (ADMM)
10. A Hybrid and Inexact Algorithm for Nonconvex and Nonsmooth Optimization
Authors: WANG Yiyang, SONG Xiaoliang. Journal of Systems Science & Complexity, 2025, No. 3, pp. 1330-1350.
The problem of nonconvex and nonsmooth optimization (NNO) has been extensively studied in the machine learning community, leading to the development of numerous fast and convergent numerical algorithms. Existing algorithms typically employ unified iteration schemes and require explicit solutions to subproblems to ensure convergence. However, these inflexible iteration schemes overlook task-specific details and may encounter difficulties in providing explicit solutions to subproblems. In contrast, there is evidence that practical applications can benefit from approximately solving subproblems; however, many existing works fail to establish the theoretical validity of such approximations. In this paper, the authors propose a hybrid inexact proximal alternating method (hiPAM), which addresses a general NNO problem with coupled terms while overcoming all of the aforementioned challenges. The proposed hiPAM algorithm offers a flexible yet highly efficient approach by integrating any efficient method for approximately solving the subproblems that suits the task at hand. Additionally, the authors devise a simple, implementable stopping criterion under which the iterates form a Cauchy sequence and converge to a critical point of the original NNO problem. Numerical experiments using both simulated and real data demonstrate that hiPAM is an exceedingly efficient and robust approach to NNO problems.
Keywords: hybrid inexact proximal alternating method; inexact minimization criteria; machine learning; nonconvex and nonsmooth optimization
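A compact way to see the proximal alternating skeleton that such methods build on: alternate proximal-gradient steps on the two blocks of min H(x, y) + f(x) + g(y). The oracle names and step sizes below are assumed inputs, and the paper's key ingredient, solving each subproblem only approximately under a verifiable stopping rule, is not reproduced here.

```python
import numpy as np

def proximal_alternating(grad_x, grad_y, prox_f, prox_g, x0, y0,
                         tx=0.1, ty=0.1, iters=500):
    """PALM-style skeleton for  min H(x, y) + f(x) + g(y).
    grad_x(x, y), grad_y(x, y): partial gradients of the smooth coupling H.
    prox_f(v, t), prox_g(v, t): proximal operators of f and g (assumed given)."""
    x = np.asarray(x0, dtype=float).copy()
    y = np.asarray(y0, dtype=float).copy()
    for _ in range(iters):
        x = prox_f(x - tx * grad_x(x, y), tx)   # proximal-gradient step in x
        y = prox_g(y - ty * grad_y(x, y), ty)   # proximal-gradient step in y
    return x, y
```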
11. A Symmetric Bregman Alternating Direction Method of Multipliers for Separable Nonconvex Minimization Problems
Authors: Zhangyuan Zeng, Shilian Zhao. Journal of Applied Mathematics and Physics, 2025, No. 3, pp. 889-913.
The alternating direction method of multipliers (ADMM) and its symmetric version are efficient for minimizing two-block separable problems with linear constraints. However, both ADMM and symmetric ADMM have limited versatility across various fields because they require the gradients of the differentiable functions to be globally Lipschitz continuous, a condition that is typically challenging to satisfy in nonconvex optimization problems. Recently, a Bregman ADMM that eliminates the need for global Lipschitz continuity of the gradient, and that reduces to the classical ADMM as a special case, was proposed for two-block nonconvex optimization problems with linear constraints. Building on this, we propose a symmetric Bregman alternating direction method of multipliers, which reduces to the symmetric ADMM and the Bregman ADMM, and thus further to the classical ADMM. Moreover, when solving separable nonconvex optimization problems, it does not require global Lipschitz continuity of the gradients of the differentiable functions. Furthermore, we demonstrate that, under the Kurdyka-Lojasiewicz inequality and certain conditions, the iterative sequence generated by our algorithm converges to a critical point of the problem. In addition, we examine the convergence rate of the algorithm.
Keywords: symmetric ADMM; Bregman ADMM; nonconvex optimization; separable problems; Kurdyka-Lojasiewicz inequality
12. Nonconvex and discriminative transfer subspace learning for unsupervised domain adaptation
Authors: Yueying LIU, Tingjin LUO. Frontiers of Computer Science, 2025, No. 2, pp. 43-57.
Unsupervised transfer subspace learning is one of the challenging and important topics in domain adaptation; it aims to classify unlabeled target data by using source-domain information. Traditional transfer subspace learning methods often impose low-rank constraints, i.e., the trace norm, to preserve the structural information of data from different domains. However, the trace norm is only a convex surrogate for the ideal low-rank constraint and may make the solutions deviate seriously from the original optima. In addition, traditional methods directly use the hard labels of the source domain, which makes it difficult to handle label noise. To solve these problems, we propose a novel nonconvex and discriminative transfer subspace learning method named NDTSL that incorporates a Schatten norm and a soft label matrix. Specifically, the Schatten norm is imposed to approximate the low-rank constraint and obtain a better low-rank representation. Then, we design and adopt a soft label matrix in the source domain to learn a more flexible classifier and enhance the discriminative ability on target data. Besides, due to the nonconvexity of the Schatten norm, we design an efficient algorithm, IALM, to solve it. Finally, experimental results on several public transfer tasks demonstrate the effectiveness of NDTSL compared with several state-of-the-art methods.
Keywords: transfer subspace learning; unsupervised domain adaptation; low-rank modeling; nonconvex optimization
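The nonconvex surrogate at the heart of such methods can be computed in a couple of lines; the sketch assumes a Schatten-p quasi-norm with 0 < p < 1 (the specific p used in the paper is not stated here) and compares it with the nuclear norm it replaces.

```python
import numpy as np

def schatten_p(M, p=0.5):
    """Schatten-p quasi-norm: (sum_i sigma_i^p)^(1/p).  For 0 < p < 1 it
    penalizes small singular values more aggressively than the nuclear norm
    (the p = 1 case), giving a tighter surrogate for the rank."""
    s = np.linalg.svd(M, compute_uv=False)
    return np.sum(s ** p) ** (1.0 / p)

M = np.random.default_rng(0).standard_normal((5, 4))
print(schatten_p(M, p=0.5), np.linalg.norm(M, "nuc"))  # nuclear norm is the p = 1 case
```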
13. Convergence of Generalized Bregman Alternating Direction Method of Multipliers for Nonconvex Objective with Linear Constraints
Authors: Junping Yao, Mei Lu. Journal of Applied Mathematics and Physics, 2025, No. 4, pp. 1138-1162.
In this paper, we investigate the convergence of the generalized Bregman alternating direction method of multipliers (ADMM) for solving nonconvex separable problems with linear constraints. This algorithm relaxes the requirement of global Lipschitz continuity of the differentiable functions that is commonly assumed in the literature, and it incorporates the acceleration technique of the proximal point algorithm (PPA). As a result, the scope of application of the algorithm is broadened and its performance is enhanced. Under the assumption that the augmented Lagrangian function satisfies the Kurdyka-Lojasiewicz inequality, we demonstrate that the iterative sequence generated by the algorithm converges to a critical point of its augmented Lagrangian function when the penalty parameter is sufficiently large. Finally, we analyze the convergence rate of the algorithm.
Keywords: generalized Bregman alternating direction method of multipliers; nonconvex optimization; Lipschitz-like convexity condition; Kurdyka-Lojasiewicz inequality
14. A Half-Proximal Symmetric Splitting Method for Non-Convex Separable Optimization
Authors: Pengjie Liu, Jinbao Jian, Hu Shao, Xiaoquan Wang, Xiangfeng Wang. Acta Mathematica Sinica, English Series, 2025, No. 8, pp. 2160-2194.
In this paper, we explore the convergence and convergence-rate results for a new method termed the half-proximal symmetric splitting method (HPSSM). This method is designed to address linearly constrained two-block nonconvex separable optimization problems. It integrates a half-proximal term within its first subproblem to cancel out complicated terms in applications where the subproblem is not easy to solve or lacks a simple closed-form solution. To further enhance adaptability in selecting relaxation-factor thresholds during the two Lagrange multiplier update steps, we incorporate a relaxation factor as a disturbance parameter within the iterative process of the second subproblem. Building on several foundational assumptions, we establish the subsequential convergence, global convergence, and iteration complexity of HPSSM. Assuming a Kurdyka-Łojasiewicz inequality of Łojasiewicz type for the augmented Lagrangian function (ALF), we derive convergence rates for both the ALF sequence and the iterative sequence. Numerical experiments are conducted to substantiate the effectiveness of HPSSM. Moreover, expanding upon the two-block iterative scheme, we present theoretical results for the symmetric splitting method applied to a three-block case.
Keywords: nonconvex separable optimization; half-proximal splitting method; Kurdyka-Łojasiewicz property; convergence and rate analyses
15. Convergence of ADMM for multi-block nonconvex separable optimization models (Cited: 14)
Authors: Ke GUO, Deren HAN, David Z. W. WANG, Tingting WU. Frontiers of Mathematics in China (SCIE, CSCD), 2017, No. 5, pp. 1139-1162.
For minimization problems whose objective function is the sum of two functions without coupled variables and whose constraints are linear, the alternating direction method of multipliers (ADMM) has exhibited its efficiency, and its convergence is well understood. When either the number of separable functions is more than two, or one of the functions is nonconvex, ADMM or its directly extended version may not converge. In this paper, we consider multi-block separable optimization problems with linear constraints in which the component functions need not be convex. Under the assumption that the associated function satisfies the Kurdyka-Lojasiewicz inequality, we prove that any cluster point of the iterative sequence generated by ADMM is a critical point, under the mild condition that the penalty parameter is sufficiently large. We also present some sufficient conditions guaranteeing the sublinear and linear convergence rates of the algorithm.
Keywords: nonconvex optimization; separable structure; alternating direction method of multipliers (ADMM); Kurdyka-Lojasiewicz inequality
16. Online blind source separation based on joint diagonalization (Cited: 2)
Authors: Li Ronghua, Zhou Guoxu, Yang Zuyuan, Xie Shengli. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2009, No. 2, pp. 229-233.
A new algorithm is proposed for joint diagonalization. With a modified objective function, the new algorithm not only excludes trivial and unbalanced solutions successfully, but is also easy to optimize. In addition, with the new objective function, the proposed algorithm works well in online blind source separation (BSS) for the first time, although this family of algorithms has so far been considered valid only for batch-mode BSS. Simulations show that it is a very competitive joint diagonalization algorithm.
Keywords: blind source separation; joint diagonalization; nonconvex optimization
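The quantity being minimized can be stated concretely. The sketch below evaluates the classical off-diagonal criterion for a demixing matrix W over a set of target matrices; the paper's modified objective adds terms to rule out trivial and unbalanced solutions, which are not included here.

```python
import numpy as np

def offdiag_cost(W, Cs):
    """Classical joint-diagonalization criterion: total squared off-diagonal
    energy of W C_k W^T over the set of target matrices C_k."""
    total = 0.0
    for C in Cs:
        M = W @ C @ W.T
        total += np.sum(M ** 2) - np.sum(np.diag(M) ** 2)
    return total

# Toy check: if all C_k share the eigenvectors in V, then W = V.T drives the cost to ~0.
rng = np.random.default_rng(1)
V, _ = np.linalg.qr(rng.standard_normal((4, 4)))
Cs = [V @ np.diag(rng.uniform(1, 3, 4)) @ V.T for _ in range(3)]
print(offdiag_cost(np.eye(4), Cs), offdiag_cost(V.T, Cs))
```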
17. Global optimality conditions for quadratic 0-1 programming with inequality constraints (Cited: 1)
Authors: 张连生, 陈伟, 姚奕荣. Journal of Shanghai University (English Edition) (CAS), 2010, No. 2, pp. 150-154.
Quadratic 0-1 problems with linear inequality constraints are considered in this paper. Global optimality conditions for these problems, including a necessary condition and some sufficient conditions, are presented. The necessary condition is expressed without dual variables. The relations between the global optimal solutions of nonconvex quadratic 0-1 problems and those of the associated relaxed convex problems are also studied.
Keywords: quadratic 0-1 programming; optimality condition; nonconvex optimization; integer programming; convex duality
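For intuition about what a global optimality certificate must rule out, a tiny instance can be checked by enumeration. This brute-force routine, practical only for very small n and purely illustrative, lists the feasible binary points of a constrained quadratic 0-1 problem and returns the best one.

```python
import itertools
import numpy as np

def brute_force_qp01(Q, c, A=None, b=None):
    """Enumerate x in {0,1}^n for  min x^T Q x + c^T x  s.t.  A x <= b."""
    n = len(c)
    best, best_x = np.inf, None
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits, dtype=float)
        if A is not None and np.any(A @ x > b + 1e-9):
            continue                        # skip infeasible binary points
        val = x @ Q @ x + c @ x
        if val < best:
            best, best_x = val, x
    return best_x, best

# Tiny instance with the constraint x1 + x2 <= 1
Q = np.array([[2.0, -1.0], [-1.0, 1.0]]); c = np.array([-1.0, 0.0])
A = np.array([[1.0, 1.0]]); b = np.array([1.0])
print(brute_force_qp01(Q, c, A, b))         # global minimizer is x = (0, 0)
```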
18. A Bregman-style Partially Symmetric Alternating Direction Method of Multipliers for Nonconvex Multi-block Optimization
Authors: Peng-jie LIU, Jin-bao JIAN, Guo-dong MA. Acta Mathematicae Applicatae Sinica (SCIE, CSCD), 2023, No. 2, pp. 354-380.
The alternating direction method of multipliers (ADMM) is one of the most successful and powerful methods for separable minimization. Based on the idea of symmetric ADMM for two-block optimization, we add an updating formula for the Lagrange multiplier, without restricting its position, in the multi-block setting. Combining this with the Bregman distance, a Bregman-style partially symmetric ADMM is presented for nonconvex multi-block optimization with linear constraints, in which the Lagrange multiplier is updated twice with different relaxation factors in the iteration scheme. Under suitable conditions, the global convergence, strong convergence, and convergence rate of the presented method are analyzed. Finally, some preliminary numerical results are reported to support the correctness of the theoretical assertions; these show that the presented method is numerically effective.
Keywords: nonconvex optimization; multi-block optimization; alternating direction method of multipliers; Kurdyka-Lojasiewicz property; convergence rate
19. Complexity of Proximal Quasi-Newton Methods for a Class of Nonconvex Optimization Problems
Authors: 金玲子. Chinese Quarterly Journal of Mathematics, 2023, No. 1, pp. 62-84.
This paper studies a class of nonconvex composite optimization problems whose objective is the sum of an average of nonconvex (weakly) smooth functions and a convex nonsmooth function, where the gradient of the former has Hölder continuity. By exploiting the structure of such problems, we first propose a proximal (quasi-)Newton algorithm, wPQN (proximal quasi-Newton algorithm for weakly smooth optimization), and investigate its theoretical complexity for finding an approximate solution. We then propose a stochastic variant, wPSQN (proximal stochastic quasi-Newton algorithm for weakly smooth optimization), which allows a random subset of the component functions to be used at each iteration. Moreover, motivated by the recent success of variance reduction techniques, we propose two variance-reduced algorithms, wPSQN-SVRG and wPSQN-SARAH, and investigate their computational complexities separately.
Keywords: nonconvex optimization; nonsmooth optimization; Hölder continuity; proximal quasi-Newton; variance reduction
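The variance-reduction ingredient can be isolated in a short composite-optimization loop. This is a plain Prox-SVRG sketch for a finite-sum objective plus a nonsmooth term, with the oracle names and schedule as assumptions; the paper's quasi-Newton scaling and Hölder-continuity analysis are not reproduced.

```python
import numpy as np

def prox_svrg(grads, full_grad, prox, x0, eta=0.01, epochs=20, m=50):
    """Prox-SVRG sketch for  min (1/n) sum_i f_i(x) + h(x).
    grads[i](x): gradient of f_i;  full_grad(x): gradient of the full average;
    prox(v, eta): proximal operator of eta * h."""
    x_ref = np.asarray(x0, dtype=float).copy()
    x = x_ref.copy()
    for _ in range(epochs):
        mu = full_grad(x_ref)                         # anchor (full) gradient
        for _ in range(m):
            i = np.random.randint(len(grads))
            v = grads[i](x) - grads[i](x_ref) + mu    # variance-reduced gradient
            x = prox(x - eta * v, eta)                # proximal step
        x_ref = x.copy()
    return x
```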
20. L1/2-Regularized Quantile Method for Sparse Phase Retrieval
Authors: Si Shen, Jiayao Xiang, Huijuan Lv, Ailing Yan. Open Journal of Applied Sciences (CAS), 2022, No. 12, pp. 2135-2151.
Sparse phase retrieval aims to recover a sparse signal from quadratic measurements. However, the measurements are often affected by outliers and noise with asymmetric distributions. This paper introduces a novel method that combines quantile regression with the L1/2 regularizer, yielding a nonconvex, nonsmooth, non-Lipschitz optimization problem. We propose an efficient algorithm based on the alternating direction method of multipliers (ADMM) to solve the corresponding optimization problem. Extensive numerical experiments show that this method can recover sparse signals with fewer measurements and is robust to dense bounded noise and Laplace noise.
Keywords: sparse phase retrieval; nonconvex optimization; alternating direction method of multipliers; quantile regression model; robustness
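The non-Lipschitz piece is the L1/2 regularizer. Its proximal map (the half-thresholding rule) has a closed form, but a one-dimensional numerical search makes the behavior easy to inspect; the grid-search implementation below is an illustrative sketch, not the operator used inside the paper's ADMM.

```python
import numpy as np

def prox_l_half(x, lam, grid=2000):
    """Numerical sketch of the proximal operator of  lam * |y|^(1/2),
    evaluated componentwise by a 1-D grid search over [0, |x_i|]."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    out = np.zeros_like(x)
    for i, xi in enumerate(x):
        ys = np.linspace(0.0, abs(xi), grid)          # the minimizer lies in [0, |x_i|]
        obj = 0.5 * (ys - abs(xi)) ** 2 + lam * np.sqrt(ys)
        out[i] = np.sign(xi) * ys[np.argmin(obj)]
    return out

# Small entries are thresholded to zero, large entries are shrunk slightly
print(prox_l_half(np.array([-3.0, -0.5, 0.0, 0.5, 3.0]), lam=1.0))
```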