Journal Articles
181 articles found
Penalty Function-Based Distributed Primal-Dual Algorithm for Nonconvex Optimization Problem
Authors: Xiasheng Shi, Changyin Sun. IEEE/CAA Journal of Automatica Sinica, 2025, No. 2, pp. 394-402 (9 pages).
This paper addresses the distributed nonconvex optimization problem, where both the global cost function and the local inequality constraint functions are nonconvex. To tackle this issue, the p-power transformation and penalty function techniques are introduced to reformulate the nonconvex optimization problem. This ensures that the Hessian matrix of the augmented Lagrangian function becomes locally positive definite under appropriately chosen control parameters. A multi-timescale primal-dual method is then devised, based on the Karush-Kuhn-Tucker (KKT) points of the reformulated problem, to attain convergence. Lyapunov theory guarantees the model's stability over an undirected and connected communication network. Finally, two nonconvex optimization problems are presented to demonstrate the efficacy of the developed method.
Keywords: constrained optimization; Karush-Kuhn-Tucker (KKT) point; nonconvex; p-power transformation
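To make the penalty-function idea above concrete, the following toy sketch solves a one-dimensional nonconvex constrained problem by gradient descent on a quadratic-penalty reformulation with an increasing penalty weight. This is a generic sketch only: the paper's method additionally uses a p-power transform and a primal-dual scheme on the augmented Lagrangian, and the objective, constraint, and parameters below are illustrative assumptions.

```python
def penalty_descent(f_grad, g, g_grad, x0, mus=(1.0, 10.0, 100.0),
                    step=2e-3, iters=4000):
    """Quadratic-penalty reformulation of min f(x) s.t. g(x) <= 0,
    solved by plain gradient descent with continuation on the penalty
    weight mu (illustrative stand-in, not the paper's algorithm)."""
    x = x0
    for mu in mus:                    # continuation: tighten the penalty
        for _ in range(iters):
            viol = max(0.0, g(x))     # constraint violation
            # gradient of f(x) + mu * max(0, g(x))^2
            x -= step * (f_grad(x) + 2.0 * mu * viol * g_grad(x))
    return x

# nonconvex double-well objective (x^2 - 1)^2 with active constraint x >= 1.2
f_grad = lambda x: 4.0 * x * (x * x - 1.0)
g = lambda x: 1.2 - x            # g(x) <= 0  <=>  x >= 1.2
g_grad = lambda x: -1.0
x = penalty_descent(f_grad, g, g_grad, x0=2.0)
```

As the penalty weight grows, the iterate approaches the constrained minimizer x = 1.2 from the infeasible side, the usual behavior of exterior quadratic penalties.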
A Primal-Dual SGD Algorithm for Distributed Nonconvex Optimization (Cited by 8)
Authors: Xinlei Yi, Shengjun Zhang, Tao Yang, Tianyou Chai, Karl Henrik Johansson. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2022, No. 5, pp. 812-833 (22 pages).
The distributed nonconvex optimization problem of minimizing a global cost function formed by a sum of n local cost functions through local information exchange is considered. This problem is an important component of many machine learning techniques with data parallelism, such as deep learning and federated learning. We propose a distributed primal-dual stochastic gradient descent (SGD) algorithm, suitable for arbitrarily connected communication networks and any smooth (possibly nonconvex) cost functions. We show that the proposed algorithm achieves the linear speedup convergence rate O(1/√(nT)) for general nonconvex cost functions, and the linear speedup convergence rate O(1/(nT)) when the global cost function satisfies the Polyak-Lojasiewicz (P-L) condition, where T is the total number of iterations. We also show that the output of the proposed algorithm with constant parameters linearly converges to a neighborhood of a global optimum. Numerical experiments demonstrate the efficiency of our algorithm in comparison with the baseline centralized SGD and recently proposed distributed SGD algorithms.
Keywords: distributed nonconvex optimization; linear speedup; Polyak-Lojasiewicz (P-L) condition; primal-dual algorithm; stochastic gradient descent
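The primal-dual structure described in this entry can be sketched on a toy consensus problem: each agent holds a quadratic local cost, exchanges states over a weighted graph, and runs a primal descent step plus a dual ascent step on the consensus error. This is a minimal illustrative sketch, not the paper's algorithm; the quadratic costs, the graph, and all parameter values are assumptions.

```python
import numpy as np

def primal_dual_sgd(a, W, eta=0.1, T=2000, noise=0.0, seed=0):
    """Toy distributed primal-dual (S)GD for min_x sum_i (x - a_i)^2 / 2.

    Each agent i holds a_i; communication uses the graph Laplacian of W.
    Setting noise > 0 makes the local gradients stochastic."""
    rng = np.random.default_rng(seed)
    n = len(a)
    L = np.diag(W.sum(axis=1)) - W           # graph Laplacian
    x = np.zeros(n)                          # primal states, one per agent
    v = np.zeros(n)                          # dual states
    for _ in range(T):
        g = (x - a) + noise * rng.standard_normal(n)  # (stochastic) local gradients
        x = x - eta * (g + L @ x + v)        # primal descent step
        v = v + eta * (L @ x)                # dual ascent on the consensus error
    return x

W = np.array([[0.0, 1.0], [1.0, 0.0]])       # two connected agents
x = primal_dual_sgd(np.array([1.0, 3.0]), W)
```

At a fixed point the dual variable absorbs the gradient disagreement, so both agents settle on the minimizer of the sum, here the mean of the a_i.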
Improved nonconvex optimization model for low-rank matrix recovery (Cited by 1)
Authors: 李玲芝, 邹北骥, 朱承璋. Journal of Central South University (SCIE, EI, CAS, CSCD), 2015, No. 3, pp. 984-991 (8 pages).
Low-rank matrix recovery is an important problem extensively studied in the machine learning, data mining and computer vision communities. A novel method is proposed for low-rank matrix recovery, targeting higher recovery accuracy and a stronger theoretical guarantee. Specifically, the proposed method is based on a nonconvex optimization model that solves for the low-rank matrix to be recovered from the noisy observation. To solve the model, an effective algorithm is derived by minimizing over the variables alternately. It is proved theoretically that this algorithm has a stronger guarantee than existing work. In natural image denoising experiments, the proposed method achieves lower recovery error than the two compared methods. The method is also applied to two real-world problems, i.e., removing noise from verification codes and removing watermarks from images, in which the images recovered by the proposed method are less noisy than those of the two compared methods.
Keywords: machine learning; computer vision; matrix recovery; nonconvex optimization
On the Global Convergence of the Perry-Shanno Method for Nonconvex Unconstrained Optimization Problems
Authors: Linghua Huang, Qingjun Wu, Gonglin Yuan. Applied Mathematics, 2011, No. 3, pp. 315-320 (6 pages).
In this paper, we prove the global convergence of the Perry-Shanno memoryless quasi-Newton (PSMQN) method with a new inexact line search when applied to nonconvex unconstrained minimization problems. Preliminary numerical results show that the PSMQN method with the particular line search conditions is very promising.
Keywords: unconstrained optimization; nonconvex optimization; global convergence
Distributed optimization for discrete-time multiagent systems with nonconvex control input constraints and switching topologies
Authors: Xiao-Yu Shen, Shuai Su, Hai-Liang Hou. Chinese Physics B (SCIE, EI, CAS, CSCD), 2021, No. 12, pp. 283-290 (8 pages).
This paper addresses the distributed optimization problem of discrete-time multiagent systems with nonconvex control input constraints and switching topologies. We introduce a novel distributed optimization algorithm with a switching mechanism that guarantees all agents eventually converge to an optimal solution point while their control inputs remain constrained to their own nonconvex regions. It is worth noting that the mechanism is designed to tackle the coexistence of the nonconvex constraint operator and the optimization gradient term. Based on the dynamic transformation technique, the original nonlinear dynamic system is transformed into an equivalent one with a nonlinear error term. By utilizing nonnegative matrix theory, it is shown that the optimization problem can be solved when the union of the switching communication graphs is jointly strongly connected. Finally, a numerical simulation example demonstrates the theoretical results.
Keywords: multiagent systems; nonconvex input constraints; switching topologies; distributed optimization
Margin optimization algorithm for digital subscriber lines based on particle swarm optimization (Cited by 1)
Authors: Tang Meiqin, Guan Xinping. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2009, No. 6, pp. 1316-1323 (8 pages).
The margin maximization problem in digital subscriber line (DSL) systems is investigated. Particle swarm optimization (PSO) theory is applied to the nonconvex margin optimization problem with target power and rate constraints. PSO is an evolutionary algorithm based on the social behavior of swarms, which can solve discontinuous, nonconvex and nonlinear problems efficiently. The proposed algorithm can converge to the global optimal solution, and a numerical example demonstrates that it guarantees fast convergence within a few iterations.
Keywords: digital subscriber line; margin; nonconvex; particle swarm optimization
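The PSO mechanics referenced in this entry, velocity updates driven by personal and global bests, can be sketched on a one-dimensional nonconvex objective. This is a generic textbook PSO sketch, not the paper's DSL-specific margin algorithm; the double-well objective and the swarm parameters are illustrative assumptions.

```python
import numpy as np

def pso(f, lo, hi, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal 1-D particle swarm optimizer (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, n_particles)         # particle positions
    v = np.zeros(n_particles)                    # particle velocities
    pbest, pval = x.copy(), f(x)                 # personal bests and values
    g = pbest[np.argmin(pval)]                   # global best position
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        # inertia + cognitive pull toward pbest + social pull toward gbest
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = f(x)
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[np.argmin(pval)]
    return g

# nonconvex double-well objective with global minima at x = +/-1
g = pso(lambda x: (x**2 - 1.0)**2, -5.0, 5.0)
fg = (g**2 - 1.0)**2   # objective value at the returned point
```

Because the swarm samples the whole box before contracting, it escapes the local structure that would trap a single gradient descent run.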
An Effective Algorithm for Quadratic Optimization with Non-Convex Inhomogeneous Quadratic Constraints
Authors: Kaiyao Lou. Advances in Pure Mathematics, 2017, No. 4, pp. 314-323 (10 pages).
This paper considers the NP (non-deterministic polynomial)-hard problem of finding the minimum value of a quadratic program (QP) subject to m non-convex inhomogeneous quadratic constraints. An effective algorithm is proposed to obtain a feasible solution based on the optimal solution of the semidefinite programming (SDP) relaxation problem.
Keywords: nonconvex; inhomogeneous quadratic constraints; quadratic optimization; semidefinite programming relaxation; NP-hard
Two Nonmonotone Proximal Gradient Methods for Nonsmooth Optimization over the Stiefel Manifold
Authors: Jin-chao Zhang, Juan Gao, Ya-kui Huang, Xin-wei Liu. Acta Mathematicae Applicatae Sinica, 2026, No. 1, pp. 105-120 (16 pages).
We propose two nonmonotone retraction-based proximal gradient methods for solving a class of nonconvex nonsmooth optimization problems over the Stiefel manifold. The proposed methods are equipped with descent directions obtained by a proximal mapping restricted to the tangent space of the manifold, and with Barzilai-Borwein stepsizes determined by two recent iteration points and the corresponding descent directions. By employing, respectively, the Grippo-Lampariello-Lucidi nonmonotone line search strategy and the Dai-Fletcher nonmonotone line search strategy, the proposed methods are proved to be globally convergent. An analysis of the iteration complexity for obtaining an ε-stationary solution is provided. Numerical results on sparse principal component analysis problems demonstrate the efficiency of our methods.
Keywords: Stiefel manifold; nonconvex nonsmooth optimization; iteration complexity; nonmonotone line search; proximal gradient method
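The Barzilai-Borwein stepsize and Grippo-Lampariello-Lucidi (GLL) nonmonotone acceptance test named in this entry can be sketched in a plain Euclidean setting, omitting the Stiefel manifold retraction. The ℓ1-regularized least-squares problem and all parameter values below are illustrative assumptions, not the paper's method.

```python
import numpy as np
from collections import deque

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def F(A, b, lam, x):
    """Objective: 0.5*||Ax - b||^2 + lam*||x||_1."""
    r = A @ x - b
    return 0.5 * r @ r + lam * np.abs(x).sum()

def nonmonotone_prox_grad(A, b, lam, iters=300, memory=5):
    """Proximal gradient with a BB stepsize and a GLL-style nonmonotone
    acceptance test (Euclidean analogue sketch; no retraction)."""
    x = np.zeros(A.shape[1])
    g = A.T @ (A @ x - b)
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth part
    step = 1.0 / L
    hist = deque([F(A, b, lam, x)], maxlen=memory)
    for _ in range(iters):
        t = step
        while True:                           # GLL: compare against max of recent values
            x_new = soft_threshold(x - t * g, t * lam)
            d = x_new - x
            if F(A, b, lam, x_new) <= max(hist) - 1e-4 * (d @ d) or t < 1e-12:
                break
            t *= 0.5                          # backtrack
        g_new = A.T @ (A @ x_new - b)
        s, y = x_new - x, g_new - g
        step = (s @ s) / (s @ y) if s @ y > 1e-12 else 1.0 / L  # BB1 stepsize
        x, g = x_new, g_new
        hist.append(F(A, b, lam, x))
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
x_true = np.zeros(10)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true                                # noiseless sparse recovery instance
x = nonmonotone_prox_grad(A, b, lam=0.1)
```

The nonmonotone test lets the aggressive BB steps through without forcing per-iteration decrease, which is exactly the benefit the entry's two line-search strategies provide.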
Periodical sparse-assisted decoupling method for local fault detection of spiral bevel gears
Authors: Keyuan Li, Yanan Wang, Baijie Qiao, Zhibin Zhao, Xuefeng Chen. Chinese Journal of Aeronautics, 2026, No. 1, pp. 349-369 (21 pages).
Early fault detection for spiral bevel gears is crucial to ensure normal operation and prevent accidents. Harmonic components, excited by the time-varying mesh stiffness, always appear in the measured vibration signal. Extracting the periodical impulses that indicate a localized gear fault, buried in intensive noise and interfered with by harmonics, is a challenging task. In this paper, a novel Periodical Sparse-Assisted Decoupling (PSAD) method is formulated as an optimization problem to extract fault features from noisy vibration signals. The PSAD method decouples the impulsive fault feature and harmonic components based on sparse representation. The sparsity-within-and-across-groups property and the periodicity of the fault feature are incorporated into the regularizer as prior information. A nonconvex penalty is employed to highlight the sparsity of fault features. Meanwhile, a weight factor based on the ℓ2 norm of each group is constructed to strengthen the amplitude of the fault feature. An iterative algorithm based on Majorization-Minimization (MM) is derived to solve the optimization problem. A simulation study and experimental analysis confirm the performance of the proposed PSAD method in extracting and enhancing defect impulses from noisy signals, surpassing the comparative methods.
Keywords: fault detection; nonconvex optimization; sparse decoupling; sparsity within and across groups; spiral bevel gear
A Hybrid and Inexact Algorithm for Nonconvex and Nonsmooth Optimization
Authors: Wang Yiyang, Song Xiaoliang. Journal of Systems Science & Complexity, 2025, No. 3, pp. 1330-1350 (21 pages).
The problem of nonconvex and nonsmooth optimization (NNO) has been extensively studied in the machine learning community, leading to the development of numerous fast and convergent numerical algorithms. Existing algorithms typically employ unified iteration schemes and require explicit solutions to subproblems to ensure convergence. However, these inflexible iteration schemes overlook task-specific details and may encounter difficulties in providing explicit subproblem solutions. In contrast, there is evidence suggesting that practical applications can benefit from approximately solving subproblems; however, many existing works fail to establish the theoretical validity of such approximations. In this paper, the authors propose a hybrid inexact proximal alternating method (hiPAM), which addresses a general NNO problem with coupled terms while overcoming all the aforementioned challenges. The proposed hiPAM algorithm offers a flexible yet highly efficient approach by seamlessly integrating any efficient method for approximately solving the task-specific subproblems. Additionally, the authors devise a simple yet implementable stopping criterion that generates a Cauchy sequence converging to a critical point of the original NNO problem. Numerical experiments using both simulated and real data demonstrate that hiPAM is an exceedingly efficient and robust approach to NNO problems.
Keywords: hybrid inexact proximal alternating method; inexact minimization criteria; machine learning; nonconvex and nonsmooth optimization
Monotone Splitting SQP Algorithms for Two-block Nonconvex Optimization Problems with General Linear Constraints and Applications
Authors: Jin-Bao Jian, Guo-Dong Ma, Xiao Xu, Dao-Lan Han. Journal of the Operations Research Society of China, 2025, No. 1, pp. 114-141 (28 pages).
This work discusses a class of two-block nonconvex optimization problems with linear equality, inequality and box constraints. Based on the ideas of the alternating direction method of multipliers (ADMM), sequential quadratic programming (SQP) and the Armijo line search technique, we propose a novel monotone splitting SQP algorithm. First, the problem is transformed into an optimization problem with only linear equality and box constraints by introducing slack variables. Second, the idea of ADMM is used to decompose the traditional quadratic programming (QP) subproblem. In particular, the QP subproblem corresponding to the introduced slack variables is simple and has an explicit optimal solution without increasing the computational cost. Third, the search direction is generated from the optimal solutions of the subproblems, and the new iterate is obtained by an Armijo line search on the augmented Lagrangian function. Fourth, the multiplier is updated by a novel approach different from that of the ADMM. Furthermore, the algorithm is extended to the associated optimization problem in which the box constraints are replaced by general nonempty closed convex sets. The global convergence of the two proposed algorithms is analyzed under weaker assumptions. Finally, preliminary numerical experiments and applications to mid-to-large-scale economic dispatch problems for power systems show that the proposed algorithms are promising.
Keywords: two-block nonconvex optimization; general linear constraints; splitting sequential quadratic programming; alternating direction method of multipliers; global convergence
Convergence of Generalized Alternating Direction Method of Multipliers for Nonseparable Nonconvex Objective with Linear Constraints (Cited by 5)
Authors: Ke Guo, Xin Wang. Journal of Mathematical Research with Applications (CSCD), 2018, No. 5, pp. 523-540 (18 pages).
In this paper, we consider the convergence of the generalized alternating direction method of multipliers (GADMM) for solving a linearly constrained nonconvex minimization model whose objective contains coupled functions. Under the assumption that the augmented Lagrangian function satisfies the Kurdyka-Lojasiewicz inequality, we prove that the sequence generated by the GADMM converges to a critical point of the augmented Lagrangian function when the penalty parameter is sufficiently large. Moreover, we present some sufficient conditions guaranteeing the sublinear and linear rates of convergence of the algorithm.
Keywords: generalized alternating direction method of multipliers; Kurdyka-Lojasiewicz inequality; nonconvex optimization
Optimal Control of a Population Dynamics Model with Hysteresis (Cited by 2)
Authors: Bin Chen, Sergey A. Timoshin. Acta Mathematica Scientia (SCIE, CSCD), 2022, No. 1, pp. 283-298 (16 pages).
This paper addresses a nonlinear partial differential control system arising in population dynamics. The system consists of three diffusion equations describing the evolutions of three biological species: prey, predator, and food for the prey (vegetation). The equation for the food density incorporates a hysteresis operator of generalized stop type, accounting for underlying hysteresis effects occurring in the dynamical process. We study the problem of minimizing a given integral cost functional over solutions of the above system. The set-valued mapping defining the control constraint is state-dependent, and its values are nonconvex, as is the cost integrand as a function of the control variable. Some relaxation-type results for the minimization problem are obtained, and the existence of a nearly optimal solution is established.
Keywords: optimal control problem; hysteresis; biological diffusion models; nonconvex integrands; nonconvex control constraints
Global optimality conditions for quadratic 0-1 programming with inequality constraints (Cited by 1)
Authors: 张连生, 陈伟, 姚奕荣. Journal of Shanghai University (English Edition) (CAS), 2010, No. 2, pp. 150-154 (5 pages).
Quadratic 0-1 problems with linear inequality constraints are briefly considered in this paper. Global optimality conditions for these problems, including a necessary condition and some sufficient conditions, are presented. The necessary condition is expressed without dual variables. The relations between the global optimal solutions of nonconvex quadratic 0-1 problems and the associated relaxed convex problems are also studied.
Keywords: quadratic 0-1 programming; optimality condition; nonconvex optimization; integer programming; convex duality
Generalized Nonconvex Low-Rank Algorithm for Magnetic Resonance Imaging (MRI) Reconstruction
Authors: 吴新峰, 刘且根, 卢红阳, 龙承志, 王玉皞. Journal of Donghua University (English Edition) (EI, CAS), 2017, No. 2, pp. 316-321 (6 pages).
In recent years, utilizing low-rank prior information to reconstruct a signal from a small number of measurements has attracted much attention. In this paper, a generalized nonconvex low-rank (GNLR) algorithm for magnetic resonance imaging (MRI) reconstruction is proposed, which reconstructs the image from highly under-sampled k-space data. In the algorithm, a nonconvex surrogate function replacing the conventional nuclear norm is utilized to enhance the low-rank property inherent in the reconstructed image. An alternating direction method of multipliers (ADMM) is applied to solve the resulting nonconvex model. Extensive experimental results demonstrate that the proposed method consistently recovers MRIs efficiently and outperforms current state-of-the-art approaches in terms of higher peak signal-to-noise ratio (PSNR) and lower high-frequency error norm (HFEN) values.
Keywords: magnetic resonance imaging (MRI); low-rank approximation; nonconvex optimization; alternating direction method of multipliers (ADMM)
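The core of such nonconvex low-rank methods is replacing soft (nuclear-norm) shrinkage of singular values with a nonconvex rule that penalizes large singular values less. The sketch below uses firm thresholding as a simple stand-in surrogate; it is not the paper's GNLR surrogate or its ADMM pipeline, and the rank-2 test matrix and thresholds are illustrative assumptions.

```python
import numpy as np

def firm_sv_shrink(Y, tau, gamma=2.0):
    """One proximal-style denoising step: shrink the singular values of Y
    with a firm-thresholding rule (zero below tau, identity above
    gamma*tau, linear ramp in between), a simple nonconvex surrogate
    for nuclear-norm shrinkage."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    ramp = np.maximum(s - tau, 0.0) * gamma / (gamma - 1.0)      # linear ramp zone
    s_new = np.where(s > gamma * tau, s, np.minimum(ramp, s))    # identity above gamma*tau
    return U @ (s_new[:, None] * Vt)

rng = np.random.default_rng(0)
M = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))  # rank-2 ground truth
Y = M + 0.1 * rng.standard_normal((30, 30))                      # noisy observation
X = firm_sv_shrink(Y, tau=1.0)                                   # denoised estimate
```

Unlike soft thresholding, the firm rule leaves the dominant singular values untouched, which is the bias reduction that motivates nonconvex surrogates in the first place.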
Convergence of Bregman Alternating Direction Method of Multipliers for Nonseparable Nonconvex Objective with Linear Constraints
Authors: Xiaotong Zeng, Junping Yao, Haoming Xia. Journal of Applied Mathematics and Physics, 2024, No. 2, pp. 639-660 (22 pages).
In this paper, our focus lies on addressing a two-block linearly constrained nonseparable nonconvex optimization problem with coupling terms. The most classical algorithm, the alternating direction method of multipliers (ADMM), is typically employed to solve such problems, but to ensure overall convergence it still requires the assumption that the gradient of the objective function is Lipschitz continuous. However, many practical applications do not satisfy this smoothness condition. In this study, we justify the convergence of a variant Bregman ADMM for the problem with coupling terms, circumventing the need for global Lipschitz continuity of the gradient. We demonstrate that the iterative sequence generated by our approach converges to a critical point of the problem when the corresponding function fulfills the Kurdyka-Lojasiewicz inequality and certain assumptions hold. In addition, we establish the convergence rate of the algorithm.
Keywords: nonseparable nonconvex optimization; Bregman ADMM; Kurdyka-Lojasiewicz inequality
A Neurodynamic Algorithm for Distributed Nonconvex Optimization Problems
Authors: 喻昕, 黄庆洲, 林日新, 陈铭芸. Journal of Guangxi University (Natural Science Edition) (PKU Core), 2025, No. 5, pp. 1073-1087.
To solve a class of distributed nonconvex optimization problems with inequality constraints, a neurodynamic algorithm is proposed. In this problem class, the sum of the agents' local objective functions may be nonconvex and nonsmooth. The proposed algorithm features a special communication mechanism in which each agent transmits only the sign information of certain relative states to its neighbors. Under the regulation of a penalty parameter, the agents' state solutions enter the feasible region and reach consensus in finite time; the state solutions then asymptotically converge to, and remain stable at, the critical point set of the original distributed nonconvex optimization problem. Simulation results verify the effectiveness of the proposed algorithm, which is finally applied to a projectile-motion problem in physics.
Keywords: distributed optimization; nonconvex problems; neurodynamic algorithm; critical point set; finite-time consensus
Convergence of ADMM for multi-block nonconvex separable optimization models (Cited by 14)
Authors: Ke Guo, Deren Han, David Z. W. Wang, Tingting Wu. Frontiers of Mathematics in China (SCIE, CSCD), 2017, No. 5, pp. 1139-1162 (24 pages).
For solving minimization problems whose objective function is the sum of two functions without coupled variables and whose constraint function is linear, the alternating direction method of multipliers (ADMM) has exhibited its efficiency, and its convergence is well understood. When either the number of separable functions is more than two, or there is a nonconvex function, ADMM or its directly extended version may not converge. In this paper, we consider multi-block separable optimization problems with linear constraints and without convexity of the involved component functions. Under the assumption that the associated function satisfies the Kurdyka-Lojasiewicz inequality, we prove that any cluster point of the iterative sequence generated by ADMM is a critical point, under the mild condition that the penalty parameter is sufficiently large. We also present some sufficient conditions guaranteeing the sublinear and linear rates of convergence of the algorithm.
Keywords: nonconvex optimization; separable structure; alternating direction method of multipliers (ADMM); Kurdyka-Lojasiewicz inequality
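The two-block splitting that this entry generalizes can be sketched in its textbook convex form: ADMM for the lasso, alternating an x-minimization, a soft-threshold z-minimization, and a scaled dual update. This is a generic warm-up sketch of the splitting scheme, not the paper's multi-block nonconvex analysis; the synthetic data and parameters are assumptions.

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=300):
    """Two-block ADMM (scaled dual form) for
    min 0.5*||Ax - b||^2 + lam*||z||_1  s.t.  x - z = 0."""
    n = A.shape[1]
    Q = np.linalg.inv(A.T @ A + rho * np.eye(n))   # cache the x-update solve
    Atb = A.T @ b
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)                                # scaled dual variable
    for _ in range(iters):
        x = Q @ (Atb + rho * (z - u))              # x-block: ridge-type solve
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # z-block
        u = u + x - z                              # dual update on the residual x - z
    return z

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 15))
x_true = np.zeros(15)
x_true[:4] = [1.0, -2.0, 0.5, 1.5]
b = A @ x_true                                     # noiseless sparse instance
z = admm_lasso(A, b, lam=0.1)
```

Each block update only touches its own variable, which is the property the multi-block extensions in the entry exploit, and the property whose loss of convexity makes their convergence analysis nontrivial.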
Convergence Rate Analysis of the LION Optimizer
Authors: 董一鸣, 李欢, 林宙辰. Chinese Journal of Computers (PKU Core), 2025, No. 9, pp. 2008-2029.
LION (evoLved sIgn mOmeNtum) is an optimizer discovered by Google through heuristic program search and is a distinctive learned optimization algorithm. By maintaining two different interpolations between the previous momentum and the current gradient, and by effectively combining decoupled weight decay, LION achieves performance beyond traditional sign-gradient-descent methods. It has shown strong advantages on many large-scale deep learning problems and has been widely adopted. However, although prior work has proved LION's convergence, no study has given a comprehensive convergence rate analysis. It has been shown that LION solves a class of box-constrained optimization problems; this paper proves that, measured in the ℓ1 norm, LION converges to a Karush-Kuhn-Tucker (KKT) point of such problems at a rate of O(√d·K^(-1/4)), where d is the problem dimension and K the number of iterations. Furthermore, removing the constraints, we prove that LION converges at the same rate to a stationary point of the objective on general unconstrained problems. Compared with existing work, the proven rate attains the optimal dependence on the problem dimension d; with respect to the iteration count K, it also matches the optimal theoretical lower bound achievable by stochastic gradient methods on nonconvex problems. Moreover, this lower bound is measured in the ℓ2 norm of the gradient, while sign-gradient-descent methods such as LION are usually measured in the larger ℓ1 norm. Since convergence rates with respect to d differ under different gradient-norm metrics, we design comprehensive experiments on a variety of deep learning tasks to verify that the proven rate is also optimal in d: they show that LION achieves lower training loss and stronger performance than stochastic gradient descent, which likewise matches the theoretical lower bound, and that the ℓ1/ℓ2 norm ratio of the gradient stays on the order of √d throughout the iterations, empirically confirming that the proven rate also matches the optimal lower bound with respect to d.
Keywords: machine learning; deep learning; nonconvex optimization; convergence rate analysis; LION optimizer
Convergence Analysis of a Proximal Symmetric ADMM for Nonconvex Consensus Problems
Authors: 张静雯, 党亚峥, 倪诗皓, 乔俊伟. Chinese Journal of Engineering Mathematics (PKU Core), 2025, No. 4, pp. 721-735.
Research on the alternating direction method of multipliers (ADMM) for two-block optimization has gradually matured, but work on nonconvex multi-block optimization remains limited. A symmetric proximal ADMM with relaxed stepsize parameters is proposed for solving nonconvex consensus problems. Under suitable assumptions, the global convergence of the algorithm is proved. Furthermore, when the merit function satisfies the Kurdyka-Lojasiewicz (KL) property, the strong convergence of the algorithm is established. Finally, numerical experiments verify the effectiveness of the algorithm.
Keywords: nonconvex optimization; consensus problems; alternating direction method of multipliers; Kurdyka-Lojasiewicz property; convergence