Journal Articles
176 articles found
1. Monotone Splitting SQP Algorithms for Two-block Nonconvex Optimization Problems with General Linear Constraints and Applications
Authors: Jin-Bao Jian, Guo-Dong Ma, Xiao Xu, Dao-Lan Han. Journal of the Operations Research Society of China, 2025, Issue 1, pp. 114-141.
This work discusses a class of two-block nonconvex optimization problems with linear equality, inequality and box constraints. Based on the ideas of the alternating direction method of multipliers (ADMM), sequential quadratic programming (SQP) and the Armijo line search technique, we propose a novel monotone splitting SQP algorithm. First, the discussed problem is transformed into an optimization problem with only linear equality and box constraints by introducing slack variables. Second, the idea of ADMM is used to decompose the traditional quadratic programming (QP) subproblem. In particular, the QP subproblem corresponding to the introduced slack variable is simple and has an explicit optimal solution without increasing the computational cost. Third, the search direction is generated from the optimal solutions of the subproblems, and the new iterate is produced by an Armijo line search on an augmented Lagrangian function. Fourth, the multiplier is updated by a novel approach that differs from that of ADMM. Furthermore, the algorithm is extended to the associated optimization problem in which the box constraints are replaced by general nonempty closed convex sets. The global convergence of the two proposed algorithms is analyzed under weak assumptions. Finally, preliminary numerical experiments and applications to mid-to-large-scale economic dispatch problems for power systems are reported, showing that the proposed algorithms are promising. (See the sketch after the keywords below.)
Keywords: two-block nonconvex optimization; general linear constraints; splitting sequential quadratic programming; alternating direction method of multipliers; global convergence
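As a point of reference for the Armijo step this abstract relies on, the following is a minimal sketch of a backtracking line search on a merit function, assuming the merit is an augmented Lagrangian with multipliers and penalty held fixed during the search; the callables and parameter names are illustrative, not the paper's notation.

```python
import numpy as np

def armijo_step(x, d, merit, grad_merit, beta=0.5, sigma=1e-4, t0=1.0, max_backtracks=50):
    """Backtracking Armijo search on a merit function.

    Generic sketch of the line-search step the abstract describes; here
    `merit` plays the role of an augmented Lagrangian with multipliers
    and penalty held fixed. `d` is assumed to be a descent direction
    (grad_merit(x) @ d < 0). Names are illustrative, not the paper's.
    """
    t, m0, slope = t0, merit(x), grad_merit(x) @ d
    for _ in range(max_backtracks):
        if merit(x + t * d) <= m0 + sigma * t * slope:  # sufficient decrease
            break
        t *= beta  # shrink the step
    return x + t * d
```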
2. Penalty Function-Based Distributed Primal-Dual Algorithm for Nonconvex Optimization Problem
Authors: Xiasheng Shi, Changyin Sun. IEEE/CAA Journal of Automatica Sinica, 2025, Issue 2, pp. 394-402.
This paper addresses the distributed nonconvex optimization problem in which both the global cost function and the local inequality constraint functions are nonconvex. To tackle this issue, p-power transformation and penalty function techniques are introduced to reformulate the nonconvex optimization problem. This ensures that the Hessian matrix of the augmented Lagrangian function becomes locally positive definite for appropriately chosen control parameters. A multi-timescale primal-dual method is then devised based on the Karush-Kuhn-Tucker (KKT) point of the reformulated nonconvex problem to attain convergence. Lyapunov theory guarantees the model's stability over an undirected and connected communication network. Finally, two nonconvex optimization problems are presented to demonstrate the efficacy of the developed method. (See the sketch after the keywords below.)
Keywords: constrained optimization; Karush-Kuhn-Tucker (KKT) point; nonconvex; p-power transformation
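To make the primal-dual structure concrete, here is a hedged single-iteration sketch of gradient descent-ascent on a penalty-augmented Lagrangian with a slower dual timescale; the paper's p-power reformulation and network coupling are omitted, and all names and step sizes are illustrative.

```python
import numpy as np

def primal_dual_step(x, lam, grad_f, g, grad_g, eta_x=1e-2, eta_lam=1e-3, rho=10.0):
    """One primal-dual iteration on a penalty-augmented Lagrangian.

    The primal variable descends while the multiplier ascends on a
    slower timescale (eta_lam < eta_x), mirroring the multi-timescale
    idea in the abstract. g(x) <= 0 encodes the inequality constraints;
    grad_g returns the constraint Jacobian. All names are illustrative.
    """
    viol = np.maximum(g(x), 0.0)                          # constraint violation
    x_new = x - eta_x * (grad_f(x) + grad_g(x).T @ (lam + rho * viol))
    lam_new = np.maximum(lam + eta_lam * g(x_new), 0.0)   # projected dual ascent
    return x_new, lam_new
```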
3. A Primal-Dual SGD Algorithm for Distributed Nonconvex Optimization (Cited: 7)
Authors: Xinlei Yi, Shengjun Zhang, Tao Yang, Tianyou Chai, Karl Henrik Johansson. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2022, Issue 5, pp. 812-833.
The distributed nonconvex optimization problem of minimizing a global cost function formed as the sum of n local cost functions through local information exchange is considered. This problem is an important component of many machine learning techniques with data parallelism, such as deep learning and federated learning. We propose a distributed primal-dual stochastic gradient descent (SGD) algorithm suitable for arbitrarily connected communication networks and any smooth (possibly nonconvex) cost functions. We show that the proposed algorithm achieves the linear-speedup convergence rate O(1/√(nT)) for general nonconvex cost functions and the linear-speedup convergence rate O(1/(nT)) when the global cost function satisfies the Polyak-Lojasiewicz (P-L) condition, where T is the total number of iterations. We also show that the output of the proposed algorithm with constant parameters converges linearly to a neighborhood of a global optimum. Numerical experiments demonstrate the efficiency of our algorithm in comparison with baseline centralized SGD and recently proposed distributed SGD algorithms. (See the sketch after the keywords below.)
Keywords: distributed nonconvex optimization; linear speedup; Polyak-Lojasiewicz (P-L) condition; primal-dual algorithm; stochastic gradient descent
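The distributed pattern under analysis can be sketched as one synchronous round of gossip averaging followed by a local stochastic gradient step; this omits the paper's dual correction terms, and the mixing matrix W and the callable stoch_grads are assumptions for illustration.

```python
import numpy as np

def distributed_sgd_round(X, W, stoch_grads, lr=0.05):
    """One synchronous round of consensus mixing plus local SGD.

    X is an (n, d) stack of the n agents' iterates, W a doubly
    stochastic mixing matrix matching the communication graph, and
    stoch_grads(X) returns an (n, d) stack of local stochastic
    gradients. The paper's primal-dual correction terms are omitted.
    """
    X_mixed = W @ X                     # gossip/consensus step over the network
    return X_mixed - lr * stoch_grads(X_mixed)
```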
4. Improved nonconvex optimization model for low-rank matrix recovery (Cited: 1)
Authors: Li Lingzhi, Zou Beiji, Zhu Chengzhang. Journal of Central South University (SCIE, EI, CAS, CSCD), 2015, Issue 3, pp. 984-991.
Low-rank matrix recovery is an important problem extensively studied in the machine learning, data mining and computer vision communities. A novel method is proposed for low-rank matrix recovery, targeting higher recovery accuracy and a stronger theoretical guarantee. Specifically, the proposed method is based on a nonconvex optimization model by which the low-rank matrix can be recovered from a noisy observation. To solve the model, an effective algorithm is derived by minimizing over the variables alternately. It is proved theoretically that this algorithm has a stronger theoretical guarantee than existing work. In natural image denoising experiments, the proposed method achieves lower recovery error than the two compared methods. The proposed low-rank matrix recovery method is also applied to two real-world problems, i.e., removing noise from verification codes and removing watermarks from images, in which the images recovered by the proposed method are less noisy than those of the two compared methods. (See the sketch after the keywords below.)
Keywords: machine learning; computer vision; matrix recovery; nonconvex optimization
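Alternating solvers for low-rank models typically reuse a singular value thresholding step; the snippet below shows that standard building block only, as a stand-in rather than the paper's exact subproblem.

```python
import numpy as np

def svt(Y, tau):
    """Singular value thresholding: the prox of tau * nuclear norm at Y.

    The abstract's model is nonconvex and solved by alternating
    minimization; this convex building block is shown because low-rank
    updates of this shape recur in such solvers (a stand-in, not the
    paper's exact subproblem).
    """
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```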
5. On the Global Convergence of the PERRY-SHANNO Method for Nonconvex Unconstrained Optimization Problems
Authors: Linghua Huang, Qingjun Wu, Gonglin Yuan. Applied Mathematics, 2011, Issue 3, pp. 315-320.
In this paper, we prove the global convergence of Perry-Shanno's memoryless quasi-Newton (PSMQN) method with a new inexact line search when applied to nonconvex unconstrained minimization problems. Preliminary numerical results show that PSMQN with the particular line search conditions is very promising.
Keywords: unconstrained optimization; nonconvex optimization; global convergence
6. A SUPERLINEARLY CONVERGENT SPLITTING FEASIBLE SEQUENTIAL QUADRATIC OPTIMIZATION METHOD FOR TWO-BLOCK LARGE-SCALE SMOOTH OPTIMIZATION
Authors: Jian Jinbao, Zhang Chen, Liu Pengjie. Acta Mathematica Scientia (SCIE, CSCD), 2023, Issue 1, pp. 1-24.
This paper discusses the two-block large-scale nonconvex optimization problem with general linear constraints. Based on the ideas of splitting and sequential quadratic optimization (SQO), a new feasible descent method for the discussed problem is proposed. First, we consider the quadratic optimization (QO) approximation problem associated with the current feasible iterate, and we split the QO into two small-scale QOs that can be solved in parallel. Second, a feasible descent direction for the problem is obtained and a new SQO-type method is proposed, namely, the splitting feasible SQO (SF-SQO) method. Moreover, under suitable conditions, we analyze the global convergence, strong convergence and superlinear convergence rate of the SF-SQO method. Finally, preliminary numerical experiments on the economic dispatch of a power system are carried out, and these show that the SF-SQO method is promising.
Keywords: large-scale optimization; two-block smooth optimization; splitting method; feasible sequential quadratic optimization method; superlinear convergence
7. Distributed optimization for discrete-time multiagent systems with nonconvex control input constraints and switching topologies
Authors: Xiao-Yu Shen, Shuai Su, Hai-Liang Hou. Chinese Physics B (SCIE, EI, CAS, CSCD), 2021, Issue 12, pp. 283-290.
This paper addresses the distributed optimization problem of discrete-time multiagent systems with nonconvex control input constraints and switching topologies. We introduce a novel distributed optimization algorithm with a switching mechanism to guarantee that all agents eventually converge to an optimal solution point while their control inputs remain constrained to their own nonconvex regions. It is worth noting that the mechanism is designed to handle the coexistence of the nonconvex constraint operator and the optimization gradient term. Based on a dynamic transformation technique, the original nonlinear dynamic system is transformed into an equivalent one with a nonlinear error term. By utilizing nonnegative matrix theory, it is shown that the optimization problem can be solved when the union of the switching communication graphs is jointly strongly connected. Finally, a numerical simulation example demonstrates the theoretical results.
Keywords: multiagent systems; nonconvex input constraints; switching topologies; distributed optimization
8. Margin optimization algorithm for digital subscriber lines based on particle swarm optimization (Cited: 1)
Authors: Tang Meiqin, Guan Xinping. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2009, Issue 6, pp. 1316-1323.
The margin maximization problem in digital subscriber line (DSL) systems is investigated. Particle swarm optimization (PSO) theory is applied to the nonconvex margin optimization problem with target power and rate constraints. PSO is an evolutionary algorithm based on the social behavior of swarms, which can solve discontinuous, nonconvex and nonlinear problems efficiently. The proposed algorithm converges to the global optimal solution, and a numerical example demonstrates that it achieves fast convergence within a few iterations. (See the sketch after the keywords below.)
Keywords: digital subscriber line; margin; nonconvex; particle swarm optimization
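For readers who want the baseline method in code, below is a textbook PSO loop on a box; the paper's power and rate constraints would enter through penalty terms in the objective f, and the coefficients here are common defaults, not the paper's settings.

```python
import numpy as np

def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, lo=-1.0, hi=1.0):
    """Plain particle swarm optimization over the box [lo, hi]^dim.

    Textbook PSO, shown only to make the abstract's method concrete:
    each particle is pulled toward its personal best and the swarm's
    global best, with inertia w and random mixing r1, r2.
    """
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, (n, dim)); v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)            # keep particles in the box
        val = np.apply_along_axis(f, 1, x)
        better = val < pval                   # update personal bests
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()]              # update global best
    return g, pval.min()
```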
9. A Hybrid and Inexact Algorithm for Nonconvex and Nonsmooth Optimization
Authors: WANG Yiyang, SONG Xiaoliang. Journal of Systems Science & Complexity, 2025, Issue 3, pp. 1330-1350.
The problem of nonconvex and nonsmooth optimization (NNO) has been extensively studied in the machine learning community, leading to the development of numerous fast and convergent numerical algorithms. Existing algorithms typically employ unified iteration schemes and require explicit solutions to subproblems to ensure convergence. However, these inflexible iteration schemes overlook task-specific details and may have difficulty providing explicit subproblem solutions. In contrast, there is evidence that practical applications can benefit from approximately solving subproblems; however, many existing works fail to establish the theoretical validity of such approximations. In this paper, the authors propose a hybrid inexact proximal alternating method (hiPAM), which addresses a general NNO problem with coupled terms while overcoming all of the aforementioned challenges. The proposed hiPAM algorithm offers a flexible yet highly efficient approach by seamlessly integrating any efficient method for approximate subproblem solving that caters to task specifics. Additionally, the authors devise a simple yet implementable stopping criterion that generates a Cauchy sequence and ultimately converges to a critical point of the original NNO problem. Numerical experiments using both simulated and real data demonstrate that hiPAM is an exceedingly efficient and robust approach to NNO problems. (See the sketch after the keywords below.)
Keywords: hybrid inexact proximal alternating method; inexact minimization criteria; machine learning; nonconvex and nonsmooth optimization
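The iteration family hiPAM belongs to can be written schematically as two approximate block-proximal solves plus an implementable stopping test; every callable below is a placeholder, not the paper's API.

```python
def inexact_pam(x, y, solve_x, solve_y, stop, max_iter=100):
    """Inexact proximal alternating minimization, schematically.

    Each block subproblem is solved only approximately by a
    user-chosen routine, and an implementable stopping test decides
    both inexactness and termination. All callables are placeholders
    standing in for task-specific solvers.
    """
    for _ in range(max_iter):
        x = solve_x(x, y)   # approximate proximal step in the x-block
        y = solve_y(x, y)   # approximate proximal step in the y-block
        if stop(x, y):      # implementable inexactness/termination test
            break
    return x, y
```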
10. An Effective Algorithm for Quadratic Optimization with Non-Convex Inhomogeneous Quadratic Constraints
Author: Kaiyao Lou. Advances in Pure Mathematics, 2017, Issue 4, pp. 314-323.
This paper considers the NP (non-deterministic polynomial)-hard problem of finding the minimum value of a quadratic program (QP) subject to m nonconvex inhomogeneous quadratic constraints. An effective algorithm is proposed to obtain a feasible solution based on the optimal solution of the problem's semidefinite programming (SDP) relaxation. (See the sketch after the keywords below.)
Keywords: nonconvex inhomogeneous quadratically constrained quadratic optimization; semidefinite programming relaxation; NP-hard
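A standard way to set up the SDP relaxation step the abstract builds on is the Shor lifting Z ≈ [x; 1][x; 1]^T; the sketch below uses the cvxpy modeling library as an assumed tool, and the rounding that recovers a feasible point (the paper's contribution) is not reproduced.

```python
import cvxpy as cp

def sdp_relaxation(A0, b0, As, bs, cs):
    """Shor SDP relaxation of an inhomogeneous QCQP.

    Relaxes  min x'A0x + b0'x  s.t.  x'Ai x + bi'x + ci <= 0  by
    lifting to a PSD matrix Z ~ [x; 1][x; 1]^T, so X = Z[:n, :n]
    stands in for xx' and x = Z[:n, n]. Needs an SDP-capable solver
    (SCS ships with cvxpy).
    """
    n = len(b0)
    Z = cp.Variable((n + 1, n + 1), PSD=True)
    X, x = Z[:n, :n], Z[:n, n]
    cons = [Z[n, n] == 1]                      # homogenization entry
    for A, b, c in zip(As, bs, cs):
        cons.append(cp.trace(A @ X) + b @ x + c <= 0)
    prob = cp.Problem(cp.Minimize(cp.trace(A0 @ X) + b0 @ x), cons)
    prob.solve()
    return X.value, x.value, prob.value        # relaxed solution and lower bound
```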
11. Convergence of Bregman Alternating Direction Method of Multipliers for Nonseparable Nonconvex Objective with Linear Constraints
Authors: Xiaotong Zeng, Junping Yao, Haoming Xia. Journal of Applied Mathematics and Physics, 2024, Issue 2, pp. 639-660.
In this paper, our focus lies on addressing a two-block linearly constrained nonseparable nonconvex optimization problem with coupling terms. The most classical algorithm for such problems, the alternating direction method of multipliers (ADMM), still requires the gradient Lipschitz continuity assumption on the objective function to ensure overall convergence under current theory. However, many practical applications do not satisfy this smoothness condition. In this study, we establish the convergence of a variant Bregman ADMM for the problem with coupling terms, circumventing the issue of global Lipschitz continuity of the gradient. We demonstrate that the iterative sequence generated by our approach converges to a critical point of the problem when the corresponding function fulfills the Kurdyka-Lojasiewicz inequality and certain assumptions hold. In addition, we establish the convergence rate of the algorithm. (See the sketch after the keywords below.)
Keywords: nonseparable nonconvex optimization; Bregman ADMM; Kurdyka-Lojasiewicz inequality
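For orientation, a generic two-block Bregman ADMM template for min f(x) + g(y) + H(x, y) subject to Ax + By = b reads as follows, where D_φ denotes the Bregman distance induced by φ; this is a textbook form, and the paper's variant may linearize or weight terms differently.

```latex
\begin{aligned}
x^{k+1} &\in \arg\min_{x}\; \mathcal{L}_{\beta}(x, y^{k}, \lambda^{k}) + D_{\phi_{1}}(x, x^{k}),\\
y^{k+1} &\in \arg\min_{y}\; \mathcal{L}_{\beta}(x^{k+1}, y, \lambda^{k}) + D_{\phi_{2}}(y, y^{k}),\\
\lambda^{k+1} &= \lambda^{k} - \beta\,(A x^{k+1} + B y^{k+1} - b),
\end{aligned}
\qquad
\mathcal{L}_{\beta}(x,y,\lambda) = f(x) + g(y) + H(x,y)
  - \langle \lambda,\, Ax + By - b \rangle
  + \tfrac{\beta}{2}\,\|Ax + By - b\|^{2}.
```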
12. A Neurodynamic Algorithm for Distributed Nonconvex Optimization Problems
Authors: Yu Xin, Huang Qingzhou, Lin Rixin, Chen Mingyun. Journal of Guangxi University (Natural Science Edition), 2025, Issue 5, pp. 1073-1087.
To solve a class of distributed nonconvex optimization problems with inequality constraints, a neurodynamic algorithm is proposed. In this problem, the sum of the agents' local objective functions may be nonconvex and nonsmooth. The proposed algorithm has a special communication mechanism in which each agent transmits to its neighbors only the sign information of specific relative states; under the regulation of a penalty parameter, the agents' state solutions enter the feasible region in finite time and reach consensus. Subsequently, the state solutions asymptotically converge to the critical point set of the original distributed nonconvex optimization problem and remain stable. Simulation results verify the effectiveness of the proposed algorithm. Finally, the algorithm is applied to a projectile-motion problem from physics. (See the sketch after the keywords below.)
Keywords: distributed optimization; nonconvex problems; neurodynamic algorithm; critical point set; finite-time consensus
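A discrete-time caricature of the sign-based communication the abstract describes is given below; the exact penalty rules and finite-time analysis of the paper are not reproduced, and all gains are illustrative assumptions.

```python
import numpy as np

def sign_consensus_step(x, A, subgrads, k=1.0, rho=5.0, dt=1e-3):
    """Euler step of a sign-based neurodynamic consensus flow.

    x is an (n, d) stack of agent states, A an (n, n) adjacency
    matrix, and subgrads(x) an (n, d) stack of local subgradients.
    Each agent moves along minus its subgradient plus penalized sign
    feedback on relative states to its neighbors, so only sign
    information crosses the network.
    """
    sgn = np.sign(x[:, None, :] - x[None, :, :])      # pairwise relative-state signs
    coupling = -(A[:, :, None] * sgn).sum(axis=1)     # neighbor sign feedback
    return x + dt * (-k * subgrads(x) + rho * coupling)
```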
13. Convergence Rate Analysis of the LION Optimizer
Authors: Dong Yiming, Li Huan, Lin Zhouchen. Chinese Journal of Computers, 2025, Issue 9, pp. 2008-2029.
LION (evoLved sIgn mOmeNtum) is an optimizer discovered by Google through heuristic program search and is a distinctive learned optimization algorithm. By maintaining two different interpolations between the previous step's momentum and the current step's gradient, and by effectively combining decoupled weight decay, LION outperforms traditional sign-based gradient descent algorithms. It has shown strong advantages in many large-scale deep learning problems and has been widely adopted. However, although existing work has proved the convergence of LION, no study has yet given a comprehensive convergence rate analysis. Existing research has shown that LION solves a particular class of box-constrained optimization problems; this paper proves that, under the ℓ1-norm measure, LION converges to a Karush-Kuhn-Tucker (KKT) point of this class of problems at the rate O(√d·K^(-1/4)), where d is the problem dimension and K is the number of iterations. Furthermore, removing the constraints, we prove that LION converges at the same rate to a stationary point of the objective on general unconstrained problems. Compared with existing work, the proven rate achieves the optimal dependence on the problem dimension d; with respect to the iteration count K, the rate also matches the optimal theoretical lower bound achievable by stochastic gradient methods on nonconvex optimization problems. Moreover, this theoretical lower bound is measured in the ℓ2 norm of the gradient, while sign-based gradient descent algorithms such as LION are usually measured in the larger ℓ1 norm. Since convergence rates obtained under different gradient-norm measures can differ in their dependence on the problem dimension d, we design comprehensive experiments on a variety of deep learning tasks to verify that the proven rate is also optimal with respect to d: the experiments not only show that LION attains lower training loss and stronger performance than stochastic gradient descent, which likewise matches the theoretical lower bound, but also verify that the ℓ1/ℓ2 norm ratio of the gradient stays on the order of Θ(√d) throughout LION's iterations, empirically confirming that the proven convergence rate also matches the optimal lower bound with respect to d. (See the sketch after the keywords below.)
Keywords: machine learning; deep learning; nonconvex optimization; convergence rate analysis; LION optimizer
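The update being analyzed is the LION rule of Chen et al. (2023); a minimal NumPy rendering follows, with the commonly cited default hyperparameters rather than any settings from this paper.

```python
import numpy as np

def lion_step(w, m, grad, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.01):
    """One LION update.

    Two interpolations between the previous momentum m and the current
    gradient: one (beta1) is passed through sign() to form the update
    direction, the other (beta2) refreshes the momentum. Weight decay
    is decoupled, added directly to the update rather than the gradient.
    """
    update = np.sign(beta1 * m + (1.0 - beta1) * grad) + wd * w
    w_new = w - lr * update
    m_new = beta2 * m + (1.0 - beta2) * grad
    return w_new, m_new
```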
14. Convergence Analysis of a Proximal Symmetric ADMM for Nonconvex Consensus Problems
Authors: Zhang Jingwen, Dang Yazheng, Ni Shihao, Qiao Junwei. Chinese Journal of Engineering Mathematics, 2025, Issue 4, pp. 721-735.
Research on the alternating direction method of multipliers (ADMM) for two-block optimization has gradually matured, but studies on nonconvex multi-block optimization remain scarce. A symmetric proximal ADMM with relaxed step-size parameters is proposed for solving nonconvex consensus problems. Under appropriate assumptions, the global convergence of the algorithm is proved. Furthermore, when the merit function satisfies the Kurdyka-Lojasiewicz (KL) property, strong convergence is established. Finally, numerical experiments verify the effectiveness of the algorithm.
Keywords: nonconvex optimization; consensus problem; alternating direction method of multipliers; Kurdyka-Lojasiewicz property; convergence
15. Asymptotic Characterizations of the Nonemptiness and Boundedness of the Efficient Solution Set of Nonconvex Multiobjective Optimization Problems
Authors: Liu Ying, Fu Xiaoheng, Tang Liping. Applied Mathematics and Mechanics, 2025, Issue 4, pp. 519-527.
The nonemptiness and boundedness of the solution set of an optimization problem play an important role in the study of numerical algorithms. Using tools of asymptotic analysis, this paper studies the nonemptiness and boundedness of the efficient solution set of nonconvex multiobjective optimization problems under regularity conditions. First, under a regularity condition, inner and outer asymptotic estimates of the efficient solution set and the properly efficient solution set are established. Then, based on these estimates, asymptotic characterizations of the nonemptiness and boundedness of the efficient solution set are obtained. Finally, a necessary condition for the existence of efficient solutions is given.
Keywords: nonconvex multiobjective optimization problem; efficient solution; regularity; asymptotic cone; asymptotic function
16. A Golden-Ratio Proximal Alternating Linearized Algorithm for Nonconvex Composite Optimization Problems
Authors: Zeng Kang, Long Xianjun. Operations Research Transactions, 2025, Issue 2, pp. 80-94.
This paper considers a class of fully nonconvex composite optimization problems whose objective consists of two parts: a continuously differentiable nonconvex function that is nonseparable in the global variable, and two proper lower semicontinuous nonconvex functions of the independent variables. A new golden-ratio proximal alternating linearized minimization algorithm is proposed for this problem. Under the Kurdyka-Lojasiewicz (KL) property, it is proved that the iterative sequence generated by the algorithm converges to a stationary point of the problem. Finally, the new algorithm is applied to sparse signal recovery, and numerical experiments verify its effectiveness and superiority.
Keywords: nonconvex composite optimization problem; golden-ratio proximal alternating linearized algorithm; KL property; convergence
17. A Convex Upper Approximation Method for Nonconvex Multiobjective Optimization Problems
Authors: Huo Ziyan, Tang Liping. Journal of Chongqing Normal University (Natural Science Edition), 2025, Issue 2, pp. 68-77.
A convex upper approximation method is proposed for solving nonconvex multiobjective optimization problems. First, the multiobjective problem is converted into a single-objective problem via the ε-constraint method. Second, the nonconvex constraint functions are approximated by a class of convex upper estimation functions, a sequence of convex relaxed subproblems is constructed, and a sequential parametric convex approximation algorithm is designed. Then, under appropriate conditions, it is proved that the iterative sequence generated by the algorithm converges to a KKT point of the original multiobjective problem. Finally, numerical experiments verify the feasibility of the algorithm. (See the sketch after the keywords below.)
Keywords: nonconvex multiobjective optimization; convex upper approximation method; convex upper estimation function; KKT point
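The ε-constraint first stage of the method can be sketched with a general-purpose NLP solver standing in for the paper's convex upper estimates; the SciPy call below is therefore only a baseline illustration, and all names are assumptions.

```python
from scipy.optimize import minimize

def eps_constraint(fs, j, eps, x0, bounds=None):
    """epsilon-constraint scalarization of a multiobjective problem.

    Minimizes objective fs[j] while every other objective fs[i] is
    capped at eps[i], which is the scalarization stage the abstract
    describes. The paper then replaces the (nonconvex) constraints by
    convex upper estimates; a generic NLP solver stands in for that
    machinery here.
    """
    cons = [{"type": "ineq", "fun": (lambda x, i=i: eps[i] - fs[i](x))}
            for i in range(len(fs)) if i != j]   # fs[i](x) <= eps[i]
    return minimize(fs[j], x0, constraints=cons, bounds=bounds)
```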
18. A splicing algorithm for best subset selection in sliced inverse regression
Authors: Borui Tang, Jin Zhu, Tingyin Wang, Junxian Zhu. Journal of University of Science and Technology of China, 2025, Issue 5, pp. 22-34.
In this study, we examine the problem of sliced inverse regression (SIR), a widely used method for sufficient dimension reduction (SDR). It was designed to find reduced-dimensional versions of multivariate predictors by replacing them with a minimally adequate collection of their linear combinations without loss of information. Recently, regularization methods have been proposed for SIR to incorporate a sparse structure in the predictors for better interpretability. However, existing methods rely on convex relaxation to bypass the sparsity constraint, which may not lead to the best subset and tends to include irrelevant variables when predictors are correlated. In this study, we approach sparse SIR as a nonconvex optimization problem and directly tackle the sparsity constraint by establishing the optimality conditions and iteratively solving them by means of the splicing technique. Without employing convex relaxation on the sparsity constraint or the orthogonality constraint, our algorithm exhibits superior empirical merits, as evidenced by extensive numerical studies. Computationally, our algorithm is much faster than the relaxed approach for the natural sparse SIR estimator. Statistically, our algorithm surpasses existing methods in accuracy for central subspace estimation and best subset selection, and sustains high performance even with correlated predictors.
Keywords: splicing technique; best subset selection; sliced inverse regression; nonconvex optimization; sparsity constraint; optimality conditions
19. A Half-Proximal Symmetric Splitting Method for Non-Convex Separable Optimization
Authors: Pengjie Liu, Jinbao Jian, Hu Shao, Xiaoquan Wang, Xiangfeng Wang. Acta Mathematica Sinica, English Series, 2025, Issue 8, pp. 2160-2194.
In this paper, we explore convergence and convergence rate results for a new methodology termed the half-proximal symmetric splitting method (HPSSM). This method is designed to address linearly constrained two-block nonconvex separable optimization problems. It integrates a half-proximal term within its first subproblem to cancel out complicated terms in applications where the subproblem is not easy to solve or lacks a simple closed-form solution. To further enhance adaptability in selecting relaxation factor thresholds during the two Lagrange multiplier update steps, we strategically incorporate a relaxation factor as a disturbance parameter within the iterative process of the second subproblem. Building on several foundational assumptions, we establish the subsequential convergence, global convergence, and iteration complexity of HPSSM. Assuming the Kurdyka-Łojasiewicz inequality of Łojasiewicz type holds for the augmented Lagrangian function (ALF), we derive convergence rates for both the ALF sequence and the iterative sequence. To substantiate the effectiveness of HPSSM, extensive numerical experiments are conducted. Moreover, expanding upon the two-block iterative scheme, we present theoretical results for the symmetric splitting method applied to a three-block case.
Keywords: nonconvex separable optimization; half-proximal splitting method; Kurdyka-Łojasiewicz property; convergence and rate analyses
20. Convex and Nonconvex Optimization Based on Neurodynamic Method with Zero-Sum Initial Constraint
Authors: Yiyang Ge, Zhanshan Wang, Bibo Zheng. The International Journal of Intelligent Control and Systems, 2024, Issue 4, pp. 184-194.
A neurodynamic method (NdM) for convex optimization with an equality constraint is proposed in this paper. The method utilizes a neurodynamic system (NdS) that converges to the optimal solution of a convex optimization problem in fixed time. Due to its mathematical simplicity, it can also be combined with reinforcement learning (RL) to solve a class of nonconvex optimization problems. To maintain the mathematical simplicity of the NdS, zero-sum initial constraints are introduced to reduce the number of auxiliary multipliers. First, the initial sum of the state variables must satisfy the equality constraint. Second, the sum of their derivatives is designed to remain zero. To apply the proposed convex optimization algorithm to nonconvex optimization with mixed constraints, the virtual actions in RL are redefined to avoid the use of NdS inequality-constraint multipliers. The proposed NdM serves as an effective search tool in constrained nonconvex optimization algorithms. Numerical examples demonstrate the effectiveness of the proposed algorithm.
Keywords: neurodynamic method (NdM); zero-sum initial constraint; distributed optimization; convex and nonconvex optimization; reinforcement learning (RL)