Abstract: In sensor network localization, positioning methods based on the Received Signal Strength Indication (RSSI) suffer from unstable signal propagation and low positioning accuracy. To address this problem, an RSSI localization algorithm based on Nesterov Accelerated Gradient Descent with Threshold (NAGT) is proposed. The algorithm adopts the Nesterov idea and continuously updates the search momentum to minimize the loss function, thereby solving for the coordinates of the unknown base station; by adding a threshold, the probability of the algorithm falling into a local optimum is reduced. Simulation comparisons show that, relative to the particle swarm algorithm and the stochastic gradient method, NAGT has a clear advantage in both positioning accuracy and efficiency.
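The abstract does not give the update equations, so the following is a minimal sketch of how a plain Nesterov accelerated gradient descent could be applied to an RSSI-style localization loss. The squared range-residual loss, the fixed momentum coefficient mu, and the helper names are assumptions, and the paper's threshold mechanism for escaping local optima is not reproduced.

```python
import numpy as np

def rssi_loss_grad(x, anchors, dists):
    """Gradient of the range-residual loss sum_i (||x - a_i|| - d_i)^2,
    where d_i are distances estimated from RSSI (assumed path-loss model)."""
    g = np.zeros_like(x, dtype=float)
    for a, d in zip(anchors, dists):
        r = np.linalg.norm(x - a)
        g += 2.0 * (r - d) * (x - a) / max(r, 1e-12)
    return g

def nesterov_localize(anchors, dists, x0, lr=0.01, mu=0.9, iters=500):
    """Nesterov accelerated gradient descent: the gradient is evaluated at a
    look-ahead point x + mu*v, and the momentum v is updated from it."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(iters):
        g = rssi_loss_grad(x + mu * v, anchors, dists)
        v = mu * v - lr * g
        x = x + v
    return x
```

With assumed anchors at (0, 0), (10, 0), (0, 10) and RSSI-derived distances of about 7.07 each, the loop should drive the estimate toward a point near (5, 5).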
Funding: Supported by the National Natural Science Foundation of China under Grant No. T2322023 and the Hunan Provincial Natural Science Foundation of China under Grant No. 2022JJ20018.
Abstract: In this paper, the author is concerned with the problem of achieving Nash equilibrium in noncooperative games over networks. The author proposes two types of distributed projected gradient dynamics with accelerated convergence rates. The first type is a variant of the well-known consensus-based gradient dynamics, in which the consensual terms for determining the actions of each player are discarded to accelerate the learning process. The second type is formulated by introducing Nesterov's accelerated method into the distributed projected gradient dynamics. The author proves that both algorithms converge at least linearly under the common assumptions of Lipschitz continuity and strong monotonicity. Simulation examples are presented to show that the proposed algorithms outperform the well-known consensus-based approach and the augmented-game-based approach: the number of iterations required to reach the Nash equilibrium is greatly reduced. These results could help address the issue of long convergence times in partial-information Nash equilibrium seeking algorithms.
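As a rough illustration of the second type of dynamics, the sketch below applies a Nesterov-style extrapolation inside a projected gradient step over the stacked action profile. The box constraint set, the extrapolation weight, and the names pseudo_grad and project_box are assumptions rather than the paper's formulation, and the network communication structure is not shown.

```python
import numpy as np

def project_box(x, lo=-1.0, hi=1.0):
    """Euclidean projection onto a box constraint set (assumed action sets)."""
    return np.clip(x, lo, hi)

def accelerated_projected_step(x, x_prev, pseudo_grad, lr, k):
    """One projected gradient step with Nesterov extrapolation.
    pseudo_grad(y) stacks each player's partial gradient of its own cost."""
    beta = (k - 1.0) / (k + 2.0)            # standard Nesterov extrapolation weight
    y = x + beta * (x - x_prev)             # look-ahead action profile
    x_next = project_box(y - lr * pseudo_grad(y))
    return x_next, x                        # new iterate and previous iterate
```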
Abstract: The Augmented Lagrangian Method is a classical and effective first-order technique for solving convex optimization problems with linear equality constraints. The algorithm converges to the optimal solution through alternating iterative updates of the primal and dual variables. However, in practical applications, the primal update in the subproblem often cannot be solved exactly. In this paper, based on the Augmented Lagrangian Method, dual optimization, and Nesterov's acceleration technique, we propose an inexact version of the Augmented Lagrangian Method. By leveraging the Karush-Kuhn-Tucker (KKT) conditions, we analyze the dual residual and prove that the proposed algorithm achieves a convergence rate of O(1/k²).
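A minimal sketch of the idea, assuming the problem min f(x) s.t. Ax = b: the primal subproblem is solved only approximately by a few gradient steps, and a FISTA-style extrapolation is applied to the multiplier sequence. The step sizes, the inner-loop budget, and the specific extrapolation rule are assumptions; the paper's exact scheme and its O(1/k²) analysis are not reproduced here.

```python
import numpy as np

def inexact_accelerated_alm(f_grad, A, b, x0, rho=1.0, outer=100, inner=20, lr=1e-2):
    """Inexact augmented Lagrangian sketch for min f(x) s.t. Ax = b,
    with Nesterov/FISTA-style extrapolation applied to the multipliers."""
    x = np.asarray(x0, dtype=float)
    lam = np.zeros(A.shape[0])
    lam_prev = lam.copy()
    t = 1.0
    for _ in range(outer):
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        lam_hat = lam + (t - 1.0) / t_next * (lam - lam_prev)    # extrapolated multiplier
        for _ in range(inner):                                   # inexact primal update
            x -= lr * (f_grad(x) + A.T @ (lam_hat + rho * (A @ x - b)))
        lam_prev, lam = lam, lam_hat + rho * (A @ x - b)         # dual ascent step
        t = t_next
    return x, lam
```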
Funding: Supported by the National Key Research and Development Program of China (2022YFB3305900), the National Natural Science Foundation of China (62394343, 62394345), the Major Science and Technology Projects of Longmen Laboratory (No. LMZDXM202206), and the Shanghai Rising-Star Program under Grant 24QA2706100.
Abstract: Considering the complexity of plant-wide optimization in large-scale industries, a distributed optimization framework is proposed to solve the profit optimization problem for the whole ethylene process. To tackle the delays arising from the residence time of materials passing through production units while guaranteeing constraint satisfaction, an asynchronous distributed parameter projection algorithm with gradient tracking is introduced. In addition, heavy-ball momentum and Nesterov momentum are incorporated into the proposed algorithm to achieve a double acceleration property. The experimental results show that the proposed asynchronous algorithm achieves faster convergence than its synchronous counterpart.
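The abstract only names the two momentum terms, so the following is a hedged, single-agent sketch of one way heavy-ball and Nesterov momentum can be combined in a single update ("double acceleration"). The constants and function names are assumptions, and the asynchronous, distributed, projected, gradient-tracking machinery of the paper is not reproduced.

```python
import numpy as np

def doubly_accelerated_step(x, x_prev, grad_fn, lr=0.01, beta_nag=0.5, beta_hb=0.5):
    """One update combining a Nesterov term (gradient taken at an extrapolated
    point) with a heavy-ball term (previous displacement added to the step)."""
    y = x + beta_nag * (x - x_prev)                        # Nesterov look-ahead
    x_next = y - lr * grad_fn(y) + beta_hb * (x - x_prev)  # heavy-ball correction
    return x_next, x
```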
Abstract: The basic idea of the implicit-gradient (GBAL) algorithm for solving constrained minimax problems is to handle the inner optimization problem with the augmented Lagrangian method and then iteratively update the outer variables using implicit gradient information. Building upon this, this paper introduces an accelerated implicit gradient algorithm for constrained minimax problems, which improves performance by updating the outer variables with a variant of the Nesterov accelerated gradient algorithm. Theoretical analysis shows that, when the solution mapping of the inner problem is Lipschitz continuous and the objective function is convex with respect to the outer variables, the proposed accelerated algorithm achieves an R-linear convergence rate. Numerical experiments confirm that the accelerated algorithm exhibits superior performance in terms of computational efficiency and convergence.
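A sketch of the outer loop under stated assumptions: solve_inner stands for an augmented-Lagrangian inner solver and implicit_grad for the implicit gradient it induces (both are placeholders, not the paper's code), and a constant extrapolation weight is used instead of the specific Nesterov variant analyzed in the paper.

```python
import numpy as np

def accelerated_outer_loop(x0, solve_inner, implicit_grad, lr=0.05, beta=0.8, iters=200):
    """Outer iteration for a constrained minimax problem: solve the inner
    maximization at an extrapolated outer point, then take an accelerated
    step along the implicit gradient with respect to the outer variable."""
    x = np.asarray(x0, dtype=float)
    x_prev = x.copy()
    for _ in range(iters):
        z = x + beta * (x - x_prev)       # Nesterov-style extrapolation
        y_star = solve_inner(z)           # inner solution at the look-ahead point
        g = implicit_grad(z, y_star)      # implicit gradient w.r.t. the outer variable
        x_prev, x = x, z - lr * g
    return x
```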
Funding: NSF grants DMS-1720213, DMS-1719854, and DMS-2012634; NSF grants DMS-1720213 and DMS-2111228. The work of S.M. Wise was partially supported by DMS-1719854 and DMS-2012634.
Abstract: We introduce a fast solver for the phase field crystal (PFC) and functionalized Cahn-Hilliard (FCH) equations with periodic boundary conditions on a rectangular domain that features the preconditioned Nesterov's accelerated gradient descent (PAGD) method. We discretize these problems with a Fourier collocation method in space and employ various second-order schemes in time. We observe a significant speedup with this solver when compared to the preconditioned gradient descent (PGD) method. With the PAGD solver, fully implicit, second-order-in-time schemes are not only feasible for the PFC and FCH equations, but in some cases, once accuracy is taken into account, they are also more efficient than some semi-implicit schemes. Benchmark computations of four different schemes for the PFC and FCH equations are conducted, and the results indicate that, for the FCH experiments, the fully implicit schemes (the midpoint rule and BDF2 equipped with PAGD as the nonlinear time-marching solver) perform better than their IMEX versions in terms of the computational cost needed to achieve a given precision. For the PFC, the results are not as conclusive as in the FCH experiments, which, we believe, is because the nonlinearity in the PFC is milder than in the FCH equation. We also discuss some practical matters in applying the PAGD. We introduce an averaged Newton preconditioner and a sweeping-friction strategy as heuristic ways to choose good preconditioner parameters. The sweeping-friction strategy performs almost as well as the best manually tuned parameters.
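As a rough illustration of the PAGD iteration (not the authors' Fourier-collocation implementation), the sketch below wraps a generic preconditioner solve inside a Nesterov loop. The precond_solve callable, the step size, and the extrapolation sequence are assumptions.

```python
import numpy as np

def pagd(grad_fn, precond_solve, x0, lr=1.0, iters=200):
    """Preconditioned Nesterov accelerated gradient descent sketch: each
    gradient is preconditioned by solving M p = g, where M is an (assumed)
    approximation of the Hessian supplied via precond_solve."""
    x = np.asarray(x0, dtype=float)
    x_prev, t = x.copy(), 1.0
    for _ in range(iters):
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x + (t - 1.0) / t_next * (x - x_prev)   # Nesterov extrapolation
        p = precond_solve(grad_fn(y))               # preconditioned direction
        x_prev, x, t = x, y - lr * p, t_next
    return x
```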
Abstract: Deep neural networks are fragile and easily attacked by carefully crafted adversarial examples. Gradient-based attacks achieve high success rates on white-box models but transfer poorly to black-box models. Gradient attacks based on heavy-ball momentum and Nesterov momentum improve the transferability of adversarial examples by incorporating historical gradient information into the update direction. To exploit historical gradients further, this paper targets the Nesterov-type momentum method, which has better convergence, and replaces the widely used fixed step size with an adaptive step-size strategy, yielding an iterative fast gradient method in which both the direction and the step size use historical gradient information (Nesterov and Adaptive-learning-rate based Iterative Fast Gradient Method, NAI-FGM). In addition, a linear-transformation-invariant data augmentation method (Linear-transformation Invariant Method, LIM) is proposed. Experimental results confirm that both the NAI-FGM attack and the LIM augmentation strategy achieve higher black-box attack success rates than methods of the same type. Combining NAI-FGM and LIM to generate adversarial examples, the average black-box attack success rate reaches 87.8% on normally trained models, 57.5% on adversarially trained models, and 67.2% on defense models, all exceeding the current state of the art.
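A hedged sketch of one NAI-FGM-style iteration: the gradient is taken at a Nesterov look-ahead point and the step size shrinks with an accumulator of past gradient magnitudes. The AdaGrad-like accumulator, the L1 normalization, and the omission of the projection onto the perturbation budget are assumptions; the paper's exact adaptive step-size rule and the LIM augmentation are not reproduced.

```python
import numpy as np

def nai_fgm_step(x_adv, v, s, grad_fn, eps_step=1.0/255, mu=1.0):
    """One adversarial-attack step: Nesterov look-ahead gradient, momentum
    accumulation, and an adaptively shrinking step size (sketch only;
    clipping to the epsilon-ball and valid pixel range is omitted)."""
    g = grad_fn(x_adv + eps_step * mu * v)         # gradient at the look-ahead point
    g = g / (np.mean(np.abs(g)) + 1e-12)           # L1-normalize, as in momentum attacks
    v = mu * v + g                                 # direction built from past gradients
    s = s + float(np.mean(g * g))                  # scalar history of gradient energy
    x_adv = x_adv + (eps_step / np.sqrt(s)) * np.sign(v)
    return x_adv, v, s
```

Starting from v = 0 and s = 0.0 and calling this step repeatedly yields a momentum-style iterative attack whose effective step size decays as gradient history accumulates.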