Abstract: In wireless sensor network localization, positioning methods based on Received Signal Strength Indication (RSSI) suffer from unstable signal propagation and low positioning accuracy. To address this problem, an RSSI localization algorithm based on Nesterov Accelerated Gradient Descent with Threshold (NAGT) is proposed. The algorithm adopts the Nesterov idea, continually updating the search momentum to minimize the loss function and thereby solve for the coordinates of the unknown base station; by adding a threshold, the probability of the algorithm falling into a local optimum is reduced. Simulation comparisons show that the NAGT method has clear advantages in positioning accuracy and efficiency over the particle swarm algorithm and the stochastic gradient method.
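The abstract above does not give the NAGT update rule or the threshold value, so the following is only a minimal sketch of the underlying idea: Nesterov accelerated gradient descent applied to a range-based localization loss, with the anchor positions, path-loss-derived distances, and all step-size constants being illustrative assumptions.

```python
import math

# Hypothetical setup: three anchors at known positions; distances would in
# practice be estimated from RSSI via a log-distance path-loss model
# d = 10 ** ((A - rssi) / (10 * n)). Here we use noise-free distances.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.0, 4.0)
dists = [math.dist(true_pos, a) for a in anchors]

def loss_grad(x, y):
    """Gradient of sum_i (||p - a_i|| - d_i)**2 with respect to p = (x, y)."""
    gx = gy = 0.0
    for (ax, ay), d in zip(anchors, dists):
        r = math.hypot(x - ax, y - ay) or 1e-12  # guard against r = 0
        c = 2.0 * (r - d) / r
        gx += c * (x - ax)
        gy += c * (y - ay)
    return gx, gy

# Nesterov accelerated gradient: evaluate the gradient at the look-ahead
# point x + mu*v rather than at x itself.
x, y = 5.0, 5.0        # initial guess for the unknown position
vx = vy = 0.0          # momentum terms
lr, mu = 0.01, 0.9     # illustrative step size and momentum coefficient
for _ in range(500):
    gx, gy = loss_grad(x + mu * vx, y + mu * vy)  # look-ahead gradient
    vx, vy = mu * vx - lr * gx, mu * vy - lr * gy
    x, y = x + vx, y + vy

print(round(x, 2), round(y, 2))  # converges toward the true position (3, 4)
```

The thresholding step of NAGT (used to lower the chance of stopping in a local optimum) is not specified in the abstract and is therefore omitted here.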
Abstract: In this expository paper we present the Monge-Ampère-Kantorovitch (MAK) optimal transport problem and its approximating entropic regularization. In contrast to the MAK optimal transport problem, the solution of the entropic optimal transport problem is always unique, and is characterized by the Schrödinger system. The relationship between the Schrödinger system, the associated Bernstein process, and optimal transport was developed by Léonard [32,33] (and earlier by Mikami [39] via an h-process). We present Sinkhorn's algorithm for solving the Schrödinger system and recent results on its convergence rate. We study the gradient descent algorithm based on the dual optimization problem and prove its exponential convergence, whose rate may be independent of the regularization constant. This exposition is motivated by recent applications of optimal transport to diverse domains such as machine learning, image processing, econometrics, and astrophysics.
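As background for the abstract above, Sinkhorn's algorithm alternately rescales the rows and columns of a Gibbs kernel until both marginal constraints hold. The tiny problem below (marginals, cost matrix, and regularization constant are all illustrative choices, not taken from the paper) shows the iteration for the entropic problem min <P, C> + eps * KL(P || a b^T) subject to P having marginals a and b:

```python
import math

# Illustrative 2x2 entropic optimal transport problem.
a = [0.5, 0.5]                 # source marginal
b = [0.3, 0.7]                 # target marginal
C = [[0.0, 1.0], [1.0, 0.0]]   # cost matrix
eps = 1.0                      # entropic regularization constant
K = [[math.exp(-c / eps) for c in row] for row in C]  # Gibbs kernel

# Sinkhorn iteration: alternating diagonal scalings u, v.
u = [1.0, 1.0]
v = [1.0, 1.0]
for _ in range(200):
    u = [a[i] / sum(K[i][j] * v[j] for j in range(2)) for i in range(2)]
    v = [b[j] / sum(K[i][j] * u[i] for i in range(2)) for j in range(2)]

# Transport plan P = diag(u) K diag(v); at convergence its marginals match a, b.
P = [[u[i] * K[i][j] * v[j] for j in range(2)] for i in range(2)]
row_sums = [sum(row) for row in P]
col_sums = [sum(P[i][j] for i in range(2)) for j in range(2)]
print(row_sums, col_sums)
```

The convergence rate of these scalings degrades as eps shrinks, which is one motivation for the dual gradient descent analysis mentioned in the abstract, whose rate may be independent of the regularization constant.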
Funding: This work was supported by the National Natural Science Foundation of China (11271136, 81530086), the Program of Shanghai Subject Chief Scientist (14XD1401600), and the 111 Project of China (No. B14019).
Abstract: The gradient descent (GD) algorithm is a widely used optimisation method for training machine learning and deep learning models. In this paper, based on GD, Polyak's momentum (PM), and the Nesterov accelerated gradient (NAG), we give the convergence of these algorithms from an initial value to the optimal value of an objective function in simple quadratic form. Based on the convergence property of the quadratic function, the two sister sequences of NAG's iteration, and parallel tangent methods in neural networks, the three-step accelerated gradient (TAG) algorithm is proposed, which has three sequences rather than two sister sequences. To illustrate the performance of this algorithm, we compare it with the three other algorithms on a quadratic function, high-dimensional quadratic functions, and a nonquadratic function. We then combine the TAG algorithm with the backpropagation algorithm and the stochastic gradient descent algorithm in deep learning. To make the proposed algorithms convenient to use, we rewrite the R package 'neuralnet' and extend it to 'supneuralnet'; all deep learning algorithms in this paper are included in the 'supneuralnet' package. Finally, we show that our algorithms are superior to the other algorithms in four case studies.
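The TAG recursion itself is defined in the paper and is not reproduced here. As background, the three baselines it builds on can be compared on an ill-conditioned quadratic, where the benefit of momentum is visible; the curvatures, step size, and momentum coefficient below are illustrative choices:

```python
# GD, Polyak's momentum (PM), and Nesterov's NAG on the quadratic
# f(p) = 0.5 * (m * p1**2 + L * p2**2), with condition number kappa = L/m.
L, m = 100.0, 1.0
grad = lambda p: (m * p[0], L * p[1])
lr = 1.0 / L                                   # standard step size 1/L
kappa = L / m
mu = (kappa**0.5 - 1) / (kappa**0.5 + 1)       # classical momentum choice

def run(step, p0=(1.0, 1.0), tol=1e-8, max_iter=100_000):
    """Iterations until ||p||_inf < tol under the given update rule."""
    p, v, k = p0, (0.0, 0.0), 0
    while max(abs(p[0]), abs(p[1])) >= tol and k < max_iter:
        p, v = step(p, v)
        k += 1
    return k

def gd(p, v):                                  # plain gradient descent
    g = grad(p)
    return (p[0] - lr * g[0], p[1] - lr * g[1]), v

def pm(p, v):                                  # Polyak's heavy-ball momentum
    g = grad(p)
    v = (mu * v[0] - lr * g[0], mu * v[1] - lr * g[1])
    return (p[0] + v[0], p[1] + v[1]), v

def nag(p, v):                                 # Nesterov: look-ahead gradient
    g = grad((p[0] + mu * v[0], p[1] + mu * v[1]))
    v = (mu * v[0] - lr * g[0], mu * v[1] - lr * g[1])
    return (p[0] + v[0], p[1] + v[1]), v

iters = {name: run(step) for name, step in [("GD", gd), ("PM", pm), ("NAG", nag)]}
print(iters)  # the momentum variants need far fewer iterations than plain GD
```

On this problem both accelerated methods converge in roughly a square-root-of-kappa factor fewer iterations than GD, which is the gap TAG's third sequence aims to improve upon further.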
Funding: Supported by the National Natural Science Foundation of China under Grant No. T2322023 and the Hunan Provincial Natural Science Foundation of China under Grant No. 2022JJ20018.
Abstract: In this paper, the author is concerned with the problem of achieving Nash equilibrium in noncooperative games over networks. The author proposes two types of distributed projected gradient dynamics with accelerated convergence rates. The first type is a variant of the commonly known consensus-based gradient dynamics, in which the consensual terms for determining each player's actions are discarded to accelerate the learning process. The second type is formulated by introducing Nesterov's accelerated method into the distributed projected gradient dynamics. The author proves that both algorithms converge at least linearly under the common assumptions of Lipschitz continuity and strong monotonicity. Simulation examples are presented to show that the proposed algorithms outperform the well-known consensus-based approach and the augmented-game-based approach: the number of iterations required to reach the Nash equilibrium is greatly reduced. These results could help address the issue of long convergence times in partial-information Nash equilibrium seeking algorithms.
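The paper's distributed, network-based dynamics are not reproduced here. The sketch below only illustrates the basic building block they accelerate: projected pseudo-gradient play, shown in a full-information setting on a hypothetical two-player quadratic game with actions constrained to [0, 1] (game, costs, and step size are all illustrative assumptions):

```python
# Two-player game: player i minimizes J_i = (x_i - 1)**2 + 0.5 * x1 * x2
# over x_i in [0, 1]. The pseudo-gradient mapping is strongly monotone
# (Jacobian [[2, 0.5], [0.5, 2]]), so projected gradient play converges
# linearly to the unique Nash equilibrium.
def proj(x, lo=0.0, hi=1.0):
    """Euclidean projection onto the interval [lo, hi]."""
    return min(max(x, lo), hi)

x1, x2 = 0.0, 0.0
lr = 0.1
for _ in range(300):
    g1 = 2.0 * (x1 - 1.0) + 0.5 * x2   # partial gradient of J_1 in x_1
    g2 = 2.0 * (x2 - 1.0) + 0.5 * x1   # partial gradient of J_2 in x_2
    x1, x2 = proj(x1 - lr * g1), proj(x2 - lr * g2)

print(round(x1, 4), round(x2, 4))      # Nash equilibrium at (0.8, 0.8)
```

In the partial-information setting studied in the paper, each player cannot observe all opponents' actions directly, and it is this projected update (combined with estimate exchange over the network, and optionally Nesterov acceleration) that the proposed dynamics speed up.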
Funding: Supported by the Science and Technology Innovation 2030 New Generation Artificial Intelligence Major Project (2018AAA0100902), the National Key Research and Development Program of China (2019YFB1705800), and the National Natural Science Foundation of China (61973270).