Funding: Partially supported by NSF Grants DMS-1854434, DMS-1952644, and DMS-2151235 at UC Irvine; also supported by NSF Grants DMS-1924935, DMS-1952339, DMS-2110145, DMS-2152762, and DMS-2208361, and DOE Grants DE-SC0021142 and DE-SC0002722.
Abstract: We prove, under mild conditions, the convergence of a Riemannian gradient descent method for a hyperbolic neural network regression model, for both batch gradient descent and stochastic gradient descent. We also discuss a Riemannian version of the Adam algorithm, and we present numerical simulations of these algorithms on various benchmarks.
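The idea of Riemannian gradient descent in hyperbolic space can be made concrete with a small sketch on the Poincaré ball. This is a toy objective with an illustrative target point `a` and step size, not the paper's regression model: the Euclidean gradient is rescaled by the inverse metric of the ball and the iterate is moved along the exponential map, so it never leaves the manifold.

```python
import numpy as np

def mobius_add(x, y):
    """Mobius addition on the Poincare ball."""
    xy = np.dot(x, y)
    x2, y2 = np.dot(x, x), np.dot(y, y)
    num = (1 + 2 * xy + y2) * x + (1 - x2) * y
    return num / (1 + 2 * xy + x2 * y2)

def exp_map(x, v):
    """Exponential map at x on the Poincare ball."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return x
    lam = 2.0 / (1 - np.dot(x, x))  # conformal factor of the ball metric
    return mobius_add(x, np.tanh(lam * nv / 2) * v / nv)

# Toy objective f(x) = ||x - a||^2 with minimizer a inside the ball.
a = np.array([0.3, 0.4])
x = np.zeros(2)
lr = 0.2
for _ in range(500):
    egrad = 2 * (x - a)                            # Euclidean gradient
    rgrad = ((1 - np.dot(x, x)) ** 2 / 4) * egrad  # Riemannian gradient
    x = exp_map(x, -lr * rgrad)                    # stay on the manifold

print(np.round(x, 4))  # converges to a = [0.3, 0.4]
```

The rescaling factor (1 - ||x||^2)^2 / 4 is the inverse of the Poincaré metric's conformal factor squared; without it, steps near the boundary of the ball would be far too large in the hyperbolic geometry.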
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 11261038 and 11271132) and the Jiangxi Normal University Youth Development Fund.
Abstract: In this paper, we give a local Hamilton-type gradient estimate for a nonlinear parabolic equation on Riemannian manifolds. As applications, a Harnack-type inequality and a Liouville-type theorem are obtained.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 10871069 and 11261038) and the Shanghai Leading Academic Discipline Project (Grant No. B407).
Abstract: In this paper, we study gradient estimates for the nonlinear heat equation u_t − Δu = au log u on a compact Riemannian manifold with or without boundary. We obtain a Hamilton-type gradient estimate for positive smooth solutions of the equation on closed manifolds, and a Li–Yau-type gradient estimate for positive smooth solutions on compact manifolds with nonconvex boundary.
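For orientation, the classical prototypes that such estimates generalize can be stated for the linear heat equation u_t = Δu (the case a = 0); the constants below are those of the classical results, not of the theorems in this paper:

```latex
% Hamilton-type estimate (closed manifold, Ric >= -K,
% positive solution u <= M of u_t = \Delta u):
\frac{|\nabla u|^2}{u^2} \;\le\; \left(\frac{1}{t} + 2K\right)\log\frac{M}{u}

% Li--Yau-type estimate (complete manifold, Ric >= 0):
\frac{|\nabla u|^2}{u^2} - \frac{u_t}{u} \;\le\; \frac{n}{2t}
```

The Hamilton-type bound controls |∇u|/u pointwise in terms of log(M/u), while the Li–Yau-type bound trades the upper bound M for a time-derivative term; the abstract's results extend these two shapes of estimate to the nonlinearity au log u.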
Funding: Supported by the National Key Research and Development Program of China (2022YFA1004701), the National Natural Science Foundation of China (72271187, 62373283), and the Shanghai Municipal Science and Technology Major Project (2021SHZDZX0100).
Abstract: Dear Editor, this letter proposes a continuous-time semi-decentralized algorithm to minimize a sum of local cost functions on SO(3) over a multi-agent network. Inspired by the distributed subgradient method in [1], the algorithm combines a consensus protocol on SO(3) with a local Riemannian gradient term, with the state of each agent evolving on the nonlinear manifold. In the absence of global information at each node, a coordinator is introduced into the communication network to ensure that all agents converge to consensus. Using Lyapunov approaches, it is shown that the proposed algorithm reaches an optimal solution.
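A single-agent version of the Riemannian gradient step on SO(3) can be sketched as follows. This is a hypothetical Frobenius-distance cost with an illustrative step size and target rotation, not the letter's multi-agent consensus dynamics: the Euclidean gradient is projected onto the tangent space (skew-symmetric matrices in the Lie algebra) and the iterate is updated through the group exponential, so it remains a rotation matrix throughout.

```python
import numpy as np

def expm_so3(Omega):
    """Matrix exponential of a 3x3 skew-symmetric matrix (Rodrigues formula)."""
    w = np.array([Omega[2, 1], Omega[0, 2], Omega[1, 0]])
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3) + Omega
    return (np.eye(3) + (np.sin(theta) / theta) * Omega
            + ((1 - np.cos(theta)) / theta**2) * (Omega @ Omega))

def skew(A):
    """Project a matrix onto the skew-symmetric matrices."""
    return (A - A.T) / 2

# Target rotation: angle 1.2 rad about the z-axis.
th = 1.2
T = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0,         0.0,        1.0]])

# Minimize f(R) = ||R - T||_F^2 by Riemannian gradient descent on SO(3).
R = np.eye(3)
lr = 0.2
for _ in range(200):
    egrad = 2 * (R - T)         # Euclidean gradient of f at R
    xi = skew(R.T @ egrad)      # Riemannian gradient, pulled back to the Lie algebra
    R = R @ expm_so3(-lr * xi)  # exponential-map update stays on SO(3)

print(np.allclose(R, T, atol=1e-6))  # True
```

The projection to skew-symmetric matrices is what makes this a manifold method: a plain Euclidean update R − lr * egrad would leave SO(3), whereas R @ expm_so3(·) is a rotation whenever R is.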