Funding: the National Key Research and Development Program of China (No. 2016YFB0901900); the National Natural Science Foundation of China (No. 61733018); the China Postdoctoral Science Foundation Special Funded Project (No. Y990075G21).
Abstract: In this paper, we consider a distributed resource allocation problem of minimizing a global convex function formed by a sum of local convex functions with coupling constraints. Based on neighbor communication and stochastic gradients, a distributed stochastic mirror descent algorithm is designed for this problem. Sublinear convergence of the proposed algorithm to an optimal solution is established when the second moments of the gradient noises are summable. A numerical example is also given to illustrate the effectiveness of the proposed algorithm.
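The core update behind such an algorithm can be illustrated in a minimal single-agent form (the paper's method is distributed over a network; the quadratic objective, noise level, and all names below are illustrative assumptions, not the authors' exact scheme). With the negative-entropy mirror map, stochastic mirror descent over a simplex-constrained allocation becomes a multiplicative-weights update:

```python
import numpy as np

def stochastic_mirror_descent(grad_oracle, x0, steps, eta):
    """Stochastic mirror descent over the probability simplex with the
    negative-entropy mirror map (multiplicative-weights update)."""
    x = x0.copy()
    for _ in range(steps):
        g = grad_oracle(x)          # noisy gradient sample
        x = x * np.exp(-eta * g)    # mirror (exponentiated-gradient) step
        x = x / x.sum()             # Bregman projection back onto the simplex
    return x

rng = np.random.default_rng(0)
a = np.array([0.1, 0.5, 0.4])       # hypothetical target allocation (sums to 1)
# f(x) = 0.5 * ||x - a||^2, observed through a noisy gradient oracle
oracle = lambda x: (x - a) + 0.01 * rng.standard_normal(3)
x_star = stochastic_mirror_descent(oracle, np.ones(3) / 3, steps=2000, eta=0.1)
```

The entropy mirror map keeps the iterates strictly inside the simplex at every step, which is why mirror descent is a natural fit for resource-allocation constraints.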
Funding: partially supported by the National Natural Science Foundation of China (No. 12288201).
Abstract: Mirror descent, which can be seen as a generalization of gradient descent for solving constrained optimization problems, has found a variety of applications in many fields. With the growing demand for solving high-dimensional constrained optimization problems, an accelerated form of mirror descent has been proposed, and its corresponding low-resolution ordinary differential equation (ODE) framework has been studied. However, the low-resolution ODEs are unable to distinguish between Polyak's heavy-ball method and Nesterov's accelerated gradient method, and the same problem arises with the low-resolution ODEs for accelerated mirror descent. To address this issue, we derive the high-resolution ODEs for accelerated mirror descent and propose a general Lyapunov function framework to analyze its convergence rates in both continuous time and discrete time. Furthermore, using the high-resolution ODE framework, we demonstrate that accelerated mirror descent minimizes the squared gradient norm at an inverse cubic rate. Finally, we extend the high-resolution ODE framework to the accelerated higher-order mirror descent method and obtain finer convergence results.
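For orientation, the discrete method whose continuous limits are at issue can be sketched in its Euclidean special case (mirror map 0.5*||x||^2, i.e. plain Nesterov acceleration; the function names and test problem below are illustrative, not the paper's general mirror-map setting). The distinguishing feature the high-resolution analysis captures is that the gradient is evaluated at the extrapolated point y, unlike heavy-ball:

```python
import numpy as np

def accelerated_gradient(grad, x0, L, steps):
    """Nesterov's accelerated gradient method -- the Euclidean special case of
    accelerated mirror descent. Evaluating grad at the extrapolated point y
    (rather than at x, as heavy-ball does) is the gradient-correction term
    that only the high-resolution ODE retains."""
    x = x0.copy()
    y = x0.copy()
    s = 1.0 / L                                      # step size from smoothness
    for k in range(steps):
        x_next = y - s * grad(y)                     # gradient step at y
        y = x_next + (k / (k + 3.0)) * (x_next - x)  # momentum extrapolation
        x = x_next
    return x

A = np.diag([1.0, 10.0])                  # ill-conditioned quadratic test problem
grad = lambda x: A @ x                    # f(x) = 0.5 * x^T A x, minimized at 0
x_min = accelerated_gradient(grad, np.array([1.0, 1.0]), L=10.0, steps=500)
```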
Abstract: Driven by large-scale optimization problems arising in machine learning, the development of stochastic optimization methods has witnessed huge growth. Numerous methods have been developed based on the vanilla stochastic gradient descent method. However, for most algorithms the convergence rate in the stochastic setting cannot simply match that in the deterministic setting. Better understanding the gap between deterministic and stochastic optimization is the main goal of this paper. Specifically, we are interested in Nesterov acceleration of gradient-based approaches. In our study, we focus on acceleration of the stochastic mirror descent method with an implicit regularization property. Assuming that the problem objective is smooth and convex or strongly convex, our analysis prescribes the method parameters that ensure fast convergence of the estimation error and satisfactory numerical performance.
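One concrete face of the deterministic-stochastic gap mentioned above is the step-size schedule: under persistent gradient noise, a decaying step is typically required, whereas a constant step suffices deterministically. A minimal sketch (the objective, noise level, and decay constant are illustrative assumptions, not the paper's accelerated method):

```python
import numpy as np

def sgd_decaying(grad_oracle, x0, steps, c):
    """Plain SGD with an O(1/sqrt(t)) step-size decay, typically needed to
    converge under persistent gradient noise; a constant step would only
    reach a noise-dominated neighborhood of the optimum."""
    x = x0.copy()
    for t in range(steps):
        x = x - (c / np.sqrt(t + 1.0)) * grad_oracle(x)
    return x

rng = np.random.default_rng(1)
b = np.array([2.0, -1.0])
# f(x) = 0.5 * ||x - b||^2 with additive gradient noise of std 0.1
oracle = lambda x: (x - b) + 0.1 * rng.standard_normal(2)
x_hat = sgd_decaying(oracle, np.zeros(2), steps=5000, c=0.5)
```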
Funding: supported in part by the National Natural Science Foundation of China (NSFC) (62473103, 62203169, 62473121) and the Postdoctoral Science Foundation of Zhejiang Province (ZJ2023011).
Abstract: Adaptive graph neural networks (AGNNs) have achieved remarkable success in industrial process soft sensing by incorporating explicit features that delineate the relationships between process variables. This article introduces a novel GNN framework, termed entropy-regularized ensemble adaptive graph (E^(2)AG), aimed at enhancing the predictive accuracy of AGNNs. Specifically, this work pioneers a novel AGNN learning approach based on mirror descent, which is central to ensuring the efficiency of the training procedure and guarantees that the learned graph naturally adheres to the row-normalization requirement intrinsic to the message passing of GNNs. Subsequently, motivated by the multi-head self-attention mechanism, the training of ensembled AGNNs is rigorously examined within this framework, incorporating an entropy regularization term in the learning objective to ensure the diversity of the learned graphs. The architecture and training algorithm of the model are then concisely summarized. Finally, to ascertain the efficacy of the proposed E^(2)AG model, extensive experiments are conducted on real-world industrial datasets. The evaluation focuses on prediction accuracy, model efficacy, and sensitivity analysis, demonstrating the superiority of E^(2)AG in industrial soft-sensing applications.
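The row-normalization property mentioned above falls out of the mirror-descent geometry itself. A minimal sketch of one such update (the function names, graph size, and gradient are illustrative assumptions, not the authors' E^(2)AG training algorithm): with a row-wise negative-entropy mirror map, each step is a multiplicative update followed by row normalization, so every row of the learned adjacency remains a probability distribution without any explicit projection machinery.

```python
import numpy as np

def entropy_md_step(A, grad_A, eta):
    """One mirror-descent step on an adjacency matrix with a row-wise
    negative-entropy mirror map: multiplicative update, then row
    normalization, keeping each row on the probability simplex --
    the row-normalized form message passing requires."""
    A = A * np.exp(-eta * grad_A)
    return A / A.sum(axis=1, keepdims=True)

rng = np.random.default_rng(2)
A = np.full((4, 4), 0.25)            # uniform initial 4-node graph
G = rng.standard_normal((4, 4))      # hypothetical gradient of the training loss
A = entropy_md_step(A, G, eta=0.5)
```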
Funding: the National High Technology Research and Development Program of China; the Innovation Fund of the Laboratory of Laser Fusion and Research Center of Laser Fusion (20090604).