Journal Articles (2 articles found)
1. Distributed gradient-free and projection-free algorithm for stochastic constrained optimization
Authors: Jie Hou, Xianlin Zeng, Chen Chen. Autonomous Intelligent Systems, 2024, Issue 1, pp. 320–337 (18 pages)
Distributed stochastic zeroth-order optimization (DSZO), in which the objective function is allocated over multiple agents and the derivative of cost functions is unavailable, arises frequently in large-scale machine learning and reinforcement learning. This paper introduces a distributed stochastic algorithm for DSZO in a projection-free and gradient-free manner via the Frank-Wolfe framework and the stochastic zeroth-order oracle (SZO). Such a scheme is particularly useful in large-scale constrained optimization problems where calculating gradients or projection operators is impractical or costly, or where the objective function is not differentiable everywhere. Specifically, the proposed algorithm, enhanced by recursive momentum and gradient tracking techniques, guarantees convergence with just a single batch per iteration. This significant improvement over existing algorithms substantially lowers the computational complexity. Under mild conditions, we prove that the complexity bounds on SZO of the proposed algorithm are \(O(n/\epsilon^{2})\) and \(O(n^{2}/\epsilon)\) for convex and nonconvex cases, respectively. The efficacy of the algorithm is verified on black-box binary classification problems against several competing alternatives.
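To make the abstract's mechanics concrete, the following is a minimal single-agent Python sketch of a zeroth-order Frank-Wolfe step: a two-point finite-difference gradient estimate standing in for the SZO, a momentum-averaged direction, and a closed-form linear minimization oracle over an \(\ell_1\)-ball in place of a projection. All names, the constraint set, the estimator, and the step sizes are illustrative assumptions, not the paper's exact recursive-momentum and gradient-tracking scheme.

import numpy as np

def szo_gradient(f, x, num_dirs=10, mu=1e-4, rng=None):
    """Two-point zeroth-order gradient estimate of f at x: average
    directional finite differences along random Gaussian directions,
    using only function evaluations (the SZO)."""
    rng = np.random.default_rng() if rng is None else rng
    g = np.zeros(x.size)
    for _ in range(num_dirs):
        u = rng.standard_normal(x.size)
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / num_dirs

def lmo_l1_ball(g, radius=1.0):
    """Linear minimization oracle over the l1-ball: the minimizer of
    <g, s> is a signed vertex along the largest |g_i| coordinate.
    This closed form is what makes the step projection-free."""
    s = np.zeros_like(g)
    i = np.argmax(np.abs(g))
    s[i] = -radius * np.sign(g[i])
    return s

def zeroth_order_frank_wolfe(f, x0, iters=200, radius=1.0, seed=0):
    """Gradient-free, projection-free Frank-Wolfe loop with a simple
    momentum average of gradient estimates (a simplified stand-in
    for recursive momentum)."""
    rng = np.random.default_rng(seed)
    x, d = x0.copy(), np.zeros_like(x0)
    for t in range(1, iters + 1):
        rho = 1.0 / t**0.5               # momentum weight (illustrative)
        d = (1 - rho) * d + rho * szo_gradient(f, x, rng=rng)
        s = lmo_l1_ball(d, radius)       # FW direction from the LMO
        gamma = 2.0 / (t + 2)            # classical FW step size
        x = x + gamma * (s - x)          # convex combination stays feasible
    return x

# Usage: minimize a smooth quadratic over the l1-ball using only f-values.
if __name__ == "__main__":
    A = np.diag([1.0, 4.0, 9.0])
    f = lambda x: 0.5 * x @ A @ x - x.sum()
    x_star = zeroth_order_frank_wolfe(f, x0=np.zeros(3), iters=500)
    print(x_star, f(x_star))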
Keywords: Zeroth-order optimization; Projection-free method; Stochastic constrained optimization; Distributed optimization
2. Gradient-free distributed online optimization in networks
Authors: Yuhang Liu, Wenxiao Zhao, Nan Zhang, Dongdong Lv, Shuai Zhang. Control Theory and Technology, 2025, Issue 2, pp. 207–220 (14 pages)
In this paper, we consider the distributed online optimization problem on a time-varying network, where each agent on the network has its own time-varying objective function and the goal is to minimize the overall accumulated loss. Moreover, we focus on distributed algorithms that use neither gradient information nor projection operators, so as to improve applicability and computational efficiency. By introducing deterministic differences and randomized differences to substitute for the gradient information of the objective functions, and by removing the projection operator of traditional algorithms, we design two kinds of gradient-free distributed online optimization algorithms without a projection step, which save considerable computational resources and place fewer limitations on applicability. We prove that both algorithms achieve consensus of the estimates and a regret of \(O\left(\log(T)\right)\) for strongly convex local objectives. Finally, a simulation example is provided to verify the theoretical results.
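As an illustration of the gradient-free, projection-free idea in this abstract, here is a minimal Python sketch of one round of a consensus-plus-randomized-difference scheme on a fixed network. The mixing matrix, gain sequences, and quadratic losses are hypothetical stand-ins; the paper's two algorithms, its time-varying network, and its regret analysis are not reproduced here.

import numpy as np

def randomized_difference(f_t, x, delta, rng):
    """Randomized-difference gradient estimate of f_t at x: perturb
    along a random unit direction and difference the two function
    values, so no gradient oracle is required."""
    u = rng.standard_normal(x.size)
    u /= np.linalg.norm(u)
    return (f_t(x + delta * u) - f_t(x - delta * u)) / (2 * delta) * u

def distributed_online_step(X, W, losses_t, t, rng):
    """One round of consensus followed by a gradient-free step.

    X        : (num_agents, n) current estimates, one row per agent
    W        : doubly stochastic mixing matrix of the network
    losses_t : each agent's local loss function at time t
    """
    X_mixed = W @ X                        # consensus: mix with neighbors
    alpha = 1.0 / (t + 1)                  # decaying step size (illustrative)
    delta = 1.0 / (t + 1) ** 0.25          # decaying difference gain (illustrative)
    for i, f_t in enumerate(losses_t):
        g = randomized_difference(f_t, X_mixed[i], delta, rng)
        X_mixed[i] -= alpha * g            # unconstrained step: no projection
    return X_mixed

# Usage: 4 agents on a ring network tracking slowly drifting quadratic losses.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = np.array([[0.50, 0.25, 0.00, 0.25],
                  [0.25, 0.50, 0.25, 0.00],
                  [0.00, 0.25, 0.50, 0.25],
                  [0.25, 0.00, 0.25, 0.50]])   # doubly stochastic ring
    X = np.zeros((4, 2))
    for t in range(1, 201):
        targets = [np.array([np.sin(0.01 * t) + i, 1.0]) for i in range(4)]
        losses_t = [lambda x, c=c: np.sum((x - c) ** 2) for c in targets]
        X = distributed_online_step(X, W, losses_t, t, rng)
    print(X)   # rows should be close to each other (consensus)

The decaying step size mirrors the usual requirement for logarithmic regret under strong convexity, while the decaying difference gain controls the bias of the randomized-difference estimate.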
Keywords: Distributed optimization; Online convex optimization; Gradient-free algorithm; Projection-free algorithm