A stochastic conditional gradient algorithm for decentralized online convex optimization

Journal of Parallel and Distributed Computing (2022)

Abstract
In this paper, we study several problems at the intersection of decentralized optimization and online learning. Decentralized optimization plays a vital role in machine learning and has recently garnered much attention due to its inherent advantage in handling edge computations. Many decentralized optimization algorithms with theoretical guarantees, both projection-based and projection-free, have been proposed in the literature, focusing mainly on offline settings. However, in most real-world machine learning problems the data is revealed online, as in recommender systems, image/video processing, and stock portfolio management. Therefore, in this work we study decentralized optimization in the online setting, with constraints imposed on the optimization solutions (e.g., sparsity or low rank of matrices). More specifically, we consider the problem of optimizing an aggregate of convex loss functions that arrive over time and whose components are distributed over a connected network. We present a consensus-based online decentralized Frank-Wolfe algorithm that uses stochastic gradient estimates and achieves an asymptotically tight regret guarantee of O(√T), where T is the time horizon. Furthermore, we demonstrate the performance of this algorithm on online multiclass logistic regression over standard image datasets (MNIST, CIFAR10), comparing against centralized online algorithms. We achieve better regret bounds than the previously best-known decentralized constrained online algorithms.
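
The abstract describes a consensus-based online decentralized Frank-Wolfe update driven by stochastic gradient estimates. The following is a minimal Python sketch of that general template, not the paper's exact algorithm: the mixing matrix W, the step-size schedules, the l1-ball linear minimization oracle, and the toy quadratic losses are all illustrative assumptions.

# A minimal sketch of a consensus-based decentralized online Frank-Wolfe step
# with stochastic gradients. The mixing matrix W, step sizes, l1-ball LMO,
# and toy quadratic losses are illustrative assumptions, not the paper's
# exact construction.
import numpy as np

def lmo_l1_ball(grad, radius=1.0):
    """Linear minimization oracle over the l1-ball: argmin_{||v||_1 <= r} <grad, v>."""
    k = np.argmax(np.abs(grad))
    v = np.zeros_like(grad)
    v[k] = -radius * np.sign(grad[k])
    return v

def decentralized_online_fw(W, loss_grads, T, dim, radius=1.0, seed=0):
    """Run T online rounds on n nodes linked by a doubly stochastic mixing matrix W.

    loss_grads(t, i, x, rng) must return a stochastic gradient estimate of
    node i's round-t loss evaluated at x.
    """
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    x = np.zeros((n, dim))      # local iterates
    d = np.zeros((n, dim))      # local gradient trackers
    for t in range(1, T + 1):
        gamma = t ** (-0.5)     # assumed Frank-Wolfe step size
        rho = t ** (-0.5)       # assumed averaging weight for the gradient tracker
        # 1) consensus on iterates and gradient trackers
        x_mix = W @ x
        d_mix = W @ d
        # 2) stochastic gradient estimates at the mixed iterates
        g = np.stack([loss_grads(t, i, x_mix[i], rng) for i in range(n)])
        d = (1 - rho) * d_mix + rho * g
        # 3) linear minimization + projection-free Frank-Wolfe update
        v = np.stack([lmo_l1_ball(d[i], radius) for i in range(n)])
        x = (1 - gamma) * x_mix + gamma * v
    return x

if __name__ == "__main__":
    # toy example: 4 nodes on a ring, noisy quadratic losses
    n, dim, T = 4, 10, 500
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 0.5
        W[i, (i + 1) % n] = 0.25
        W[i, (i - 1) % n] = 0.25
    targets = np.random.default_rng(1).normal(size=(n, dim)) * 0.1

    def loss_grads(t, i, x, rng):
        # gradient of 0.5 * ||x - target_i||^2 plus zero-mean noise
        return (x - targets[i]) + 0.1 * rng.normal(size=x.shape)

    x_final = decentralized_online_fw(W, loss_grads, T, dim)
    print("final iterates (one row per node):")
    print(np.round(x_final, 3))
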
Keywords
Online learning, Distributed learning, Edge computing, Machine learning