Abstract
We resolve the min-max complexity of distributed stochastic convex optimization (up to a logarithmic factor) in the intermittent communication setting, where M machines work in parallel over the course of R rounds of communication to optimize the objective, and during each round, each machine may sequentially compute K stochastic gradient estimates. We present a novel lower bound together with a matching upper bound, which establishes an optimal algorithm.
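To make the intermittent communication setting concrete, the following is a minimal sketch of an algorithm operating within it: Local SGD, one natural instance of the class the paper studies (not necessarily the optimal algorithm the paper establishes). Each of M machines takes K sequential stochastic gradient steps per round, and the iterates are averaged at each of the R communication rounds. The toy quadratic objective, the noise model, and the step size `eta` are all assumptions chosen for illustration.

```python
# A minimal sketch of the intermittent communication setting, assuming a toy
# quadratic objective F(x) = 0.5 * ||x - x_star||^2 with a Gaussian stochastic
# gradient oracle. Local SGD is used for illustration only; it is not claimed
# to be the optimal algorithm from the paper.
import numpy as np

rng = np.random.default_rng(0)
d = 10        # problem dimension (assumed)
M = 4         # number of parallel machines
R = 20        # rounds of communication
K = 5         # stochastic gradient steps per machine per round
eta = 0.1     # step size (assumed)
sigma = 0.5   # stochastic gradient noise level (assumed)

x_star = rng.normal(size=d)  # minimizer of the toy objective

def stochastic_gradient(x):
    """Unbiased stochastic gradient estimate of F(x) = 0.5 * ||x - x_star||^2."""
    return (x - x_star) + sigma * rng.normal(size=d)

x = np.zeros(d)  # shared iterate, synchronized at every communication round
for r in range(R):
    local_iterates = []
    for m in range(M):            # M machines work in parallel (simulated sequentially)
        x_m = x.copy()
        for k in range(K):        # K sequential stochastic gradient computations per round
            x_m -= eta * stochastic_gradient(x_m)
        local_iterates.append(x_m)
    # communication round: average the M local iterates
    x = np.mean(local_iterates, axis=0)

print("suboptimality:", 0.5 * np.sum((x - x_star) ** 2))
```

Under this protocol each machine issues K oracle calls between consecutive communications, so the total oracle budget is MKR, which is exactly the resource trade-off (M versus K versus R) that the min-max analysis characterizes.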
| Original language | English |
|---|---|
| Pages (from-to) | 4386-4437 |
| Number of pages | 52 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 134 |
| State | Published - 2021 |
| Event | 34th Conference on Learning Theory, COLT 2021 - Boulder, United States |
| Duration | 15 Aug 2021 → 19 Aug 2021 |
Keywords
- Distributed Stochastic Convex Optimization
- Oracle Complexity of Optimization
All Science Journal Classification (ASJC) codes
- Artificial Intelligence
- Software
- Control and Systems Engineering
- Statistics and Probability