Abstract
Stochastic dual coordinate ascent (SDCA) is an effective technique for solving regularized loss minimization problems in machine learning. This paper considers an extension of SDCA under the mini-batch setting that is often used in practice. Our main contribution is to introduce an accelerated mini-batch version of SDCA and prove a fast convergence rate for this method. We discuss an implementation of our method over a parallel computing system, and compare the results both to the vanilla stochastic dual coordinate ascent method and to the accelerated deterministic gradient descent method of Nesterov [2007].
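For context, the regularized loss minimization problems that SDCA targets are typically written in the following standard primal-dual form; the notation below is an illustrative sketch and may differ in details from the paper's own formulation:

```latex
% Primal: regularized loss minimization over n training examples,
% where \phi_i is the (convex) loss on example i, x_i \in \mathbb{R}^d,
% and \lambda > 0 is the regularization parameter.
\min_{w \in \mathbb{R}^d} \; P(w) \;=\; \frac{1}{n} \sum_{i=1}^{n} \phi_i(w^{\top} x_i) \;+\; \frac{\lambda}{2} \|w\|^2

% Dual: one variable \alpha_i per example, with \phi_i^* the convex
% conjugate of \phi_i.
\max_{\alpha \in \mathbb{R}^n} \; D(\alpha) \;=\; \frac{1}{n} \sum_{i=1}^{n} -\phi_i^*(-\alpha_i) \;-\; \frac{\lambda}{2} \Big\| \frac{1}{\lambda n} \sum_{i=1}^{n} \alpha_i x_i \Big\|^2
```

Roughly speaking, vanilla SDCA maximizes $D(\alpha)$ by updating one randomly chosen dual coordinate $\alpha_i$ per iteration while maintaining $w = \frac{1}{\lambda n} \sum_i \alpha_i x_i$; the mini-batch setting discussed in the abstract instead updates a batch of dual coordinates per iteration, which is what makes a parallel implementation natural.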
| Original language | English |
|---|---|
| Journal | Advances in Neural Information Processing Systems |
| State | Published - 2013 |
| Event | 27th Annual Conference on Neural Information Processing Systems, NIPS 2013, Lake Tahoe, NV, United States; Duration: 5 Dec 2013 → 10 Dec 2013 |
All Science Journal Classification (ASJC) codes
- Computer Networks and Communications
- Information Systems
- Signal Processing