Abstract
Stochastic Gradient Descent (SGD) has become popular for solving large-scale supervised machine learning optimization problems such as SVM, due to its strong theoretical guarantees. While the closely related Dual Coordinate Ascent (DCA) method has been implemented in various software packages, it has so far lacked a good convergence analysis. This paper presents a new analysis of Stochastic Dual Coordinate Ascent (SDCA), showing that this class of methods enjoys strong theoretical guarantees that are comparable to or better than those of SGD. This analysis justifies the effectiveness of SDCA for practical applications.
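The abstract describes SDCA only at a high level. As a concrete illustration, below is a minimal sketch of the closed-form SDCA update for an L2-regularized hinge-loss SVM, one of the settings analyzed in the paper. The function name `sdca_svm` and its parameters are hypothetical, not the authors' reference code; the coordinate update follows the standard closed-form hinge-loss maximization of the dual, with the primal iterate `w` kept in sync with the dual variables.

```python
import numpy as np

def sdca_svm(X, y, lam=0.01, epochs=10, seed=0):
    """Minimal SDCA sketch for an L2-regularized hinge-loss SVM.

    Hypothetical helper for illustration only. Maintains dual variables
    alpha with w = X^T alpha / (lam * n), and at each step maximizes the
    dual objective over one randomly chosen coordinate (closed form for
    the hinge loss, with the constraint alpha_i * y_i in [0, 1]).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)  # dual variables; alpha_i * y_i stays in [0, 1]
    w = np.zeros(d)      # primal iterate, kept equal to X^T alpha / (lam * n)
    for _ in range(epochs):
        for i in rng.permutation(n):
            xi, yi = X[i], y[i]
            norm_sq = xi @ xi
            if norm_sq == 0.0:
                continue
            # Closed-form coordinate-wise dual maximization for hinge loss.
            margin = 1.0 - yi * (xi @ w)
            delta = yi * max(0.0, min(1.0,
                        margin / (norm_sq / (lam * n)) + alpha[i] * yi)) - alpha[i]
            alpha[i] += delta
            w += (delta / (lam * n)) * xi  # keep w consistent with alpha
    return w, alpha
```

A toy usage on separable data (values chosen arbitrarily for illustration):

```python
X = np.array([[2.0, 1.0], [1.0, 2.0], [-1.5, -1.0], [-1.0, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w, _ = sdca_svm(X, y, lam=0.1, epochs=50)
print(np.sign(X @ w))  # should print [ 1.  1. -1. -1.]
```

Maintaining `w` incrementally alongside `alpha` is what makes each coordinate step cost O(d) rather than requiring a full pass over the data, which is the property that lets SDCA's per-iteration cost match that of SGD.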
| Original language | English |
| --- | --- |
| Pages (from-to) | 567-599 |
| Number of pages | 33 |
| Journal | Journal of Machine Learning Research |
| Volume | 14 |
| Issue number | 1 |
| State | Published - Feb 2013 |
Keywords
- Computational complexity
- Logistic regression
- Optimization
- Regularized loss minimization
- Ridge regression
- Stochastic Dual Coordinate Ascent
- Support vector machines
All Science Journal Classification (ASJC) codes
- Software
- Artificial Intelligence
- Control and Systems Engineering
- Statistics and Probability