Stochastic Dual Coordinate Ascent methods for regularized loss minimization

Research output: Contribution to journal › Article › peer-review

Abstract

Stochastic Gradient Descent (SGD) has become popular for solving large-scale supervised machine learning optimization problems such as SVM, due to its strong theoretical guarantees. While the closely related Dual Coordinate Ascent (DCA) method has been implemented in various software packages, it has so far lacked a good convergence analysis. This paper presents a new analysis of Stochastic Dual Coordinate Ascent (SDCA), showing that this class of methods enjoys strong theoretical guarantees that are comparable to or better than those of SGD. This analysis justifies the effectiveness of SDCA for practical applications.
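The method analyzed in the paper repeatedly picks a training example at random and maximizes the dual objective with respect to that example's single dual variable. Below is a minimal Python sketch of this idea for the L2-regularized linear SVM (hinge loss), where the coordinate maximization has a closed form; the function name `sdca_svm`, the fixed epoch count, and the default regularization value are illustrative assumptions rather than details from the paper.

```python
import numpy as np

def sdca_svm(X, y, lam=0.01, epochs=20, seed=0):
    """Minimal SDCA sketch for an L2-regularized linear SVM (hinge loss).

    Dual variables satisfy y_i * alpha_i in [0, 1]; the primal weights are
    maintained as w = (1 / (lam * n)) * sum_i alpha_i * x_i.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)
    w = np.zeros(d)
    sq_norms = (X ** 2).sum(axis=1)

    for _ in range(epochs):
        for i in rng.permutation(n):
            if sq_norms[i] == 0.0:
                continue
            # Closed-form maximization of the dual objective in coordinate i
            # for the hinge loss: clip the unconstrained optimum to [0, 1].
            margin = 1.0 - y[i] * (w @ X[i])
            u = margin * lam * n / sq_norms[i] + alpha[i] * y[i]
            delta = y[i] * min(1.0, max(0.0, u)) - alpha[i]
            # Keep w consistent with the updated dual variable.
            alpha[i] += delta
            w += (delta / (lam * n)) * X[i]
    return w, alpha
```

In the paper, the duality gap between the primal and dual objectives serves as an optimality certificate, so it is the natural stopping criterion; the fixed number of epochs above is only for brevity.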

Original language: English
Pages (from-to): 567-599
Number of pages: 33
Journal: Journal of Machine Learning Research
Volume: 14
Issue number: 1
State: Published - Feb 2013

Keywords

  • Computational complexity
  • Logistic regression
  • Optimization
  • Regularized loss minimization
  • Ridge regression
  • Stochastic Dual Coordinate Ascent
  • Support vector machines

All Science Journal Classification (ASJC) codes

  • Software
  • Artificial Intelligence
  • Control and Systems Engineering
  • Statistics and Probability
