Abstract
We introduce the notion of predicted decrease approximation (PDA) for constrained convex optimization, a flexible framework that includes as special cases known algorithms such as the generalized conditional gradient, proximal gradient, greedy coordinate descent for separable constraints, and working set methods for linear equality constraints with bounds. The new scheme allows the development of a unified convergence analysis for these methods. We further consider a partially strongly convex nonsmooth model and show that dual application of PDA-based methods yields new sublinear convergence rate estimates in terms of both primal and dual objectives. As an example application, we provide an explicit working set selection rule for SMO-type methods for training the support vector machine, with an improved primal convergence analysis.
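The PDA framework itself is stated abstractly in the paper; as a concrete point of reference, the following is a minimal sketch of its best-known special case, the classical conditional gradient (Frank-Wolfe) method. The choice of the unit simplex as feasible set, the open-loop step size 2/(t+2), and all function names here are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, n_iters=100):
    """Classical conditional gradient (Frank-Wolfe) over the unit simplex.

    grad: callable returning the gradient of a smooth convex objective at x.
    The linear minimization oracle over the simplex is trivial: it returns
    the vertex e_i whose gradient coordinate is most negative.
    """
    x = x0.copy()
    for t in range(n_iters):
        g = grad(x)
        i = np.argmin(g)                 # LMO: argmin over vertices of <g, s>
        s = np.zeros_like(x)
        s[i] = 1.0
        gamma = 2.0 / (t + 2.0)          # standard open-loop step size
        x = (1.0 - gamma) * x + gamma * s
    return x

# Example: minimize f(x) = ||x - b||^2 over the simplex; since b lies in
# the simplex, the iterates approach b itself.
b = np.array([0.2, 0.5, 0.3])
x = frank_wolfe_simplex(lambda x: 2.0 * (x - b), np.ones(3) / 3)
```

In the paper's terminology, the inner product between the gradient and the oracle's direction provides the "predicted decrease" that the PDA condition relaxes, which is what lets one analysis cover the other listed methods as well.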
| Original language | English |
|---|---|
| Pages (from-to) | 37–73 |
| Number of pages | 37 |
| Journal | Mathematical Programming |
| Volume | 167 |
| Issue number | 1 |
| DOIs | |
| State | Published - 1 Jan 2018 |
Keywords
- Approximate linear oracles
- Conditional gradient algorithm
- Primal–dual methods
- Working set methods
All Science Journal Classification (ASJC) codes
- Software
- General Mathematics