Abstract
We consider a nonconvex optimization problem consisting of maximizing the difference of two convex functions. We present a randomized method that requires low computational effort at each iteration: a randomized coordinate descent method applied to the so-called Toland-dual problem. We prove subsequence convergence to dual stationarity points, a new notion that we introduce and show to be tighter than standard criticality. We also establish an almost sure rate of convergence for an optimality measure of the dual sequence. We demonstrate the potential of our results on three principal component analysis models, each of which leads to an extremely simple algorithm.
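For readers unfamiliar with the dual construction mentioned above, the classical Toland duality identity can be stated as follows (a standard formulation, not the paper's notation; $g$ and $h$ are assumed proper, with $h$ closed and convex, and $f^{*}$ denotes the convex conjugate):

$$
\inf_{x}\,\bigl\{\,g(x) - h(x)\,\bigr\} \;=\; \inf_{y}\,\bigl\{\,h^{*}(y) - g^{*}(y)\,\bigr\},
\qquad
f^{*}(y) := \sup_{x}\,\bigl\{\langle y, x\rangle - f(x)\bigr\}.
$$

Maximizing a difference of convex functions is the same problem after a sign flip, so a coordinate descent method run on the right-hand side operates entirely on the dual variables. The snippet below is only a minimal, generic sketch of randomized coordinate descent on a smooth toy objective; it is not the paper's dual algorithm, and the names `randomized_cd`, `grad_coord`, and `lipschitz` are illustrative placeholders.

```python
import numpy as np

def randomized_cd(grad_coord, lipschitz, x0, iters=1000, rng=None):
    """Generic randomized coordinate descent (illustrative sketch only)."""
    rng = np.random.default_rng() if rng is None else rng
    x = x0.copy()
    for _ in range(iters):
        i = rng.integers(len(x))                 # pick one coordinate uniformly at random
        x[i] -= grad_coord(x, i) / lipschitz[i]  # cheap single-coordinate gradient step
    return x

# Toy usage: minimize 0.5 * ||Ax - b||^2 one coordinate at a time.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
grad_i = lambda x, i: A[:, i] @ (A @ x - b)      # i-th partial derivative
L = (A ** 2).sum(axis=0)                         # coordinate-wise Lipschitz constants
print(randomized_cd(grad_i, L, np.zeros(2), iters=5000))
```

Each iteration touches a single coordinate, which is what keeps the per-iteration cost low; the paper's contribution lies in carrying this idea to the nonconvex Toland-dual setting with the guarantees described in the abstract.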
| Field | Value |
|---|---|
| Original language | English |
| Pages (from-to) | 1877-1896 |
| Number of pages | 20 |
| Journal | SIAM Journal on Optimization |
| Volume | 31 |
| Issue number | 3 |
| DOIs | |
| State | Published - 2021 |
Keywords
- Dual coordinate descent
- Stationarity
- Toland duality
All Science Journal Classification (ASJC) codes
- Software
- Theoretical Computer Science