Limitations on variance-reduction and acceleration schemes for finite sum optimization

Research output: Contribution to journal › Conference article › peer-review

Abstract

We study the conditions under which one is able to efficiently apply variance-reduction and acceleration schemes to finite sum optimization problems. First, we show that, perhaps surprisingly, the finite sum structure by itself is not sufficient for obtaining a complexity bound of Õ((n + L/μ) ln(1/ϵ)) for L-smooth and μ-strongly convex individual functions; one must also know which individual function is being referred to by the oracle at each iteration. Next, we show that for a broad class of first-order and coordinate-descent finite sum algorithms (including, e.g., SDCA, SVRG, and SAG), it is not possible to obtain an 'accelerated' complexity bound of Õ((n + √(nL/μ)) ln(1/ϵ)) unless the strong convexity parameter is given explicitly. Lastly, we show that when this class of algorithms is used for minimizing L-smooth and convex finite sums, the iteration complexity is bounded from below by Ω(n + L/ϵ), assuming that (on average) the same update rule is used in every iteration, and by Ω(n + √(nL/ϵ)) otherwise.
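
To make the variance-reduction mechanism discussed in the abstract concrete, below is a minimal SVRG sketch (one of the algorithms the abstract names). This is an illustrative textbook sketch, not the paper's construction: the function `grad_i`, the step size 1/(10L), and the inner-loop length 2n are standard but assumed choices. Two details connect it to the abstract's results: the update queries the sampled index i at both the current point and the snapshot, which is only possible when the oracle reveals which individual function it evaluates (the knowledge the first result shows is necessary), and the step size depends only on L, not on μ, which is consistent with the second result that this family cannot reach the accelerated √(nL/μ) rate without using μ explicitly.

```python
import numpy as np

def svrg(grad_i, x0, n, L, epochs=20, seed=0):
    """Minimal SVRG sketch for f(x) = (1/n) * sum_i f_i(x).

    grad_i(i, x) returns the gradient of the i-th individual function.
    Step size 1/(10L) and inner-loop length 2n are common textbook
    choices for L-smooth components; they are assumptions here, not
    parameters taken from the paper.
    """
    rng = np.random.default_rng(seed)
    x_snap = np.asarray(x0, dtype=float).copy()
    step = 1.0 / (10.0 * L)
    m = 2 * n  # inner iterations per epoch
    for _ in range(epochs):
        # Full gradient at the snapshot: the variance-reduction anchor.
        full_grad = np.mean([grad_i(i, x_snap) for i in range(n)], axis=0)
        x = x_snap.copy()
        for _ in range(m):
            i = rng.integers(n)
            # Variance-reduced stochastic gradient: unbiased, with variance
            # shrinking as x and x_snap approach the optimum. Note that the
            # same index i is queried at both x and x_snap.
            g = grad_i(i, x) - grad_i(i, x_snap) + full_grad
            x = x - step * g
        x_snap = x
    return x_snap

# Hypothetical usage on a tiny least-squares sum, f_i(x) = 0.5*(a_i @ x - b_i)**2,
# where each component's smoothness constant is ||a_i||^2.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((50, 5)), rng.standard_normal(50)
g = lambda i, x: (A[i] @ x - b[i]) * A[i]
x_hat = svrg(g, np.zeros(5), n=50, L=float(np.max(np.sum(A**2, axis=1))))
```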

Original language: English
Pages (from-to): 3541-3550
Number of pages: 10
Journal: Advances in Neural Information Processing Systems
Volume: 2017-December
State: Published - 2017
Externally published: Yes
Event: 31st Annual Conference on Neural Information Processing Systems, NIPS 2017 - Long Beach, United States
Duration: 4 Dec 2017 - 9 Dec 2017

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing
