Limitations on Variance-Reduction and Acceleration Schemes for Finite Sum Optimization

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We study the conditions under which one is able to efficiently apply variance-reduction and acceleration schemes to finite sum optimization problems. First, we show that, perhaps surprisingly, the finite sum structure by itself is not sufficient for obtaining a complexity bound of $\tilde{O}((n + L/\mu)\ln(1/\epsilon))$ for $L$-smooth and $\mu$-strongly convex individual functions: one must also know which individual function is being referred to by the oracle at each iteration. Next, we show that for a broad class of first-order and coordinate-descent finite sum algorithms (including, e.g., SDCA, SVRG, and SAG), it is not possible to obtain an 'accelerated' complexity bound of $\tilde{O}((n + \sqrt{nL/\mu})\ln(1/\epsilon))$ unless the strong convexity parameter is given explicitly. Lastly, we show that when this class of algorithms is used for minimizing $L$-smooth and convex finite sums, the iteration complexity is bounded from below by $\Omega(n + L/\epsilon)$, assuming that (on average) the same update rule is used in every iteration, and by $\Omega(n + \sqrt{nL/\epsilon})$ otherwise.
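For context, the finite-sum problems referred to above take the standard form (this notation is ours, supplied for readability rather than drawn from the record):

\[ \min_{x \in \mathbb{R}^d} F(x) = \frac{1}{n} \sum_{i=1}^{n} f_i(x), \]

where each individual function $f_i$ is $L$-smooth and, in the strongly convex setting, $\mu$-strongly convex. The gap between $\tilde{O}((n + L/\mu)\ln(1/\epsilon))$, attained for instance by SVRG and SAG, and the accelerated $\tilde{O}((n + \sqrt{nL/\mu})\ln(1/\epsilon))$ is precisely an improvement from $L/\mu$ to $\sqrt{nL/\mu}$ in the dependence on the condition number.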

Original language: English
Title of host publication: Advances in Neural Information Processing Systems 30 (NIPS 2017)
Editors: I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, R. Garnett
Number of pages: 10
State: Published - 2017
Event: 31st Conference on Neural Information Processing Systems - Long Beach Convention Center, Long Beach, United States
Duration: 4 Dec 2017 – 9 Dec 2017
Conference number: 31st

Publication series

Name: Advances in Neural Information Processing Systems
Volume: 30
ISSN (Print): 1049-5258

Conference

Conference: 31st Conference on Neural Information Processing Systems
Abbreviated title: NIPS'17
Country/Territory: United States
City: Long Beach
Period: 4/12/17 – 9/12/17
