The Complexity of Making the Gradient Small in Stochastic Convex Optimization

Dylan J. Foster, Ayush Sekhari, Ohad Shamir, Nathan Srebro, Karthik Sridharan, Blake Woodworth

Research output: Contribution to journal › Conference article › peer-review

Abstract

We give nearly matching upper and lower bounds on the oracle complexity of finding ε-stationary points (∥∇F(x)∥ ≤ ε) in stochastic convex optimization. We jointly analyze the oracle complexity in both the local stochastic oracle model and the global oracle (or, statistical learning) model. This allows us to decompose the complexity of finding near-stationary points into optimization complexity and sample complexity, and reveals some surprising differences between the complexity of stochastic optimization versus learning. Notably, we show that in the global oracle/statistical learning model, only logarithmic dependence on smoothness is required to find a near-stationary point, whereas polynomial dependence on smoothness is necessary in the local stochastic oracle model. In other words, the separation in complexity between the two models can be exponential, and the folklore understanding that smoothness is required to find stationary points is only weakly true for statistical learning. Our upper bounds are based on extensions of a recent “recursive regularization” technique proposed by Allen-Zhu (2018). We show how to extend the technique to achieve near-optimal rates, and in particular show how to leverage the extra information available in the global oracle model. Our algorithm for the global model can be implemented efficiently through finite sum methods, and suggests an interesting new computational-statistical tradeoff.
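
The upper bounds build on the recursive regularization idea of Allen-Zhu (2018): repeatedly add a quadratic regularizer centered at the latest iterate, with geometrically increasing weight, and solve each regularized subproblem with a stochastic method. The following is a minimal NumPy sketch of that idea, assuming a generic stochastic gradient oracle; the function names, step counts, and constants are illustrative assumptions, not the paper's algorithm or its rates.

```python
import numpy as np


def sgd(stoch_grad, x0, lr, steps, rng):
    """Plain SGD on a stochastic gradient oracle; returns the average iterate."""
    x = x0.copy()
    avg = np.zeros_like(x0)
    for _ in range(steps):
        x = x - lr * stoch_grad(x, rng)
        avg += x / steps
    return avg


def recursive_regularization(stoch_grad_F, x0, rounds=5, lam0=1.0,
                             lr=0.01, steps_per_round=2000, seed=0):
    """Illustrative sketch (not the paper's method): in each round, add a
    quadratic penalty centered at the current iterate, double its weight,
    and run SGD on the accumulated regularized objective."""
    rng = np.random.default_rng(seed)
    centers, lams = [], []                 # accumulated regularizer centers/weights
    x = np.asarray(x0, dtype=float).copy()
    lam = lam0
    for _ in range(rounds):
        centers.append(x.copy())
        lams.append(lam)
        frozen_centers, frozen_lams = tuple(centers), tuple(lams)

        def reg_grad(z, rng, cs=frozen_centers, ls=frozen_lams):
            # gradient of F plus gradients of all accumulated quadratic penalties
            g = stoch_grad_F(z, rng)
            for c, l in zip(cs, ls):
                g = g + l * (z - c)
            return g

        x = sgd(reg_grad, x, lr, steps_per_round, rng)
        lam *= 2.0                         # geometrically increase regularization
    return x


if __name__ == "__main__":
    # Toy example: F(x) = 0.5 * ||A x - b||^2 with a noisy gradient oracle.
    A = np.random.default_rng(1).standard_normal((20, 10))
    b = np.random.default_rng(2).standard_normal(20)

    def noisy_grad(x, rng):
        return A.T @ (A @ x - b) + 0.1 * rng.standard_normal(x.shape)

    x_hat = recursive_regularization(noisy_grad, np.zeros(10))
    print("final gradient norm:", np.linalg.norm(A.T @ (A @ x_hat - b)))
```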

Original language: English
Pages (from-to): 1319-1345
Number of pages: 27
Journal: Proceedings of Machine Learning Research
Volume: 99
State: Published - 2019
Event: 32nd Conference on Learning Theory, COLT 2019 - Phoenix, United States
Duration: 25 Jun 2019 - 28 Jun 2019
https://proceedings.mlr.press/v99

Keywords

  • nonconvex optimization
  • oracle complexity
  • sample complexity
  • stationary point
  • stochastic optimization

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
