High Probability Bounds for a Class of Nonconvex Algorithms with AdaGrad Stepsize

Ali Kavis, Kfir Y. Levy, Volkan Cevher

Research output: Contribution to conference › Paper › peer-review

Abstract

In this paper, we propose a new, simplified high-probability analysis of AdaGrad for smooth, non-convex problems. More specifically, we focus on a particular accelerated gradient (AGD) template (Lan, 2020), through which we recover the original AdaGrad and its variant with averaging, and prove a convergence rate of O(1/√T) with high probability without knowledge of the smoothness constant or the noise variance. We use a particular version of Freedman's concentration bound for martingale difference sequences (Kakade & Tewari, 2008), which enables us to achieve the best-known dependence of log(1/δ) on the probability margin δ. We present our analysis in a modular way and obtain a complementary O(1/T) convergence rate in the deterministic setting. To the best of our knowledge, this is the first high-probability result for AdaGrad with a truly adaptive scheme, i.e., completely oblivious to the smoothness constant and a uniform variance bound, that simultaneously achieves the best-known log(1/δ) dependence. We further prove a noise-adaptation property of AdaGrad under additional noise assumptions.
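As a rough illustration of the adaptive stepsize the abstract refers to, the following is a minimal sketch of the scalar ("norm") variant of AdaGrad, in which the stepsize shrinks with the accumulated squared gradient norms and requires no knowledge of smoothness or variance. It is not the authors' exact AGD template; the function and parameter names are assumptions made for the example.

    import numpy as np

    def adagrad_norm(grad_fn, x0, eta=1.0, b0=1e-8, T=1000):
        """Scalar AdaGrad sketch: x_{t+1} = x_t - eta / sqrt(b0 + sum_s ||g_s||^2) * g_t."""
        x = np.asarray(x0, dtype=float)
        accum = b0  # running sum of squared gradient norms; b0 only guards against division by zero
        for _ in range(T):
            g = grad_fn(x)                      # stochastic gradient estimate at the current iterate
            accum += float(np.dot(g, g))        # accumulate ||g_t||^2
            x = x - (eta / np.sqrt(accum)) * g  # stepsize adapts with no smoothness/variance input
        return x

    # Toy usage: noisy gradients of f(x) = 0.5 * ||x||^2
    rng = np.random.default_rng(0)
    noisy_grad = lambda x: x + 0.1 * rng.standard_normal(x.shape)
    x_final = adagrad_norm(noisy_grad, x0=np.ones(5), eta=1.0, T=2000)
    print("norm of final iterate:", np.linalg.norm(x_final))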

Original language: English
State: Published - 2022
Event: 10th International Conference on Learning Representations, ICLR 2022 - Virtual, Online
Duration: 25 Apr 2022 → 29 Apr 2022

Conference

Conference: 10th International Conference on Learning Representations, ICLR 2022
City: Virtual, Online
Period: 25/04/22 → 29/04/22

All Science Journal Classification (ASJC) codes

  • Education
  • Language and Linguistics
  • Computer Science Applications
  • Linguistics and Language
