Characterizing implicit bias in terms of optimization geometry

Suriya Gunasekar, Jason Lee, Daniel Soudry, Nathan Srebro

Research output: Contribution to journal › Conference article › peer-review


We study the implicit bias of generic optimization methods, including mirror descent, natural gradient descent, and steepest descent with respect to different potentials and norms, when optimizing underdetermined linear regression or separable linear classification problems. We explore the question of whether the specific global minimum (among the many possible global minima) reached by optimization can be characterized in terms of the potential or norm of the optimization geometry, and independently of hyperparameter choices such as step size and momentum.
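The phenomenon the abstract describes can be seen in a minimal sketch (an illustration under assumed settings, not the paper's experiments): plain gradient descent on an underdetermined least-squares problem, initialized at zero, converges to a particular global minimum — the minimum Euclidean-norm interpolating solution — even though infinitely many interpolating solutions exist.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 20))  # underdetermined: 5 equations, 20 unknowns
b = rng.standard_normal(5)

# Gradient descent on the squared loss ||Ax - b||^2, started at zero.
x = np.zeros(20)
lr = 0.01  # small enough for convergence given A's spectrum
for _ in range(50000):
    x -= lr * A.T @ (A @ x - b)

# The minimum-l2-norm interpolating solution, via the pseudoinverse.
x_min_norm = np.linalg.pinv(A) @ b

# x stays in the row space of A throughout, so among all interpolating
# solutions, gradient descent selects the one closest to the origin:
# x should now agree with x_min_norm up to numerical tolerance.
```

The step size and iteration count here are illustrative choices; the paper's point is that the selected minimum depends on the geometry of the method (e.g. the potential in mirror descent, or the norm in steepest descent), not on such hyperparameters.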

Original language: English
Pages (from-to): 2932-2955
Number of pages: 24
Journal: Proceedings of Machine Learning Research
State: Published - 2018
Event: 35th International Conference on Machine Learning, ICML 2018 - Stockholm, Sweden
Duration: 10 Jul 2018 - 15 Jul 2018

All Science Journal Classification (ASJC) codes

  • Computational Theory and Mathematics
  • Human-Computer Interaction
  • Software

