Stable Minima Cannot Overfit in Univariate ReLU Networks: Generalization by Large Step Sizes

Dan Qiao, Kaiqi Zhang, Esha Singh, Daniel Soudry, Yu-Xiang Wang

Research output: Contribution to journal › Conference article › peer-review

Abstract

We study the generalization of two-layer ReLU neural networks in a univariate nonparametric regression problem with noisy labels. This is a problem where kernels (e.g., the NTK) are provably sub-optimal and benign overfitting does not happen, thus disqualifying existing theory for interpolating (0-loss, globally optimal) solutions. We present a new theory of generalization for local minima that gradient descent with a constant learning rate can stably converge to. We show that gradient descent with a fixed learning rate η can only find local minima that represent smooth functions with a certain weighted first-order total variation bounded by 1/η − 1/2 + Õ(σ + MSE), where σ is the label noise level, MSE is short for mean squared error against the ground truth, and Õ(·) hides a logarithmic factor. Under mild assumptions, we also prove a nearly-optimal MSE bound of Õ(n^{-4/5}) within the strict interior of the support of the n data points. Our theoretical results are validated by extensive simulations demonstrating that large learning rate training induces sparse linear spline fits. To the best of our knowledge, we are the first to obtain a generalization bound via minima stability in the non-interpolation case and the first to show that ReLU NNs without regularization can achieve near-optimal rates in nonparametric regression.
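The simulation setup described in the abstract is straightforward to reproduce in spirit. Below is a minimal, hypothetical PyTorch sketch (not the authors' code): it fits a two-layer ReLU network to noisy univariate data with full-batch gradient descent at a fixed, comparatively large step size η. The sample size, network width, noise level σ, step count, and learning rate are all illustrative choices, not values from the paper. Under the theory above, any minimum that such large-step-size GD can stably converge to represents a smooth function with bounded weighted first-order total variation, which in practice shows up as a sparse linear spline fit rather than an interpolation of the noisy labels.

```python
# Illustrative sketch only: two-layer ReLU net, univariate noisy regression,
# full-batch gradient descent with a fixed (large) learning rate eta.
import torch

torch.manual_seed(0)

# Synthetic univariate data: smooth ground truth plus Gaussian label noise.
n, sigma = 200, 0.3                      # sample size and noise level (illustrative)
x = torch.linspace(-1.0, 1.0, n).unsqueeze(1)
y = torch.sin(3.0 * x) + sigma * torch.randn_like(x)

# Two-layer ReLU network: f(x) = sum_j a_j * relu(w_j * x + b_j) + c.
width = 100                              # hidden width (illustrative)
model = torch.nn.Sequential(
    torch.nn.Linear(1, width),
    torch.nn.ReLU(),
    torch.nn.Linear(width, 1),
)

# Fixed, comparatively large step size; if the loss blows up, the step
# exceeds what GD can stabilize on this instance, so reduce eta.
eta = 0.1
opt = torch.optim.SGD(model.parameters(), lr=eta)
loss_fn = torch.nn.MSELoss()

# Full-batch gradient descent.
for step in range(5000):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

with torch.no_grad():
    grid = torch.linspace(-1.0, 1.0, 1000).unsqueeze(1)
    pred = model(grid)
    # Crude proxy for the first-order total variation of the learned function:
    # total variation of its finite-difference derivative on a dense grid.
    h = grid[1] - grid[0]
    deriv = (pred[1:] - pred[:-1]) / h
    tv = (deriv[1:] - deriv[:-1]).abs().sum()
    print("final training MSE:", loss.item())
    print("approx. first-order total variation of fit:", tv.item())
```

If GD stabilizes at the chosen η, the learned function should visibly track the ground truth with only a few kinks (a sparse linear spline) rather than chasing the noisy labels; plotting `pred` against `grid` alongside the data makes this easy to inspect.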

Original language: English
Journal: Advances in Neural Information Processing Systems
Volume: 37
State: Published - 2024
Event: 38th Conference on Neural Information Processing Systems, NeurIPS 2024 - Vancouver, Canada
Duration: 9 Dec 2024 → 15 Dec 2024

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing
