Initialization-Dependent Sample Complexity of Linear Predictors and Neural Networks

Roey Magen, Ohad Shamir

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

We provide several new results on the sample complexity of vector-valued linear predictors (parameterized by a matrix), and more generally, neural networks. Focusing on size-independent bounds, where only the Frobenius norm distance of the parameters from some fixed reference matrix W0 is controlled, we show that the sample complexity behavior can be surprisingly different from what one might expect based on the well-studied setting of scalar-valued linear predictors. This also leads to new sample complexity bounds for feed-forward neural networks, tackling some open questions in the literature, and establishing a new convex linear prediction problem that is provably learnable without uniform convergence.
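For context, the size-independent setting described above can be illustrated as follows. This is a minimal sketch assuming standard notation (the symbols d, k, B and the exact class definition are illustrative, not taken verbatim from the paper): the hypothesis class consists of matrix-parameterized linear predictors

    \mathcal{F} \;=\; \{\, x \mapsto W x \;:\; W \in \mathbb{R}^{k \times d},\ \|W - W_0\|_F \le B \,\},

where W_0 is the fixed reference matrix and B bounds the Frobenius-norm distance of the parameters from it. "Size-independent" means the sample complexity bounds are stated in terms of B (and norms of the data) rather than the dimensions k and d.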

Original language: English
Title of host publication: Advances in Neural Information Processing Systems
Subtitle of host publication: NeurIPS 2023
Editors: A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, S. Levine
Pages: 7632-7658
Number of pages: 27
Volume: 36
State: Published - 2023
Event: 37th Conference on Neural Information Processing Systems, NeurIPS 2023 - New Orleans, United States
Duration: 10 Dec 2023 - 16 Dec 2023

Publication series

Name: Advances in Neural Information Processing Systems
ISSN (Print): 1049-5258

Conference

Conference: 37th Conference on Neural Information Processing Systems, NeurIPS 2023
Country/Territory: United States
City: New Orleans
Period: 10/12/23 - 16/12/23

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing
