Sparse algorithms are not stable: A no-free-lunch theorem

Huan Xu, Constantine Caramanis, Shie Mannor

Research output: Contribution to journal › Article › peer-review

Abstract

We consider two desired properties of learning algorithms: sparsity and algorithmic stability. Both properties are believed to lead to good generalization ability. We show that these two properties are fundamentally at odds with each other: A sparse algorithm cannot be stable and vice versa. Thus, one has to trade off sparsity and stability in designing a learning algorithm. In particular, our general result implies that l1-regularized regression (Lasso) cannot be stable, while l2-regularized regression is known to have strong stability properties and is therefore not sparse.
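The sparsity contrast between the two regularizers is easy to see numerically. The following toy NumPy sketch (illustrative only, not the paper's construction) fits the same data with a closed-form ridge solver and a simple soft-thresholding coordinate-descent Lasso: the l1 fit zeroes out the irrelevant coefficients, while the l2 fit keeps every coefficient nonzero.

```python
import numpy as np

def ridge(X, y, lam):
    """Closed-form l2-regularized least squares: (X^T X + lam*I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def lasso(X, y, lam, n_iter=500):
    """l1-regularized least squares via cyclic coordinate descent with
    soft-thresholding (a standard textbook solver, not from the paper)."""
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0)          # per-column squared norms
    for _ in range(n_iter):
        for j in range(d):
            r = y - X @ w + X[:, j] * w[j]  # partial residual excluding feature j
            rho = X[:, j] @ r
            # soft-threshold: exact zeros appear whenever |rho| <= lam
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return w

rng = np.random.default_rng(0)
n, d = 50, 10
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[:3] = [2.0, -1.5, 1.0]              # sparse ground truth: 3 of 10 active
y = X @ w_true + 0.1 * rng.standard_normal(n)

w_l1 = lasso(X, y, lam=5.0)
w_l2 = ridge(X, y, lam=5.0)

print("lasso exact zeros:", int((np.abs(w_l1) < 1e-8).sum()))
print("ridge exact zeros:", int((np.abs(w_l2) < 1e-8).sum()))
```

The soft-threshold step is what produces exact zeros in the Lasso solution; ridge only shrinks coefficients toward zero and never sets them exactly to zero, which is the flip side of the stability the paper attributes to l2 regularization.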

Original language: English
Article number: 5989836
Pages (from-to): 187-193
Number of pages: 7
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 34
Issue number: 1
DOIs
State: Published - 2012

Keywords

  • Lasso
  • Stability
  • regularization
  • sparsity

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Vision and Pattern Recognition
  • Computational Theory and Mathematics
  • Artificial Intelligence
  • Applied Mathematics
