On the Minimal Overcompleteness Allowing Universal Sparse Representation

Rotem Mulayoff, Tomer Michaeli

Research output: Contribution to journal › Article › peer-review

Abstract

Sparse representation over redundant dictionaries constitutes a good model for many classes of signals (e.g., patches of natural images, segments of speech signals, and so on). However, despite its popularity, very little is known about the representation capacity of this model. In this paper, we study how redundant a dictionary must be to allow any vector to admit a sparse approximation with a prescribed sparsity and a prescribed level of accuracy. We address this problem both in a worst-case setting and in an average-case one. For each scenario, we derive lower and upper bounds on the minimal required overcompleteness. Our bounds have simple closed-form expressions that make it easy to deduce the asymptotic behavior in large dimensions. In particular, we find that the required overcompleteness grows exponentially with the sparsity level and polynomially with the allowed representation error. This implies that universal sparse representation is practical only at moderate sparsity levels, but can be achieved with relatively high accuracy. As a side effect of our analysis, we obtain a tight lower bound on the regularized incomplete beta function, which may be of interest in its own right. We illustrate the validity of our results through numerical simulations, which support our findings.
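The side result concerns the regularized incomplete beta function I_x(a, b), which also governs quantities in high-dimensional spherical geometry (such as cap measures). As a hedged, stdlib-only illustration of the quantity being bounded (the midpoint integration scheme and parameter choices below are our own, not from the paper):

```python
from math import gamma

def reg_inc_beta(x: float, a: float, b: float, n: int = 100_000) -> float:
    """Regularized incomplete beta function I_x(a, b).

    Computed as the ratio of the incomplete beta integral
    B(x; a, b) = int_0^x t^(a-1) (1-t)^(b-1) dt to the complete
    beta function B(a, b) = Gamma(a) Gamma(b) / Gamma(a + b).
    The midpoint rule is used so endpoint singularities for
    a < 1 or b < 1 are never evaluated directly.
    """
    h = x / n
    # Midpoint-rule approximation of the incomplete integral on [0, x].
    s = sum(((i + 0.5) * h) ** (a - 1) * (1.0 - (i + 0.5) * h) ** (b - 1)
            for i in range(n))
    incomplete = s * h
    complete = gamma(a) * gamma(b) / gamma(a + b)
    return incomplete / complete

# Sanity checks against closed-form cases:
# I_x(1, 1) = x and I_x(2, 1) = x^2.
print(reg_inc_beta(0.3, 1, 1))  # ~0.3
print(reg_inc_beta(0.5, 2, 1))  # ~0.25
```

In practice one would use a library routine (e.g., SciPy's `scipy.special.betainc`) instead of hand-rolled quadrature; this sketch only makes the definition concrete.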

Original language: English
Article number: 8669883
Pages (from-to): 3585-3599
Number of pages: 15
Journal: IEEE Transactions on Information Theory
Volume: 65
Issue number: 6
DOIs
State: Published - Jun 2019

Keywords

  • Beta distribution
  • covering number
  • frames
  • high dimensional geometry
  • n-sphere
  • sparse approximation
  • sparsity bounds

All Science Journal Classification (ASJC) codes

  • Information Systems
  • Computer Science Applications
  • Library and Information Sciences
