Compressed Sensing

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

Compressed sensing (CS) is an exciting, rapidly growing field that has attracted considerable attention in electrical engineering, applied mathematics, statistics, and computer science. CS offers a framework for simultaneous sensing and compression of finite-dimensional vectors that relies on linear dimensionality reduction. Quite surprisingly, it predicts that sparse high-dimensional signals can be recovered from highly incomplete measurements by using efficient algorithms. To be more specific, let x be an n-vector. In CS we do not measure x directly but instead acquire m < n linear measurements of the form y = Ax using an m × n CS matrix A. Ideally, the matrix is designed to reduce the number of measurements as much as possible while allowing for recovery of a wide class of signals from their measurement vectors y. Thus, we would like to choose m ≪ n. Since A has fewer rows than columns, it has a nonempty null space. This implies that for any particular signal x₀, an infinite number of signals x yield the same measurements y = Ax = Ax₀. To enable recovery, we must therefore limit ourselves to a special class of input signals x. Sparsity is the most prevalent signal structure used in CS. In its simplest form, sparsity implies that x has only a small number of nonzero values but we do not know which entries are nonzero. Mathematically, we express this condition as ‖x‖₀ ≤ k, where ‖x‖₀ denotes the ℓ₀-“norm” of x, which counts the number of nonzeros in x (note that ‖·‖₀ is not a true norm, since in general ‖αx‖₀ ≠ |α|‖x‖₀ for α ∈ R). More generally, CS ideas can be applied when a suitable representation of x is sparse. A signal x is k-sparse in a basis Ψ if there exists a vector θ ∈ Rⁿ with only k ≪ n nonzero entries such that x = Ψθ. As an example, the success of many compression algorithms, such as JPEG 2000 [VII.7 §5], is tied to the fact that natural images are often sparse in an appropriate wavelet transform.
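The two notions above — that ‖·‖₀ merely counts nonzeros (and so is not absolutely homogeneous) and that a signal can be dense in the standard basis yet k-sparse in another basis Ψ — can be illustrated with a short NumPy sketch. This example is not from the chapter; the random orthonormal basis and all variable names are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 8, 2

# The l0 "norm" counts nonzeros; scaling by a nonzero constant
# does not change the count, so ||ax||_0 != |a|*||x||_0 in general.
x = np.array([0.0, 3.0, 0.0, -1.5, 0.0, 0.0, 0.0, 0.2])
l0 = np.count_nonzero(x)               # 3 nonzero entries
l0_scaled = np.count_nonzero(5.0 * x)  # still 3, not 5 * 3

# A signal that is dense in the standard basis but k-sparse in a basis Psi:
Psi, _ = np.linalg.qr(rng.standard_normal((n, n)))  # random orthonormal basis
theta = np.zeros(n)
theta[rng.choice(n, size=k, replace=False)] = 1.0   # k-sparse coefficients
x2 = Psi @ theta                                    # x2 = Psi @ theta

print(l0, l0_scaled)                                # counts coincide
print(np.count_nonzero(theta), np.count_nonzero(x2))
```

Here x2 has no zero entries with probability one, even though its coefficient vector θ in the basis Ψ has only k = 2 nonzeros — precisely the sense in which natural images are "sparse" in a wavelet basis.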
Finding a sparse vector x that satisfies the measurement equation y = Ax can be performed by an exhaustive search over all possible sets of size k. In general, however, this is impractical; in fact, the task of finding such an x is known to be NP-hard [I.4 §4.1]. The surprising result at the heart of CS is that, if x (or a suitable representation of x) is k-sparse, then it can be recovered from y = Ax using a number of measurements m that is on the order of k log n, under certain conditions on the matrix A. Furthermore, recovery is possible using polynomial-time algorithms that are robust to noise and mismodeling of x. In particular, the essential results hold when x is compressible, namely, when it is well approximated by its best k-term representation min_{‖v‖₀ ≤ k} ‖x − v‖, where the norm in the objective is arbitrary. CS has led to a fundamentally new approach to signal processing, analog-to-digital converter (ADC) design, image recovery, and compression algorithms. Consumer electronics, civilian and military surveillance, medical imaging, radar, and many other applications rely on efficient sampling. Reducing the sampling rate in these applications by making efficient use of the available degrees of freedom can improve the user experience; increase data transfer; improve imaging quality; and reduce power, cost, and exposure time.
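The recovery result described above can be demonstrated numerically. The sketch below recovers a k-sparse signal from m ≈ k log n random Gaussian measurements using orthogonal matching pursuit, one of the standard polynomial-time CS recovery algorithms (the chapter does not single out this algorithm; it is one illustrative choice, and the dimensions and random seed are assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 256, 5
m = 60  # on the order of k * log(n) measurements, far fewer than n

# Random Gaussian CS matrix with roughly unit-norm columns
A = rng.standard_normal((m, n)) / np.sqrt(m)

# A k-sparse signal: k nonzeros at unknown positions
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)

y = A @ x  # the m compressed measurements

# Orthogonal matching pursuit: greedily grow the estimated support,
# re-fitting the coefficients by least squares at each step.
residual, S = y.copy(), []
for _ in range(k):
    j = int(np.argmax(np.abs(A.T @ residual)))  # column most correlated with residual
    S.append(j)
    coef, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
    residual = y - A[:, S] @ coef

x_hat = np.zeros(n)
x_hat[S] = coef

print("recovery error:", np.linalg.norm(x_hat - x))
```

With these (generous) dimensions the true support is identified and the signal is recovered to machine precision, even though the linear system y = Ax alone is underdetermined with infinitely many solutions.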
Original languageEnglish
Title of host publicationThe Princeton Companion to Applied Mathematics
EditorsNicholas J. Higham, Mark R. Dennis, Paul Glendinning, Paul A. Martin, Fadil Santosa, Jared Tanner
ChapterVII.10
Pages823-827
Number of pages5
ISBN (Electronic)9781400874477
DOIs
StatePublished - 2015
Externally publishedYes
