Theoretical Perspectives on Deep Learning Methods in Inverse Problems

Jonathan Scarlett, Reinhard Heckel, Miguel R.D. Rodrigues, Paul Hand, Yonina C. Eldar

Research output: Contribution to journal › Article › peer-review

Abstract

In recent years, there have been significant advances in the use of deep learning methods in inverse problems such as denoising, compressive sensing, inpainting, and super-resolution. While this line of work has predominantly been driven by practical algorithms and experiments, it has also given rise to a variety of intriguing theoretical problems. In this paper, we survey some of the prominent theoretical developments in this area, focusing in particular on generative priors, untrained neural network priors, and unfolding algorithms. In addition to summarizing existing results on these topics, we highlight several ongoing challenges and open problems.
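
The setting surveyed in the paper can be sketched with the standard generative-prior formulation of a linear inverse problem; the symbols below (y, A, G, z) are generic notation assumed for illustration, not taken from the paper's own statement.

    % Noisy linear measurements of an unknown signal x* in R^n,
    % with far fewer measurements than signal dimensions.
    \[ y = A x^* + \eta, \qquad A \in \mathbb{R}^{m \times n}, \quad m \ll n \]
    % Generative prior: x* is modeled as (approximately) lying in the
    % range of a trained generator G with a low-dimensional latent code.
    \[ x^* \approx G(z^*), \qquad G : \mathbb{R}^k \to \mathbb{R}^n, \quad k \ll n \]
    % Recovery: fit the latent code to the measurements, then decode.
    \[ \hat{z} \in \arg\min_{z \in \mathbb{R}^k} \| y - A\,G(z) \|_2^2, \qquad \hat{x} = G(\hat{z}) \]

Untrained neural network priors and unfolding algorithms vary this template (optimizing the network weights themselves rather than a latent code, or unrolling an iterative solver into a learned network), but the recovery-from-few-measurements structure is the same.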

Original language: English
Pages (from-to): 433-453
Number of pages: 21
Journal: IEEE Journal on Selected Areas in Information Theory
Volume: 3
Issue number: 3
DOIs
State: Published - 1 Sep 2022

Keywords

  • compressive sensing
  • denoising
  • generative priors
  • information-theoretic limits
  • inverse problems
  • theoretical guarantees
  • unfolding algorithms
  • untrained neural networks

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Media Technology
  • Artificial Intelligence
  • Applied Mathematics
