
Deconvolving Diffraction for Fast Imaging of Sparse Scenes

Mark Sheinin, Matthew O'Toole, Srinivasa G. Narasimhan

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Most computer vision techniques rely on cameras that uniformly sample the 2D image plane. However, there exists a class of applications for which the standard uniform 2D sampling of the image plane is sub-optimal. This class consists of applications where the scene points of interest occupy the image plane sparsely (e.g., marker-based motion capture), and thus most pixels of the 2D camera sensor would be wasted. Recently, diffractive optics were used in conjunction with sparse (e.g., line) sensors to achieve high-speed capture of such sparse scenes. One such approach, called 'Diffraction Line Imaging', relies on diffraction gratings to spread the point-spread-function (PSF) of scene points from a point to a color-coded shape (e.g., a horizontal line) whose intersection with a line sensor enables point positioning. In this paper, we extend this approach to arbitrary diffractive optical elements and arbitrary sampling of the sensor plane using a convolution-based image formation model. Sparse scenes are then recovered by formulating a convolutional coding inverse problem that can resolve mixtures of diffraction PSFs without the use of multiple sensors, extending the application of diffraction-based imaging to a new class of significantly denser scenes. For the case of a single-axis diffraction grating, we provide an approach to determine the minimal required sensor sub-sampling for accurate scene recovery. In contrast to methods that use a speckle PSF from a narrow-band source, our approach uses spectrally-coded PSFs from broad-band sources; and unlike diffuser-based PSFs paired with a rolling-shutter sensor, it allows arbitrary sensor sampling. We demonstrate that the presented combination of the imaging approach and scene recovery method is well suited for high-speed marker-based motion capture and particle image velocimetry (PIV) over long periods.
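The convolution-based image formation and sparse-recovery idea in the abstract can be illustrated with a toy simulation. The sketch below is a hypothetical full-sensor setup with made-up sizes and parameters (64×64 plane, 5 points, a 15-pixel decaying streak PSF, regularization weight, and iteration count are all assumptions), not the authors' implementation: a sparse point scene is blurred by a line-shaped diffraction PSF, and the points are recovered by solving a sparse deconvolution (lasso) problem with ISTA.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sparse scene: 5 bright points on a 64x64 plane (hypothetical sizes).
n = 64
scene = np.zeros((n, n))
scene.flat[rng.choice(n * n, size=5, replace=False)] = 1.0

# Single-axis diffraction analogue: PSF modeled as a decaying horizontal streak.
psf = np.zeros((n, n))
psf[0, :15] = np.exp(-np.arange(15) / 5.0)

# Circular convolution via FFT gives the measured image y = psf * scene.
P = np.fft.fft2(psf)
conv = lambda x: np.real(np.fft.ifft2(np.fft.fft2(x) * P))
conv_T = lambda r: np.real(np.fft.ifft2(np.fft.fft2(r) * np.conj(P)))
y = conv(scene)

# ISTA for the lasso  min_x 0.5*||conv(x) - y||^2 + lam*||x||_1 :
# a gradient step on the data term, then soft-thresholding for sparsity.
step = 1.0 / np.abs(P).max() ** 2   # 1/L, L = max |P|^2 (Lipschitz constant)
lam = 0.05
x = np.zeros_like(y)
for _ in range(500):
    x = x - step * conv_T(conv(x) - y)                  # gradient step
    x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)  # soft threshold

# The 5 largest entries of x should sit at the true point locations.
top5 = np.argsort(x.ravel())[-5:]
```

Here the streak's Fourier transform has no zeros, so the deconvolution is well posed even from the full image; modeling the paper's sensor sub-sampling would amount to comparing `conv(x)` with `y` only at the sampled pixels in the data term.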

Original language: English
Title of host publication: 2021 IEEE International Conference on Computational Photography, ICCP 2021
Number of pages: 10
ISBN (Electronic): 9781665419529
DOIs
State: Published - 23 May 2021
Externally published: Yes
Event: 13th IEEE International Conference on Computational Photography, ICCP 2021 - Haifa, Israel
Duration: 23 May 2021 – 25 May 2021

Publication series

Name: 2021 IEEE International Conference on Computational Photography, ICCP 2021

Conference

Conference: 13th IEEE International Conference on Computational Photography, ICCP 2021
Country/Territory: Israel
City: Haifa
Period: 23/05/21 – 25/05/21

All Science Journal Classification (ASJC) codes

  • Media Technology
  • Instrumentation
