Direct shape recovery from photometric stereo with shadows

Roberto Mecca, Aaron Wetzler, Ron Kimmel, Alfred Marcel Bruckstein

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Reconstruction of 3D objects based on images is useful in many applications. One of the methods based on multi-image data is the Photometric Stereo technique, which relies on several photographs of the observed object taken from the same point of view, each one under a different illumination condition. The common approach is to estimate the gradient field of the surface by minimizing a functional and then integrating it to recover the distance from the camera, thereby obtaining the geometry of the observed object. We propose an alternative method that consists of a novel differential approach for multi-image Photometric Stereo and permits a direct solution of a novel PDE-based model, without going through the gradient field, while naturally dealing with shadowed regions. The mathematical well-posedness of the problem in terms of numerical stability yields a fast algorithm that converges efficiently, even for noisy pictures with sizes on the order of several megapixels.
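
As background for the gradient-field pipeline that the abstract contrasts against, the sketch below shows classical least-squares Photometric Stereo with a naive shadow mask. It is not the paper's direct PDE-based method; the function name, the Lambertian-reflectance assumption, and the intensity threshold for shadowed pixels are illustrative assumptions added here.

```python
import numpy as np

def classical_photometric_stereo(images, lights, shadow_thresh=0.05):
    """Baseline least-squares Photometric Stereo (not the paper's method).

    images : (k, h, w) array of k grayscale photos from a fixed viewpoint.
    lights : (k, 3) array of unit lighting directions, one per photo.
    Pixels darker than shadow_thresh are treated as shadowed and dropped
    from the fit -- a simple stand-in for proper shadow handling.
    Returns per-pixel unit normals as an (h, w, 3) array; the gradient
    field derived from these would then be integrated to get depth.
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                  # (k, h*w) intensities
    normals = np.zeros((h * w, 3))
    for p in range(h * w):
        valid = I[:, p] > shadow_thresh        # exclude shadowed samples
        if valid.sum() < 3:                    # need >= 3 lights to solve
            continue
        # Lambertian model: intensity = lights @ (albedo * normal).
        # Solve for the scaled normal in the least-squares sense.
        n, *_ = np.linalg.lstsq(lights[valid], I[valid, p], rcond=None)
        norm = np.linalg.norm(n)
        if norm > 0:
            normals[p] = n / norm              # direction; norm is the albedo
    return normals.reshape(h, w, 3)
```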

Original language: English
Title of host publication: Proceedings - 2013 International Conference on 3D Vision, 3DV 2013
Pages: 382-389
Number of pages: 8
DOIs
State: Published - 2013
Event: 2013 International Conference on 3D Vision, 3DV 2013 - Seattle, WA, United States
Duration: 29 Jun 2013 – 1 Jul 2013

Publication series

Name: Proceedings - 2013 International Conference on 3D Vision, 3DV 2013

Conference

Conference: 2013 International Conference on 3D Vision, 3DV 2013
Country/Territory: United States
City: Seattle, WA
Period: 29/06/13 – 01/07/13

Keywords

  • Partial Differential Equations
  • Photometric Stereo
  • Shadows

All Science Journal Classification (ASJC) codes

  • Computer Vision and Pattern Recognition
