Online Training of Stereo Self-Calibration Using Monocular Depth Estimation

Yotam Gil, Shay Elmalem, Harel Haim, Emanuel Marom, Raja Giryes

Research output: Contribution to journal › Article › peer-review

Abstract

Stereo imaging is the most common passive method for producing reliable depth maps. Calibration is a crucial step for every stereo-based system, and despite all the advancements in the field, most calibrations are still performed with the same tedious checkerboard-target procedure. Monocular depth estimation methods do not require extrinsic calibration but generally achieve inferior depth accuracy. In this paper, we present a novel online self-calibration method that uses both stereo and monocular depth maps to find the transformation required for extrinsic calibration by enforcing consistency between the two maps. The proposed method works in a closed loop and exploits the pre-trained networks' global context, thus avoiding feature-matching and outlier issues. In addition to demonstrating our method with an image-based monocular depth estimation network, which can be incorporated into most systems without additional changes, we show that adding a phase-coded aperture mask leads to even better and faster convergence. We demonstrate our method on road scenes from the KITTI vision benchmark and on real-world scenes captured with our prototype camera. Our code is publicly available at https://github.com/YotYot/CalibrationNet.
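
The abstract's core idea is to optimize the rectifying transformation applied to one stereo view so that the stereo network's depth output agrees with a monocular depth estimate. The snippet below is a minimal sketch of that consistency-loss loop in PyTorch, not the authors' implementation (see the linked repository for that); `DummyDepthNet`, `warp_right_view`, the affine parameterization of the extrinsic transform, and the plain L1 consistency loss are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DummyDepthNet(nn.Module):
    """Stand-in for a pre-trained stereo depth network (expects concatenated L/R views)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(6, 1, 3, padding=1)

    def forward(self, x):
        return torch.sigmoid(self.conv(x))  # normalized inverse depth

def warp_right_view(right, theta):
    # theta: (1, 2, 3) affine matrix approximating the rectifying transform.
    grid = F.affine_grid(theta, right.shape, align_corners=False)
    return F.grid_sample(right, grid, align_corners=False)

# Toy inputs standing in for a rectification-misaligned stereo pair.
left = torch.rand(1, 3, 128, 256)
right = torch.rand(1, 3, 128, 256)

stereo_net = DummyDepthNet()                                            # pre-trained stereo net (stand-in)
mono_net = nn.Sequential(nn.Conv2d(3, 1, 3, padding=1), nn.Sigmoid())  # pre-trained monocular net (stand-in)

# Learnable calibration parameters, initialized to the identity transform.
theta = nn.Parameter(torch.tensor([[[1.0, 0.0, 0.0],
                                    [0.0, 1.0, 0.0]]]))
opt = torch.optim.Adam([theta], lr=1e-3)

for step in range(100):
    right_rect = warp_right_view(right, theta)
    stereo_depth = stereo_net(torch.cat([left, right_rect], dim=1))
    with torch.no_grad():
        mono_depth = mono_net(left)              # monocular estimate acts as a reference
    loss = F.l1_loss(stereo_depth, mono_depth)   # consistency loss between the two depth maps
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this sketch the gradient of the consistency loss flows through the frozen stereo network and the warping operation back to the transform parameters, which is the closed-loop behavior the abstract describes; the paper itself predicts the calibration with a network and uses richer losses and parameterizations.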

Original language: English
Article number: 9495157
Pages (from-to): 812-823
Number of pages: 12
Journal: IEEE Transactions on Computational Imaging
Volume: 7
DOIs
State: Published - 2021

Keywords

  • Stereo imaging
  • calibration
  • consistency loss
  • monocular depth estimation
  • unsupervised learning

All Science Journal Classification (ASJC) codes

  • Signal Processing
  • Computer Science Applications
  • Computational Mathematics
