TY - GEN
T1 - A depth restoration occlusionless temporal dataset
AU - Rotman, Daniel
AU - Gilboa, Guy
N1 - Publisher Copyright: © 2016 IEEE.
PY - 2016/12/15
Y1 - 2016/12/15
N2 - Depth restoration, the task of correcting depth noise and artifacts, has recently risen in popularity due to the increase in commodity depth cameras. When assessing the quality of existing methods, most researchers resort to the popular Middlebury dataset; however, this dataset was not created for depth enhancement, and therefore lacks the option of comparing genuine low-quality depth images with their high-quality, ground-truth counterparts. To address this shortcoming, we present the Depth Restoration Occlusionless Temporal (DROT) dataset. This dataset offers real depth-sensor input coupled with registered pixel-to-pixel color images, and the ground-truth depth against which we wish to compare. Our dataset includes not only Kinect 1 and Kinect 2 data, but also an Intel R200 sensor intended for integration into hand-held devices. Beyond this, we present a new temporal depth-restoration method. Utilizing multiple frames, we create a number of possibilities for an initial degraded depth map, which allows us to arrive at a more educated decision when refining depth images. Evaluating this method with our dataset shows significant benefits, particularly for overcoming real sensor-noise artifacts.
KW - 3.0
KW - Dataset
KW - Depth
KW - Restoration
KW - Upsampling
KW - Temporal
UR - http://www.scopus.com/inward/record.url?scp=85011310904&partnerID=8YFLogxK
U2 - https://doi.org/10.1109/3DV.2016.26
DO - 10.1109/3DV.2016.26
M3 - Conference contribution
T3 - Proceedings - 2016 4th International Conference on 3D Vision, 3DV 2016
SP - 176
EP - 184
BT - Proceedings - 2016 4th International Conference on 3D Vision, 3DV 2016
T2 - 4th International Conference on 3D Vision, 3DV 2016
Y2 - 25 October 2016 through 28 October 2016
ER -