TY - GEN
T1 - 3Deflicker from motion
AU - Swirski, Yohay
AU - Schechner, Yoav Y.
PY - 2013
Y1 - 2013
N2 - Spatio-temporal irradiance variations are created by some structured light setups. They also occur naturally underwater, where they are termed flicker. Underwater, visibility is also affected by water scattering. Methods for overcoming or exploiting flicker or scatter exist when the imaging geometry is static or quasi-static. This work removes the need for quasi-static scene-object geometry under flickering illumination. A scene is observed from a freely moving platform that carries standard frame-rate stereo cameras. The 3D scene structure is illumination invariant. Thus, as a reference for motion estimation, we use projections of stereoscopic range maps, rather than object radiance. Consequently, each object point can be tracked and then filtered in time, yielding deflickered videos. Moreover, since objects are viewed from different distances as the stereo rig moves, scattering effects on the images are modulated. This modulation, the recovered camera poses, 3D structure and deflickered images yield inversion of scattering and recovery of the water attenuation coefficient. Thus, coupled difficult problems are solved in a single framework. This is demonstrated in underwater field experiments and in a lab.
UR - http://www.scopus.com/inward/record.url?scp=84881076259&partnerID=8YFLogxK
U2 - 10.1109/ICCPhot.2013.6528294
DO - 10.1109/ICCPhot.2013.6528294
M3 - Conference contribution
SN - 9781467364645
T3 - 2013 IEEE International Conference on Computational Photography, ICCP 2013
BT - 2013 IEEE International Conference on Computational Photography, ICCP 2013
T2 - 2013 5th IEEE International Conference on Computational Photography, ICCP 2013
Y2 - 19 April 2013 through 21 April 2013
ER -