TY - GEN
T1 - What You Can Learn by Staring at a Blank Wall
AU - Sharma, Prafull
AU - Aittala, Miika
AU - Schechner, Yoav Y.
AU - Torralba, Antonio
AU - Wornell, Gregory W.
AU - Freeman, William T.
AU - Durand, Frédo
N1 - Publisher Copyright: © 2021 IEEE
PY - 2021
Y1 - 2021
N2 - We present a passive non-line-of-sight method that infers the number of people or activity of a person from the observation of a blank wall in an unknown room. Our technique analyzes complex imperceptible changes in indirect illumination in a video of the wall to reveal a signal that is correlated with motion in the hidden part of a scene. We use this signal to classify between zero, one, or two moving people, or the activity of a person in the hidden scene. We train two convolutional neural networks using data collected from 20 different scenes, and achieve an accuracy of ≈ 94% for both tasks in unseen test environments and real-time online settings. Unlike other passive non-line-of-sight methods, the technique does not rely on known occluders or controllable light sources, and generalizes to unknown rooms with no recalibration. We analyze the generalization and robustness of our method with both real and synthetic data, and study the effect of the scene parameters on the signal quality.
UR - http://www.scopus.com/inward/record.url?scp=85127782106&partnerID=8YFLogxK
U2 - 10.1109/ICCV48922.2021.00233
DO - 10.1109/ICCV48922.2021.00233
M3 - Conference contribution
T3 - Proceedings of the IEEE International Conference on Computer Vision
SP - 2310
EP - 2319
BT - Proceedings - 2021 IEEE/CVF International Conference on Computer Vision, ICCV 2021
T2 - 18th IEEE/CVF International Conference on Computer Vision, ICCV 2021
Y2 - 11 October 2021 through 17 October 2021
ER -