TY - GEN
T1 - Volumetric Disentanglement for 3D Scene Manipulation
AU - Benaim, Sagie
AU - Warburg, Frederik
AU - Christensen, Peter Ebert
AU - Belongie, Serge
N1 - Publisher Copyright: © 2024 IEEE.
PY - 2024/1/3
Y1 - 2024/1/3
N2 - Recently, advances in differentiable volumetric rendering have enabled significant breakthroughs in the photo-realistic, fine-detailed reconstruction of complex 3D scenes, which is key for many virtual reality applications. However, in the context of augmented reality, one may also wish to effect semantic manipulations or augmentations of objects within a scene. To this end, we propose a volumetric framework for (i) disentangling, or separating, the volumetric representation of a given foreground object from the background, and (ii) semantically manipulating the foreground object, as well as the background. Our framework takes as input a set of 2D masks specifying the desired foreground object for training views, together with the associated 2D views and poses, and produces a foreground-background disentanglement that respects the surrounding illumination, reflections, and partial occlusions and that can be applied to both training and novel views. Our method enables the separate control of pixel color and depth, as well as 3D similarity transformations, for both the foreground and background objects. We subsequently demonstrate our framework's applicability on several downstream manipulation tasks that go beyond the placement and movement of foreground objects. These tasks include object camouflage, non-negative 3D object inpainting, 3D object translation, 3D object inpainting, and 3D text-based object manipulation. The project webpage is available at https://sagiebenaim.github.io/volumetric-disentanglement/.
KW - 3D computer vision
KW - Algorithms
KW - Applications
KW - Virtual / augmented reality
UR - http://www.scopus.com/inward/record.url?scp=85191976137&partnerID=8YFLogxK
U2 - 10.1109/wacv57701.2024.00847
DO - 10.1109/wacv57701.2024.00847
M3 - Conference contribution
T3 - Proceedings - 2024 IEEE Winter Conference on Applications of Computer Vision, WACV 2024
SP - 8652
EP - 8662
BT - Proceedings - 2024 IEEE Winter Conference on Applications of Computer Vision, WACV 2024
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2024 IEEE Winter Conference on Applications of Computer Vision, WACV 2024
Y2 - 4 January 2024 through 8 January 2024
ER -