Abstract
We extract a controllable model from a video of a person performing a certain activity. The model generates novel image sequences of that person, according to user-defined control signals, typically marking the displacement of the moving body. The generated video can have an arbitrary background and effectively captures both the dynamics and the appearance of the person. The method is based on two networks. The first maps a current pose and a single-instance control signal to the next pose. The second maps the current pose, the new pose, and a given background to an output frame. Both networks include multiple novelties that enable high-quality performance. This is demonstrated on multiple characters extracted from various videos of dancers and athletes.
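To make the two-network interface concrete, the sketch below shows one generation step in PyTorch. The class names, layer choices, pose dimensionality (18 joints as 2D coordinates), and image size are illustrative assumptions, not the architectures described in the paper; only the input/output contract follows the abstract: one network advances the pose given a control signal, the other renders the pose pair onto an arbitrary background.

```python
import torch
import torch.nn as nn

# Hypothetical module names and shapes; the paper's actual networks are more elaborate.
class PoseToPose(nn.Module):
    """Maps (current pose, single-instance control signal) -> next pose."""
    def __init__(self, pose_dim=2 * 18, control_dim=2, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pose_dim + control_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, pose_dim),
        )

    def forward(self, pose, control):
        # Predict a residual update to the current pose.
        return pose + self.net(torch.cat([pose, control], dim=-1))


class PoseToFrame(nn.Module):
    """Maps (current pose map, next pose map, background image) -> output frame."""
    def __init__(self, in_ch=1 + 1 + 3, out_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, out_ch, kernel_size=3, padding=1),
        )

    def forward(self, pose_map, next_pose_map, background):
        x = torch.cat([pose_map, next_pose_map, background], dim=1)
        return torch.sigmoid(self.net(x))


# One generation step: advance the pose under a user control, then render it.
p2p, p2f = PoseToPose(), PoseToFrame()
pose = torch.zeros(1, 36)                    # 18 joints, (x, y) each
control = torch.tensor([[1.0, 0.0]])         # desired displacement of the body
next_pose = p2p(pose, control)
frame = p2f(torch.zeros(1, 1, 64, 64),       # rasterized current pose
            torch.zeros(1, 1, 64, 64),       # rasterized next pose
            torch.zeros(1, 3, 64, 64))       # arbitrary background image
```

Iterating this step, feeding each predicted pose back in with a new control signal, yields the controllable sequence the abstract describes.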
| Original language | English |
|---|---|
| State | Published - 2020 |
| Event | 8th International Conference on Learning Representations, ICLR 2020 - Addis Ababa, Ethiopia |
| Duration | 30 Apr 2020 → … |
Conference
| Conference | 8th International Conference on Learning Representations, ICLR 2020 |
|---|---|
| Country/Territory | Ethiopia |
| City | Addis Ababa |
| Period | 30/04/20 → … |
All Science Journal Classification (ASJC) codes
- Education
- Linguistics and Language
- Language and Linguistics
- Computer Science Applications