Abstract
In this work, we present a strategy for camera control in dynamic scenes with multiple people (sports teams). We learn a generic model of the player dynamics offline in simulation, and we use only a few sparse demonstrations of a user's camera control policy to learn a reward function that drives camera motion in an ongoing dynamic scene. Key to our approach is a low-dimensional representation of the scene dynamics that is independent of the environment's actions and rewards, which enables learning the reward function from only a small number of examples. We cast the user-specific control objective as an inverse reinforcement learning problem, aiming to recover an expert's intention from a small number of demonstrations, and use the learned reward function in combination with a visual model predictive controller (MPC). Because the scene dynamics model is agnostic to the user-specific reward, the same model can be reused for different camera control policies. We show the effectiveness of our method on simulated and real soccer matches.
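The abstract does not include code, but the pipeline it describes (a reward learned from demonstrations, combined with a learned dynamics model inside an MPC loop) can be illustrated with a minimal sketch. The sketch below assumes a random-shooting MPC formulation; the names `dynamics`, `reward`, and `mpc_camera_action` are hypothetical stand-ins, not the authors' actual API.

```python
import numpy as np

# Hypothetical stand-ins for the paper's learned components:
# - dynamics(z, a): generic low-dimensional scene-dynamics model,
#   learned offline in simulation and shared across users.
# - reward(z): user-specific reward, learned via IRL from a few
#   sparse demonstrations of the user's camera policy.

def mpc_camera_action(z0, dynamics, reward,
                      horizon=10, n_samples=256, action_dim=3):
    """Random-shooting MPC: sample candidate camera-action sequences,
    roll them out through the learned dynamics, and return the first
    action of the highest-reward trajectory."""
    # Sample candidate action sequences (e.g., pan/tilt/zoom deltas).
    candidates = np.random.uniform(-1.0, 1.0,
                                   size=(n_samples, horizon, action_dim))
    returns = np.zeros(n_samples)
    for i, actions in enumerate(candidates):
        z = z0
        for a in actions:
            z = dynamics(z, a)       # predict next low-dim scene state
            returns[i] += reward(z)  # score with the user-specific reward
    best = candidates[np.argmax(returns)]
    return best[0]  # execute only the first action, then replan
```

At each frame the controller replans from the current state (receding horizon), so the same offline dynamics model serves any user: adapting to a new camera style only requires swapping in a different learned reward function.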
| Original language | English |
| --- | --- |
| Pages (from-to) | 427-437 |
| Number of pages | 11 |
| Journal | Computer Graphics Forum |
| Volume | 41 |
| Issue number | 1 |
| DOIs | |
| State | Published - Feb 2022 |
Keywords
- animation
- control
- motion planning
All Science Journal Classification (ASJC) codes
- Computer Graphics and Computer-Aided Design