Viewpoint selection for human actions

Dmitry Rudoy, Lihi Zelnik-Manor

Research output: Contribution to journal › Review article › peer-review

Abstract

In many scenarios a dynamic scene is filmed by multiple video cameras located at different viewing positions. Visualizing such multi-view data on a single display raises an immediate question: which cameras capture better views of the scene? Typically (e.g., in TV broadcasts) a human producer manually selects the best view. In this paper we wish to automate this process by evaluating the quality of the view captured by each camera. We regard human actions as three-dimensional shapes induced by their silhouettes in the space-time volume. The quality of a view is then evaluated based on features of the space-time shape that correspond to limb visibility. Building on these features, two view-quality approaches are proposed: one is generic, while the other can be trained to fit any preferred action recognition method. Our experiments show that the proposed view selection provides intuitive results that match common conventions. We further show that it improves action recognition results.
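The abstract only names the ingredients (per-camera silhouettes, a space-time shape, visibility-related features); the paper itself defines the actual quality measures. Purely as an illustration, the sketch below stacks binary silhouette masks into a space-time volume and ranks cameras with a crude perimeter-to-area proxy for limb visibility. All names (spacetime_volume, view_quality, sils) and the proxy score are assumptions for this sketch, not the authors' features.

    import numpy as np

    def spacetime_volume(silhouettes):
        # Stack per-frame binary masks (H x W) into a T x H x W space-time shape.
        return np.stack(silhouettes, axis=0).astype(bool)

    def boundary_length(frame):
        # A silhouette pixel lies on the boundary if any 4-neighbour is background.
        padded = np.pad(frame, 1, constant_values=False)
        all_neighbours_set = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                              padded[1:-1, :-2] & padded[1:-1, 2:])
        return int((frame & ~all_neighbours_set).sum())

    def view_quality(volume):
        # Hypothetical proxy: limbs extending from the torso lengthen the
        # silhouette boundary relative to its area, so a higher
        # perimeter / sqrt(area) ratio hints at better limb visibility.
        scores = [boundary_length(f) / np.sqrt(f.sum())
                  for f in volume if f.sum() > 0]
        return float(np.mean(scores)) if scores else 0.0

    # Rank cameras by mean score over the sequence (sils[c] is a list of masks):
    # best_cam = max(range(n_cams),
    #                key=lambda c: view_quality(spacetime_volume(sils[c])))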

Original language: English
Pages (from-to): 243-254
Number of pages: 12
Journal: International Journal of Computer Vision
Volume: 97
Issue number: 3
State: Published - May 2012

Keywords

  • Human actions
  • Multiple viewpoints
  • Video analysis
  • Viewpoint selection

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Vision and Pattern Recognition
  • Artificial Intelligence
