Posing to the camera: Automatic viewpoint selection for human actions

Dmitry Rudoy, Lihi Zelnik-Manor

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

In many scenarios a scene is filmed by multiple video cameras located at different viewing positions. The difficulty of watching multiple views simultaneously raises an immediate question: which cameras capture better views of the dynamic scene? When only a single view can be displayed (e.g., in TV broadcasts), a human producer manually selects the best view. In this paper we propose a method for evaluating the quality of the view captured by a single camera, which can be used to automate viewpoint selection. We regard human actions as three-dimensional shapes induced by their silhouettes in the space-time volume. The quality of a view is evaluated by combining three measures that capture how well these space-time shapes expose the action. We evaluate the proposed approach both qualitatively and quantitatively.
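The selection scheme described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's method: the abstract does not specify the three visibility measures, so the measures below (mean silhouette area, bounding-box extent, and frame-to-frame motion) and the weighted-sum combination are stand-in assumptions chosen only to show the overall score-and-select structure.

```python
# Hypothetical sketch: score each camera's view of an action by combining
# per-view visibility measures computed on its silhouette sequence, then
# pick the highest-scoring camera. The measures and weights are
# illustrative assumptions, not the three measures defined in the paper.

def view_quality(silhouettes, weights=(1.0, 1.0, 1.0)):
    """silhouettes: list of binary 2-D frames (lists of 0/1 rows) from one
    camera; returns a scalar quality score for that viewpoint."""
    areas, extents = [], []
    for frame in silhouettes:
        area = sum(sum(row) for row in frame)
        areas.append(area)
        rows = [i for i, row in enumerate(frame) if any(row)]
        cols = [j for row in frame for j, v in enumerate(row) if v]
        if rows and cols:
            # Fraction of the silhouette's bounding box that it fills.
            bbox = (max(rows) - min(rows) + 1) * (max(cols) - min(cols) + 1)
            extents.append(area / bbox)
        else:
            extents.append(0.0)
    # Crude motion cue: mean frame-to-frame change in silhouette area.
    motion = sum(abs(a - b) for a, b in zip(areas, areas[1:])) / max(len(areas) - 1, 1)
    w_area, w_extent, w_motion = weights
    return (w_area * sum(areas) / len(areas)
            + w_extent * sum(extents) / len(extents)
            + w_motion * motion)

def best_view(per_camera_silhouettes):
    """Return the index of the camera whose silhouette sequence scores highest."""
    scores = [view_quality(s) for s in per_camera_silhouettes]
    return max(range(len(scores)), key=scores.__getitem__)
```

In a real system the binary silhouettes would come from background subtraction per camera, and the weights would be tuned (or replaced by the paper's actual measures) rather than left at 1.0.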

Original language: English
Title of host publication: Computer Vision, ACCV 2010 - 10th Asian Conference on Computer Vision, Revised Selected Papers
Pages: 307-320
Number of pages: 14
Edition: PART 4
State: Published - 2011
Event: 10th Asian Conference on Computer Vision, ACCV 2010 - Queenstown, New Zealand
Duration: 8 Nov 2010 - 12 Nov 2010

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Number: PART 4
Volume: 6495 LNCS

Conference

Conference: 10th Asian Conference on Computer Vision, ACCV 2010
Country/Territory: New Zealand
City: Queenstown
Period: 8/11/10 - 12/11/10

All Science Journal Classification (ASJC) codes

  • Theoretical Computer Science
  • General Computer Science
