Dominant speaker identification for multipoint videoconferencing

Ilana Volfin, Israel Cohen

Research output: Contribution to journal › Article › peer-review

Abstract

A multipoint conference is an efficient and cost-effective substitute for a face-to-face meeting. It involves three or more participants in separate locations, each employing a single microphone and camera. Routing and processing the audiovisual information places heavy demands on the network, which raises the need to reduce the amount of information flowing through the system. One solution is to identify the dominant speaker and partially discard information originating from non-active participants. We propose a novel method for dominant speaker identification that uses speech activity information from time intervals of different lengths. The proposed method processes the audio signal of each participant independently and computes speech activity scores for the immediate, medium, and long time intervals. These scores are compared and the dominant speaker is identified. In comparison with other speaker selection methods, experimental results demonstrate a reduction in the number of false speaker switches and improved robustness to transient audio interferences.
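The abstract's core idea can be illustrated with a minimal sketch. The function names, window lengths, the simple trailing-mean scoring, and the switching margin below are all illustrative assumptions, not the paper's actual score definitions; the sketch only shows the general scheme of comparing per-participant activity scores at three time scales and switching the dominant speaker conservatively.

```python
import numpy as np

def activity_scores(frame_activity, n_imm=1, n_med=8, n_long=40):
    """Compute speech activity scores over three trailing windows
    (immediate, medium, long) of a per-frame activity sequence.
    Trailing means are an illustrative stand-in for the paper's scores."""
    x = np.asarray(frame_activity, dtype=float)
    def tail_mean(n):
        return x[-n:].mean() if len(x) >= n else x.mean()
    return tail_mean(n_imm), tail_mean(n_med), tail_mean(n_long)

def select_dominant(histories, current, margin=0.1):
    """Switch away from `current` only when a challenger beats it on both
    the medium- and long-term scores by `margin`; this hysteresis is what
    suppresses false switches on short transient interferences."""
    scores = {p: activity_scores(h) for p, h in histories.items()}
    c_imm, c_med, c_long = scores[current]
    best = current
    for p, (imm, med, long_) in scores.items():
        if p == current:
            continue
        if med > c_med + margin and long_ > c_long + margin:
            # Among qualifying challengers, keep the highest (medium, long) pair.
            if best == current or (med, long_) > (scores[best][1], scores[best][2]):
                best = p
    return best
```

For example, a participant producing only a brief burst of activity (e.g. a cough) scores high on the immediate interval but low on the medium and long intervals, so the dominant speaker is not switched.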

Original language: English
Pages (from-to): 895-910
Number of pages: 16
Journal: Computer Speech and Language
Volume: 27
Issue number: 4
DOIs
State: Published - 2013

Keywords

  • Acoustic noise
  • Acoustic signal detection
  • Dominant speaker identification
  • Speech processing
  • Transient noise
  • Videoconference

All Science Journal Classification (ASJC) codes

  • Software
  • Theoretical Computer Science
  • Human-Computer Interaction
