Abstract
A multi-point conference is an efficient and cost-effective substitute for a face-to-face meeting. It involves three or more participants at separate locations, each of whom employs a single microphone and camera. Routing and processing the audiovisual information places a heavy load on the network, which creates a need to reduce the amount of information flowing through the system. One solution is to identify the dominant speaker and partially discard information originating from non-active participants. We propose a novel method for dominant speaker identification that uses speech activity information from time intervals of different lengths. The proposed method processes the audio signal of each participant independently and computes speech activity scores over immediate, medium, and long time intervals. These scores are compared and the dominant speaker is identified. In comparison to other speaker selection methods, experimental results demonstrate a reduction in the number of false speaker switches and improved robustness to transient audio interferences.
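The idea in the abstract can be sketched in code: each participant's per-frame speech activity (from any voice activity detector) is summarized over three time scales, and a challenger replaces the current dominant speaker only when it wins on all of them, which suppresses switches caused by short transients. This is a minimal illustrative sketch, not the paper's actual scoring method; the window lengths and the all-scales switching rule are assumptions chosen for clarity.

```python
import numpy as np

# Illustrative window lengths in frames (immediate, medium, long).
# These values are assumptions, not the ones used in the paper.
WINDOWS = (1, 8, 40)

def activity_scores(frames_active, t):
    """Fraction of active frames in each window ending at frame t.
    frames_active: 1-D boolean array of per-frame speech activity."""
    scores = []
    for w in WINDOWS:
        lo = max(0, t - w + 1)
        scores.append(frames_active[lo:t + 1].mean())
    return scores

def dominant_speaker(activities, current=None):
    """activities: dict mapping participant -> boolean activity array
    (all the same length). Returns the dominant speaker per frame.
    A challenger takes over only if it strictly beats the current
    dominant speaker on all three time scales -- one plausible
    switching rule, not necessarily the paper's decision logic."""
    if current is None:
        current = next(iter(activities))
    n_frames = len(next(iter(activities.values())))
    history = []
    for t in range(n_frames):
        scores = {p: activity_scores(a, t) for p, a in activities.items()}
        for p, s in scores.items():
            if p != current and all(si > ci for si, ci in zip(s, scores[current])):
                current = p
        history.append(current)
    return history
```

With two synthetic participants, a brief burst from the inactive one raises only its immediate score, so the long-interval score keeps the dominant speaker stable until the floor genuinely changes hands.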
Original language | English |
---|---|
Pages (from-to) | 895-910 |
Number of pages | 16 |
Journal | Computer Speech and Language |
Volume | 27 |
Issue number | 4 |
DOIs | |
State | Published - 2013 |
Keywords
- Acoustic noise
- Acoustic signal detection
- Dominant speaker identification
- Speech processing
- Transient noise
- Videoconference
All Science Journal Classification (ASJC) codes
- Software
- Theoretical Computer Science
- Human-Computer Interaction