Abstract
Speech enhancement and separation are core problems in audio signal processing, with commercial applications in devices as diverse as mobile phones, conference call systems, hands-free systems, and hearing aids. In addition, they are crucial preprocessing steps for noise-robust automatic speech and speaker recognition. Many devices now have two to eight microphones. The enhancement and separation capabilities offered by these multichannel interfaces are usually greater than those of single-channel interfaces. Research in speech enhancement and separation has followed two convergent paths, starting with microphone array processing and blind source separation, respectively. These communities are now strongly interrelated and routinely borrow ideas from each other. Yet, a comprehensive overview of the common foundations and the differences between these approaches is lacking at present. In this paper, we propose to fill this gap by analyzing a large number of established and recent techniques according to four transverse axes: 1) the acoustic impulse response model, 2) the spatial filter design criterion, 3) the parameter estimation algorithm, and 4) optional postfiltering. We conclude this overview paper by providing a list of software and data resources and by discussing perspectives and future trends in the field.
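The survey's second axis, the spatial filter design criterion, covers classical beamformers such as the minimum variance distortionless response (MVDR) filter. As a minimal illustrative sketch (not taken from the paper), the snippet below computes narrowband MVDR weights for a single frequency bin; the microphone count, steering vector, and synthetic noise covariance are all assumed test values.

```python
import numpy as np

def mvdr_weights(steering_vector, noise_cov):
    """MVDR weights w = R^{-1} d / (d^H R^{-1} d) for one frequency bin."""
    r_inv_d = np.linalg.solve(noise_cov, steering_vector)
    return r_inv_d / (steering_vector.conj() @ r_inv_d)

# Illustrative example: 4-microphone array, one frequency bin (assumed values).
rng = np.random.default_rng(0)
n_mics = 4
d = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, n_mics))       # assumed steering vector
noise = rng.standard_normal((n_mics, 1000)) + 1j * rng.standard_normal((n_mics, 1000))
R = noise @ noise.conj().T / noise.shape[1]                   # noise spatial covariance estimate
w = mvdr_weights(d, R)
print(abs(np.vdot(w, d)))   # distortionless constraint w^H d = 1: prints ~1.0
```

The printed value verifies the defining MVDR property: the filter passes the target direction with unit gain while minimizing the output noise power.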
| Original language | English |
| --- | --- |
| Pages (from-to) | 692-730 |
| Number of pages | 39 |
| Journal | IEEE/ACM Transactions on Audio, Speech, and Language Processing |
| Volume | 25 |
| Issue number | 4 |
| DOIs | |
| State | Published - Apr 2017 |
Keywords
- Array processing
- Beamforming
- Expectation-maximization
- Independent component analysis
- Multichannel
- Postfiltering
- Sparse component analysis
- Wiener filter
All Science Journal Classification (ASJC) codes
- Computer Science (miscellaneous)
- Computational Mathematics
- Electrical and Electronic Engineering
- Acoustics and Ultrasonics