TY - GEN
T1 - Self-supervised contrastive learning for unsupervised phoneme segmentation
AU - Kreuk, Felix
AU - Keshet, Joseph
AU - Adi, Yossi
N1 - Publisher Copyright: © 2020 ISCA
PY - 2020
Y1 - 2020
AB - We propose a self-supervised representation learning model for the task of unsupervised phoneme boundary detection. The model is a convolutional neural network that operates directly on the raw waveform. It is optimized to identify spectral changes in the signal using the Noise-Contrastive Estimation principle. At test time, a peak detection algorithm is applied over the model outputs to produce the final boundaries. As such, the proposed model is trained in a fully unsupervised manner, with no manual annotations in the form of target boundaries or phonetic transcriptions. We compare the proposed approach to several unsupervised baselines using both the TIMIT and Buckeye corpora. Results suggest that our approach surpasses the baseline models and reaches state-of-the-art performance on both datasets. Furthermore, we experimented with expanding the training set with additional examples from the Librispeech corpus. We evaluated the resulting model on distributions and languages that were not seen during the training phase (English, Hebrew and German) and showed that utilizing additional untranscribed data is beneficial for model performance. Our implementation is available at: https://github.com/felixkreuk/UnsupSeg.
KW - Noise-Contrastive Estimation
KW - Self-Supervised Learning
KW - Unsupervised Phoneme Segmentation
UR - http://www.scopus.com/inward/record.url?scp=85098111990&partnerID=8YFLogxK
U2 - 10.21437/Interspeech.2020-2398
DO - 10.21437/Interspeech.2020-2398
M3 - Conference contribution
SN - 9781713820697
T3 - Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
SP - 3700
EP - 3704
BT - Interspeech 2020
T2 - 21st Annual Conference of the International Speech Communication Association, INTERSPEECH 2020
Y2 - 25 October 2020 through 29 October 2020
ER -