Whodunnit - Searching for the most important feature types signalling emotion-related user states in speech

Anton Batliner, Stefan Steidl, Björn Schuller, Dino Seppi, Thurid Vogt, Johannes Wagner, Laurence Devillers, Laurence Vidrascu, Vered Aharonson, Loic Kessous, Noam Amir

Research output: Contribution to journal › Article › peer-review

Abstract

In this article, we describe and interpret a set of acoustic and linguistic features that characterise emotional/emotion-related user states, confined to the single database processed: four classes in a German corpus of children interacting with a pet robot. To this end, we collected a very large feature vector of more than 4000 features extracted at different sites. We performed extensive feature selection (Sequential Forward Floating Search) for seven acoustic and four linguistic feature types, ending up with a small number of 'most important' features, which we try to interpret by discussing the impact of the different feature and extraction types. We establish different measures of impact and discuss the mutual influence of acoustics and linguistics.
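The abstract names Sequential Forward Floating Search (SFFS) as the feature-selection method. As a rough illustration of how such a wrapper search behaves, here is a minimal sketch: forward steps add the candidate feature with the best cross-validated score, and "floating" backward steps conditionally drop an earlier feature whenever that improves the score. The classifier, the cross-validation setup, all names, and the toy data are assumptions for illustration only, not the authors' actual implementation.

```python
# Minimal SFFS sketch. Classifier, CV setup, and toy data are
# illustrative assumptions, not the authors' implementation.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

def sffs(X, y, k, estimator=None, cv=5):
    """Greedily grow a feature set to size k, with 'floating' backward
    steps that drop an earlier feature whenever doing so improves the
    cross-validated score."""
    est = estimator if estimator is not None else GaussianNB()

    def score(feats):
        return cross_val_score(est, X[:, list(feats)], y, cv=cv).mean()

    selected, best_score = [], -np.inf
    while len(selected) < k:
        # Inclusion: add the single candidate with the best CV score.
        rest = [f for f in range(X.shape[1]) if f not in selected]
        if not rest:
            break
        best_score, best_f = max((score(selected + [f]), f) for f in rest)
        selected.append(best_f)
        # Conditional exclusion ("floating"): remove a previously chosen
        # feature as long as the reduced subset scores strictly better.
        # The just-added feature is never removed, which keeps the
        # search from cycling.
        while len(selected) > 2:
            s, g = max((score([h for h in selected if h != g]), g)
                       for g in selected[:-1])
            if s > best_score:
                selected.remove(g)
                best_score = s
            else:
                break
    return selected, best_score

# Toy usage on synthetic data (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))
y = (X[:, 3] + X[:, 17] > 0).astype(int)
features, acc = sffs(X, y, k=5)
print(features, round(acc, 3))
```

Because each candidate subset is re-scored with cross-validation, wrapper searches like this become expensive on very large feature vectors (here, more than 4000 features), which is why the floating variant's ability to undo early greedy choices matters.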

Original language: English
Pages (from-to): 4-28
Number of pages: 25
Journal: Computer Speech and Language
Volume: 25
Issue number: 1
DOIs
State: Published - Jan 2011

Keywords

  • Automatic classification
  • Emotion
  • Feature selection
  • Feature types

All Science Journal Classification (ASJC) codes

  • Software
  • Theoretical Computer Science
  • Human-Computer Interaction
