TY - JOUR
T1 - A Cross-Linguistic Validation of the Test for Rating Emotions in Speech
T2 - Acoustic Analyses of Emotional Sentences in English, German, and Hebrew
AU - Carl, Micalle
AU - Icht, Michal
AU - Ben-David, Boaz M.
N1 - Publisher Copyright: © 2022 American Speech-Language-Hearing Association.
PY - 2022/3
Y1 - 2022/3
N2 - Purpose: The Test for Rating Emotions in Speech (T-RES) was developed to assess the processing of emotions in spoken language. In this tool, listeners rate spoken sentences that convey emotional content (anger, happiness, sadness, and neutral) in both semantics and prosody, in different combinations. To date, English, German, and Hebrew versions have been developed, as well as online versions (iT-RES) adapted to COVID-19 social restrictions. Since the perception of spoken emotions may be affected by linguistic (and cultural) variables, it is important to compare the acoustic characteristics of the stimuli within and between languages. The goal of the current report was to provide a cross-linguistic acoustic validation of the T-RES. Method: T-RES sentences in the aforementioned languages were acoustically analyzed in terms of mean F0, F0 range, and speech rate to obtain profiles of acoustic parameters for the different emotions. Results: Significant within-language discriminability of prosodic emotions was found for both mean F0 and speech rate. These measures were also associated with comparable patterns of prosodic emotions across the tested languages and with the emotional ratings. Conclusions: The results demonstrate the independence of prosody and semantics within the T-RES stimuli. These findings illustrate listeners' ability to clearly distinguish between the different prosodic emotions in each language, providing a cross-linguistic validation of the T-RES and iT-RES.
UR - http://www.scopus.com/inward/record.url?scp=85126072607&partnerID=8YFLogxK
DO - 10.1044/2021_JSLHR-21-00205
M3 - Article
C2 - 35171689
SN - 1092-4388
VL - 65
SP - 991
EP - 1000
JO - Journal of Speech, Language, and Hearing Research
JF - Journal of Speech, Language, and Hearing Research
IS - 3
ER -