TY - GEN
T1 - Automatic selection of context configurations for improved class-specific word representations
AU - Vulić, Ivan
AU - Schwartz, Roy
AU - Rappoport, Ari
AU - Reichart, Roi
AU - Korhonen, Anna
N1 - Publisher Copyright: © 2017 Association for Computational Linguistics.
PY - 2017
Y1 - 2017
N2 - This paper is concerned with identifying contexts useful for training word representation models for different word classes such as adjectives (A), verbs (V), and nouns (N). We introduce a simple yet effective framework for an automatic selection of class-specific context configurations. We construct a context configuration space based on universal dependency relations between words, and efficiently search this space with an adapted beam search algorithm. In word similarity tasks for each word class, we show that our framework is both effective and efficient. Particularly, it improves the Spearman’s ρ correlation with human scores on SimLex-999 over the best previously proposed class-specific contexts by 6 (A), 6 (V) and 5 (N) ρ points. With our selected context configurations, we train on only 14% (A), 26.2% (V), and 33.6% (N) of all dependency-based contexts, resulting in a reduced training time. Our results generalise: we show that the configurations our algorithm learns for one English training setup outperform previously proposed context types in another training setup for English. Moreover, basing the configuration space on universal dependencies, it is possible to transfer the learned configurations to German and Italian. We also demonstrate improved per-class results over other context types in these two languages.
UR - http://www.scopus.com/inward/record.url?scp=85041380732&partnerID=8YFLogxK
DO - 10.18653/v1/k17-1013
M3 - Conference contribution
T3 - CoNLL 2017 - 21st Conference on Computational Natural Language Learning, Proceedings
SP - 112
EP - 122
BT - CoNLL 2017 - 21st Conference on Computational Natural Language Learning, Proceedings
PB - Association for Computational Linguistics (ACL)
T2 - 21st Conference on Computational Natural Language Learning, CoNLL 2017
Y2 - 3 August 2017 through 4 August 2017
ER -