TY - GEN
T1 - Topics to avoid: Demoting latent confounds in text classification
T2 - 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019
AU - Kumar, Sachin
AU - Wintner, Shuly
AU - Smith, Noah A.
AU - Tsvetkov, Yulia
N1 - Funding Information: The authors acknowledge helpful input from the anonymous reviewers. This work was supported in part by NSF grants IIS-1812327 and IIS-1813153, by grant no. 2017699 from the United States-Israel Binational Science Foundation (BSF), and by grant no. LU 856/13-1 from the Deutsche Forschungsgemeinschaft. Finally, the authors also thank Anjalie Field, Biswajit Paria, Ella Rabinovich, and Gili Goldin for helpful discussions. Publisher Copyright: © 2019 Association for Computational Linguistics
PY - 2019
Y1 - 2019
N2 - Despite impressive performance on many text classification tasks, deep neural networks tend to learn frequent superficial patterns that are specific to the training data and do not always generalize well. In this work, we observe this limitation with respect to the task of native language identification. We find that standard text classifiers which perform well on the test set end up learning topical features which are confounds of the prediction task (e.g., if the input text mentions Sweden, the classifier predicts that the author's native language is Swedish). We propose a method that represents the latent topical confounds and a model which “unlearns” confounding features by predicting both the label of the input text and the confound; but we train the two predictors adversarially in an alternating fashion to learn a text representation that predicts the correct label but is less prone to using information about the confound. We show that this model generalizes better and learns features that are indicative of the writing style rather than the content.
AB - Despite impressive performance on many text classification tasks, deep neural networks tend to learn frequent superficial patterns that are specific to the training data and do not always generalize well. In this work, we observe this limitation with respect to the task of native language identification. We find that standard text classifiers which perform well on the test set end up learning topical features which are confounds of the prediction task (e.g., if the input text mentions Sweden, the classifier predicts that the author's native language is Swedish). We propose a method that represents the latent topical confounds and a model which “unlearns” confounding features by predicting both the label of the input text and the confound; but we train the two predictors adversarially in an alternating fashion to learn a text representation that predicts the correct label but is less prone to using information about the confound. We show that this model generalizes better and learns features that are indicative of the writing style rather than the content.
UR - http://www.scopus.com/inward/record.url?scp=85084296995&partnerID=8YFLogxK
M3 - Conference contribution
T3 - EMNLP-IJCNLP 2019 - 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing, Proceedings of the Conference
SP - 4153
EP - 4163
BT - EMNLP-IJCNLP 2019 - 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing, Proceedings of the Conference
Y2 - 3 November 2019 through 7 November 2019
ER -