TY - GEN
T1 - Scalable attentive sentence-pair modeling via distilled sentence embedding
AU - Barkan, Oren
AU - Razin, Noam
AU - Malkiel, Itzik
AU - Katz, Ori
AU - Caciularu, Avi
AU - Koenigstein, Noam
N1 - Publisher Copyright: © 2020, Association for the Advancement of Artificial Intelligence.
PY - 2020
Y1 - 2020
AB - Recent state-of-the-art natural language understanding models, such as BERT and XLNet, score a pair of sentences (A and B) using multiple cross-attention operations - a process in which each word in sentence A attends to all words in sentence B and vice versa. As a result, computing the similarity between a query sentence and a set of candidate sentences requires the propagation of all query-candidate sentence pairs through a stack of cross-attention layers. This exhaustive process becomes computationally prohibitive when the number of candidate sentences is large. In contrast, sentence embedding techniques learn a sentence-to-vector mapping and compute the similarity between the sentence vectors via simple elementary operations. In this paper, we introduce Distilled Sentence Embedding (DSE) - a model that is based on knowledge distillation from cross-attentive models, focusing on sentence-pair tasks. The outline of DSE is as follows: Given a cross-attentive teacher model (e.g. a fine-tuned BERT), we train a sentence embedding-based student model to reconstruct the sentence-pair scores obtained by the teacher model. We empirically demonstrate the effectiveness of DSE on five GLUE sentence-pair tasks. DSE significantly outperforms several ELMO variants and other sentence embedding methods, while accelerating computation of the query-candidate sentence-pair similarities by several orders of magnitude, with an average relative degradation of 4.6% compared to BERT. Furthermore, we show that DSE produces sentence embeddings that reach state-of-the-art performance on universal sentence representation benchmarks. Our code is made publicly available at https://github.com/microsoft/Distilled-Sentence-Embedding.
UR - http://www.scopus.com/inward/record.url?scp=85089238392&partnerID=8YFLogxK
M3 - Conference contribution
T3 - AAAI 2020 - 34th AAAI Conference on Artificial Intelligence
SP - 3235
EP - 3242
BT - AAAI 2020 - 34th AAAI Conference on Artificial Intelligence
T2 - 34th AAAI Conference on Artificial Intelligence, AAAI 2020
Y2 - 7 February 2020 through 12 February 2020
ER -