Memory-efficient Transformers via Top-k Attention

Ankit Gupta, Guy Dar, Shaya Goodman, David Ciprut, Jonathan Berant

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Following the success of dot-product attention in Transformers, numerous approximations have recently been proposed to address its quadratic complexity with respect to the input length. While these variants are memory and compute efficient, it is not possible to directly use them with popular pre-trained language models trained using vanilla attention without an expensive corrective pre-training stage. In this work, we propose a simple yet highly accurate approximation for vanilla attention. We process the queries in chunks, and for each query, compute the top-k scores with respect to the keys. Our approach offers several advantages: (a) its memory usage is linear in the input size, similar to linear attention variants such as Performer and RFA, (b) it is a drop-in replacement for vanilla attention that does not require any corrective pre-training, and (c) it can also lead to significant memory savings in the feed-forward layers after casting them into the familiar query-key-value framework. We evaluate the quality of the top-k approximation for multi-head attention layers on the Long Range Arena benchmark, and for feed-forward layers of T5 and UnifiedQA on multiple QA datasets. We show that our approach leads to accuracy that is nearly identical to vanilla attention in multiple setups, including training from scratch, fine-tuning, and zero-shot inference.
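To illustrate the chunked top-k mechanism the abstract describes, the following PyTorch sketch processes queries in fixed-size chunks, keeps only the k largest query-key scores per query, and normalizes the softmax over those kept scores. This is a minimal illustration, not the authors' released implementation: it assumes a single attention head, 2D query/key/value tensors, and no masking, and the function name, chunk_size, and shapes are chosen for the example.

import torch

def topk_attention(Q, K, V, k, chunk_size=1024):
    """Sketch of chunked top-k attention (illustrative, single head, no masking).

    Q: (n_q, d) queries, K: (n_k, d) keys, V: (n_k, d_v) values.
    Only a (chunk_size x n_k) score matrix is materialized at a time,
    and per query only the k largest scores are kept and softmax-normalized.
    """
    d = Q.shape[-1]
    outputs = []
    for start in range(0, Q.shape[0], chunk_size):
        q = Q[start:start + chunk_size]                  # (c, d)
        scores = q @ K.transpose(-1, -2) / d ** 0.5      # (c, n_k) scaled dot products
        top_vals, top_idx = scores.topk(k, dim=-1)       # (c, k) each
        weights = torch.softmax(top_vals, dim=-1)        # softmax over the kept scores only
        gathered = V[top_idx]                            # (c, k, d_v) via index gather
        outputs.append((weights.unsqueeze(-1) * gathered).sum(dim=1))
    return torch.cat(outputs, dim=0)

# Toy usage (shapes are illustrative):
# Q = torch.randn(4096, 64); K = torch.randn(4096, 64); V = torch.randn(4096, 64)
# out = topk_attention(Q, K, V, k=32)   # out: (4096, 64)

Because the full n_q × n_k score matrix is never materialized, peak attention memory is governed by the chunk size rather than the full query length, which is what allows the method to serve as a drop-in replacement at inference or fine-tuning time.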

Original language: English
Title of host publication: SustaiNLP 2021 - 2nd Workshop on Simple and Efficient Natural Language Processing, Proceedings of SustaiNLP
Editors: Nafise Sadat Moosavi, Iryna Gurevych, Angela Fan, Thomas Wolf, Yufang Hou, Ana Marasovic, Sujith Ravi
Publisher: Association for Computational Linguistics (ACL)
Pages: 39-52
Number of pages: 14
ISBN (Electronic): 9781955917018
State: Published - 2021
Event: 2nd Workshop on Simple and Efficient Natural Language Processing, SustaiNLP 2021 - Virtual, Online
Duration: 10 Nov 2021 → …

Publication series

Name: SustaiNLP 2021 - 2nd Workshop on Simple and Efficient Natural Language Processing, Proceedings of SustaiNLP

Conference

Conference: 2nd Workshop on Simple and Efficient Natural Language Processing, SustaiNLP 2021
City: Virtual, Online
Period: 10/11/21 → …

All Science Journal Classification (ASJC) codes

  • Language and Linguistics
  • Computational Theory and Mathematics
  • Software
  • Linguistics and Language
