Multiscale Self Attentive Convolutions for Vision and Language Modeling.

Research output: Contribution to journal › Article › peer-review

Abstract

Self attention mechanisms have become a key building block in many state-of-the-art language understanding models. In this paper, we show that the self attention operator can be formulated in terms of 1x1 convolution operations. Following this observation, we propose several novel operators: First, we introduce a 2D version of self attention that is applicable for 2D signals such as images. Second, we present the 1D and 2D Self Attentive Convolutions (SAC) operator that generalizes self attention beyond 1x1 convolutions to 1xm and nxm convolutions, respectively. While 1D and 2D self attention operate on individual words and pixels, SAC operates on m-grams and image patches, respectively. Third, we present a multiscale version of SAC (MSAC) which analyzes the input by employing multiple SAC operators that vary by filter size, in parallel. Finally, we explain how MSAC can be utilized for vision and language modeling, and further harness MSAC to form a cross attentive image similarity machinery.
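The abstract's central observation is that standard self attention can be written in terms of 1x1 convolutions: the query, key, and value projections are per-position linear maps over the channel dimension, which is exactly what a 1x1 convolution computes. The following is a minimal NumPy sketch of a 2D self attention operator built that way; the function and weight names (`conv1x1`, `self_attention_2d`, `wq`, `wk`, `wv`) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def conv1x1(x, w):
    # A 1x1 convolution over a (H, W, C_in) feature map is just a
    # per-position linear projection of the channel vector.
    return x @ w  # (H, W, C_out)

def self_attention_2d(x, wq, wk, wv):
    # Illustrative 2D self attention assembled from 1x1 convolutions,
    # following the abstract's observation. Every pixel attends to
    # every other pixel; weight matrices are hypothetical placeholders.
    H, W, _ = x.shape
    q = conv1x1(x, wq).reshape(H * W, -1)  # queries, one per pixel
    k = conv1x1(x, wk).reshape(H * W, -1)  # keys
    v = conv1x1(x, wv).reshape(H * W, -1)  # values
    scores = q @ k.T / np.sqrt(q.shape[-1])          # scaled dot product
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)         # softmax over pixels
    return (attn @ v).reshape(H, W, -1)
```

The SAC generalization described in the abstract would replace the 1x1 projections above with 1xm or nxm convolutions, so that each query/key/value summarizes an m-gram or image patch rather than a single word or pixel; MSAC would run several such operators with different filter sizes in parallel.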
Original language: Undefined/Unknown
Journal: CoRR
Volume: abs/1912.01521
State: Published - 2019

Keywords

  • Computation and Language
  • cs.CL
  • Computer Vision and Pattern Recognition
  • cs.CV
  • Machine Learning
  • stat.ML
