AIDA: Associative In-Memory Deep Learning Accelerator

Esteban Garzon, Adam Teman, Marco Lanuzza, Leonid Yavits

Research output: Contribution to journal › Article › peer-review

Abstract

This work presents an associative in-memory deep learning processor (AIDA) for edge devices. An associative processor is a massively parallel non-von Neumann accelerator that uses memory cells for computing; the bulk of the data is never transferred outside the memory arrays for external processing. AIDA utilizes a dynamic content-addressable memory for both data storage and processing, and benefits from the sparsity and limited arithmetic precision typical of modern deep neural networks. The novel in-data processing implementation designed for the AIDA accelerator achieves a speedup of 270× over an advanced central processing unit, with more than three orders of magnitude better energy efficiency.
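
For intuition only, and not as a description of AIDA's actual circuitry, the Python sketch below simulates the compare-and-write style of computing the abstract alludes to: bit-serial addition over an associative array, where each full-adder truth-table entry is applied as one parallel compare that tags matching rows, followed by one parallel write to the tagged rows. All names and sizes here (N_ROWS, WIDTH, to_bits) are invented for illustration.

```python
import numpy as np

# Hypothetical sketch of associative (compare-and-write) computing; it does
# not reproduce AIDA's memory design. Every arithmetic step is a CAM
# "compare" that tags all matching rows in parallel, followed by a "write"
# applied only to the tagged rows.

N_ROWS, WIDTH = 8, 4                      # illustrative sizes
rng = np.random.default_rng(0)
a_vals = rng.integers(0, 2**WIDTH, N_ROWS)
b_vals = rng.integers(0, 2**WIDTH, N_ROWS)

def to_bits(vals, width):
    """Bit-serial layout: column i holds bit i of every row's operand."""
    return np.array([[(v >> i) & 1 for i in range(width)] for v in vals],
                    dtype=np.uint8)

A, B = to_bits(a_vals, WIDTH), to_bits(b_vals, WIDTH)
S = np.zeros((N_ROWS, WIDTH + 1), dtype=np.uint8)   # sum bits per row
C = np.zeros(N_ROWS, dtype=np.uint8)                # carry column

# Full-adder truth table: (a_i, b_i, c_in) -> (sum_i, c_out).
# The all-zero input is omitted because it leaves the state unchanged.
TRUTH = {(0, 0, 1): (1, 0), (0, 1, 0): (1, 0), (0, 1, 1): (0, 1),
         (1, 0, 0): (1, 0), (1, 0, 1): (0, 1), (1, 1, 0): (0, 1),
         (1, 1, 1): (1, 1)}

for i in range(WIDTH):                        # one bit position at a time
    new_S, new_C = S[:, i].copy(), C.copy()   # snapshot so compares see old state
    for (ai, bi, ci), (s_out, c_out) in TRUTH.items():
        # "Compare": tag every row whose (A_i, B_i, carry) matches the pattern.
        tag = (A[:, i] == ai) & (B[:, i] == bi) & (C == ci)
        # "Write": update only the tagged rows.
        new_S[tag], new_C[tag] = s_out, c_out
    S[:, i], C = new_S, new_C
S[:, WIDTH] = C                               # final carry-out bit

result = (S * (1 << np.arange(WIDTH + 1))).sum(axis=1)
assert np.array_equal(result, a_vals + b_vals)   # matches ordinary addition
```

In a real associative processor the tagging is a single search over all memory rows, which is where the parallelism and energy savings come from; the NumPy loop here only mimics that behavior in software.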

Original language: American English
Pages (from-to): 67-75
Number of pages: 9
Journal: IEEE Micro
Volume: 42
Issue number: 6
DOIs
State: Published - 1 Jan 2022

All Science Journal Classification (ASJC) codes

  • Software
  • Hardware and Architecture
  • Electrical and Electronic Engineering
