Supporting the Momentum Training Algorithm Using a Memristor-Based Synapse

Tzofnat Greenberg-Toledo, Roee Mazor, Ameer Haj-Ali, Shahar Kvatinsky

Research output: Contribution to journal › Article › peer-review

Abstract

Despite the increasing popularity of deep neural networks (DNNs), they cannot be trained efficiently on existing platforms, and efforts have thus been devoted to designing dedicated hardware for DNNs. In our recent work, we provided direct support for the stochastic gradient descent (SGD) training algorithm by constructing the basic element of neural networks, the synapse, using emerging technologies, namely memristors. Due to the limited performance of SGD, optimization algorithms are commonly employed in DNN training. Therefore, DNN accelerators that only support SGD might not meet DNN training requirements. In this paper, we present a memristor-based synapse that supports the commonly used momentum algorithm. Momentum significantly improves the convergence of SGD and facilitates the DNN training stage. We propose two design approaches to support momentum: 1) a hardware-friendly modification of the momentum algorithm using memory external to the synapse structure, and 2) updating each synapse with a built-in memory. Our simulations show that the proposed DNN training solutions are as accurate as training on a GPU platform while speeding up performance by 886× and decreasing energy consumption by 7×, on average.
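For reference, the momentum variant of SGD that the synapse design supports maintains a per-weight velocity term in addition to the weight itself. The sketch below shows the standard software formulation of the update rule, illustrative only; the learning rate and momentum coefficient are assumed default values, and this is not the memristor-based circuit implementation described in the paper.

```python
# Minimal sketch of SGD with momentum (standard formulation, not the
# memristor-based hardware realization from the paper).
import numpy as np

def momentum_sgd_step(w, v, grad, lr=0.01, momentum=0.9):
    """One update: v <- momentum*v - lr*grad; w <- w + v."""
    v = momentum * v - lr * grad
    w = w + v
    return w, v

# Example: update a small vector of synaptic weights.
w = np.zeros(4)                          # weights
v = np.zeros(4)                          # velocity (per-synapse memory term)
grad = np.array([0.5, -0.2, 0.1, 0.0])   # gradient from backpropagation
w, v = momentum_sgd_step(w, v, grad)
```

The velocity vector is the extra state that plain SGD lacks; the paper's two design approaches correspond to keeping this state either in memory external to the synapse array or built into each synapse.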

Original language: English
Article number: 8600725
Pages (from-to): 1571-1583
Number of pages: 13
Journal: IEEE Transactions on Circuits and Systems I: Regular Papers
Volume: 66
Issue number: 4
DOIs
State: Published - Apr 2019

Keywords

  • Memristor
  • VTEAM
  • deep neural networks
  • hardware
  • momentum
  • stochastic gradient descent
  • synapse
  • training

All Science Journal Classification (ASJC) codes

  • Electrical and Electronic Engineering
