Abstract
In recent years, Neural Networks (NNs) have become widely popular for executing various machine learning algorithms. Training an NN is computationally intensive since it requires numerous multiplications of the matrices that represent the synaptic weights. It is therefore appealing to build a hardware-based NN accelerator that exploits parallelism for efficient computation. Recently, we proposed a compact circuit for a non-volatile synaptic weight based on two CMOS transistors and a memristor. In this paper, we present a fully analog NN design based on our previously proposed synapse, with complete designs of the different layers and their supporting CMOS circuits. We show that the presented NN significantly reduces area compared to a CMOS-based NN while executing online gradient training with accuracy similar to a software implementation and with improved computational speed.
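The "online gradient training" named in the abstract refers to per-sample (stochastic) gradient descent with backpropagation. As a point of reference for the algorithm being accelerated, not the paper's analog circuit, a minimal NumPy sketch might look like the following; the layer sizes, learning rate, and XOR toy task are illustrative assumptions:

```python
# Minimal sketch of online (per-sample) gradient-descent training for a
# two-layer network. This illustrates the algorithm the abstract refers
# to, NOT the paper's analog implementation; all hyperparameters here
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Synaptic weight matrices: training repeatedly multiplies by these,
# which is the costly operation a hardware accelerator parallelizes.
W1 = rng.normal(scale=0.5, size=(2, 4))   # input -> hidden
W2 = rng.normal(scale=0.5, size=(4, 1))   # hidden -> output

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

eta = 0.5  # learning rate
for epoch in range(5000):
    for x, y in zip(X, Y):               # online: update after each sample
        h = sigmoid(x @ W1)              # forward pass, hidden layer
        out = sigmoid(h @ W2)            # forward pass, output layer
        # Backpropagate the squared error through both layers
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= eta * np.outer(h, d_out)   # per-sample weight updates
        W1 -= eta * np.outer(x, d_h)

print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))  # approaches [0, 1, 1, 0]
```

Both the forward and backward passes are dominated by multiplications with the weight matrices `W1` and `W2`, which is precisely the operation a memristor-based synapse array can compute in parallel in the analog domain.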
| Original language | English |
|---|---|
| Title of host publication | 2016 IEEE International Symposium on Circuits and Systems, ISCAS 2016 |
| Pages | 1394-1397 |
| Number of pages | 4 |
| DOIs | |
| State | Published - 29 Jul 2016 |
| Event | 2016 IEEE International Symposium on Circuits and Systems, ISCAS 2016 - Montreal, Canada |
| Duration | 22 May 2016 → 25 May 2016 |
Conference
| Conference | 2016 IEEE International Symposium on Circuits and Systems, ISCAS 2016 |
|---|---|
| Country/Territory | Canada |
| City | Montreal |
| Period | 22/05/16 → 25/05/16 |
Keywords
- CMOS
- Multilayer Neural Networks
- RRAM
- backpropagation
- machine learning
- memristor
- neuromorphic
All Science Journal Classification (ASJC) codes
- Electrical and Electronic Engineering