Gradual training method for denoising auto encoders

Alexander Kalmanovich, Gal Chechik

Research output: Contribution to conference › Paper › peer-review

Abstract

Stacked denoising auto encoders (DAEs) are well known to learn useful deep representations, which can be used to improve supervised training by initializing a deep network. We investigate a training scheme for deep DAEs in which layers are added gradually and all existing layers keep adapting as new layers are added. We show that in the regime of mid-sized datasets, this gradual training provides a small but consistent improvement over stacked training in both reconstruction quality and classification error on the MNIST and CIFAR datasets.
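The scheme described in the abstract — adding DAE layers one at a time while every layer already in the stack keeps adapting — can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: the architecture (untied weights, sigmoid units, masking noise, full-batch updates) and all hyperparameters are assumptions made for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class Layer:
    """One autoencoder level: an encoder and a matching decoder (untied weights)."""
    def __init__(self, n_in, n_hid):
        s = 1.0 / np.sqrt(n_in)
        self.We = rng.uniform(-s, s, (n_in, n_hid)); self.be = np.zeros(n_hid)
        self.Wd = rng.uniform(-s, s, (n_hid, n_in)); self.bd = np.zeros(n_in)

def forward(layers, x):
    """Encode up through every layer, then decode back down; return all activations."""
    acts = [x]
    for L in layers:                        # encoder pass
        acts.append(sigmoid(acts[-1] @ L.We + L.be))
    for L in reversed(layers):              # decoder pass
        acts.append(sigmoid(acts[-1] @ L.Wd + L.bd))
    return acts

def train(layers, X, epochs=30, lr=0.5, noise=0.3):
    """Full-batch gradient descent on reconstructing the clean X from a
    corrupted copy. Key point of the gradual scheme: *every* layer is
    updated, so earlier layers keep adapting after deeper ones are added."""
    n = len(layers)
    for _ in range(epochs):
        Xn = X * (rng.random(X.shape) > noise)          # masking corruption
        acts = forward(layers, Xn)
        # squared-error loss, backprop through the sigmoid output
        delta = (acts[-1] - X) * acts[-1] * (1.0 - acts[-1])
        for j, L in enumerate(layers):                  # backprop decoder side
            inp = acts[2 * n - 1 - j]
            gW, gb = inp.T @ delta / len(X), delta.mean(axis=0)
            delta = (delta @ L.Wd.T) * inp * (1.0 - inp)
            L.Wd -= lr * gW; L.bd -= lr * gb
        for i in range(n, 0, -1):                       # backprop encoder side
            L, inp = layers[i - 1], acts[i - 1]
            gW, gb = inp.T @ delta / len(X), delta.mean(axis=0)
            delta = (delta @ L.We.T) * inp * (1.0 - inp)
            L.We -= lr * gW; L.be -= lr * gb

def gradual_train(X, hidden_sizes, epochs_per_phase=30):
    """Add one layer per phase, then retrain the whole current stack jointly
    (contrast with stacked training, which freezes previously trained layers)."""
    layers = []
    for n_hid in hidden_sizes:
        n_in = X.shape[1] if not layers else layers[-1].We.shape[1]
        layers.append(Layer(n_in, n_hid))
        train(layers, X, epochs=epochs_per_phase)       # all layers keep adapting
    return layers

X = rng.random((64, 8))                    # toy data in [0, 1]
layers = gradual_train(X, [6, 4])
recon = forward(layers, X)[-1]
err = float(np.mean((recon - X) ** 2))
```

Stacked training would instead train each new layer on the frozen output of the layers below it; here the call to `train` inside the phase loop re-optimizes the entire stack each time a layer is added.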

Original language: English
State: Published - 2015
Event: 3rd International Conference on Learning Representations, ICLR 2015 - San Diego, United States
Duration: 7 May 2015 - 9 May 2015

Conference

Conference: 3rd International Conference on Learning Representations, ICLR 2015
Country/Territory: United States
City: San Diego
Period: 7/05/15 - 9/05/15

All Science Journal Classification (ASJC) codes

  • Education
  • Linguistics and Language
  • Language and Linguistics
  • Computer Science Applications
