Pivotal Auto-Encoder via Self-Normalizing ReLU

Nelson Goldenstein, Jeremias Sulam, Yaniv Romano

Research output: Contribution to journal › Article › peer-review

Abstract

Sparse auto-encoders are useful for extracting low-dimensional representations from high-dimensional data. However, their performance degrades sharply when the input noise at test time differs from the noise employed during training. This limitation hinders the applicability of auto-encoders in real-world scenarios where the level of noise in the input is unpredictable. In this paper, we formalize single-hidden-layer sparse auto-encoders as a transform learning problem. Leveraging the transform modeling interpretation, we propose an optimization problem that leads to a predictive model invariant to the noise level at test time. In other words, the same pre-trained model is able to generalize to different noise levels. The proposed optimization algorithm, derived from the square root lasso, is translated into a new, computationally efficient auto-encoding architecture. After proving that our new method is invariant to the noise level, we evaluate our approach by training networks using the proposed architecture for denoising tasks. Our experimental results demonstrate that the trained models yield a significant improvement in stability against varying types of noise compared to commonly used architectures.
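
For context on the pivotal property the abstract invokes: in the standard square root lasso, the regularization weight can be chosen independently of the noise level, whereas the lasso's weight must be tuned to it. The display below shows the textbook formulations for noisy measurements y = Dx + w with noise level σ; it is background, not necessarily the paper's exact objective.

\[
\begin{aligned}
&\text{Lasso:} && \min_{x}\ \tfrac{1}{2}\,\|y - Dx\|_2^2 + \lambda\,\|x\|_1,
&& \text{optimal } \lambda \text{ scales with } \sigma, \\
&\text{Square root lasso:} && \min_{x}\ \|y - Dx\|_2 + \lambda\,\|x\|_1,
&& \lambda \text{ can be chosen independently of } \sigma \text{ (pivotal).}
\end{aligned}
\]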
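
To make the architectural idea concrete, here is a minimal PyTorch sketch of a single-hidden-layer denoising auto-encoder whose ReLU threshold is rescaled by the norm of the noisy input, one plausible way to obtain the noise-level invariance the abstract describes. The class names, the normalization by the input's l2 norm, and all parameters are illustrative assumptions, not the authors' implementation.

# A minimal sketch (not the paper's released code) of a sparse auto-encoder
# with a norm-adaptive shrinkage threshold, in the spirit of the square root
# lasso. The exact normalization used here is an assumption for illustration.
import torch
import torch.nn as nn


class SelfNormalizingReLU(nn.Module):
    """ReLU with a learnable threshold scaled by the l2 norm of the input.

    Scaling the threshold by ||y||_2 lets the amount of shrinkage adapt to
    the (unknown) noise level, mirroring the pivotal property of the
    square root lasso.
    """

    def __init__(self, num_features: int):
        super().__init__()
        self.threshold = nn.Parameter(torch.zeros(num_features))

    def forward(self, z: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # Threshold grows with the energy of the noisy input y, so no
        # retuning is needed when the test-time noise level changes.
        scale = y.norm(dim=-1, keepdim=True)
        return torch.relu(z - self.threshold * scale)


class PivotalAutoEncoder(nn.Module):
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.encoder = nn.Linear(dim, hidden, bias=False)  # analysis transform W
        self.act = SelfNormalizingReLU(hidden)
        self.decoder = nn.Linear(hidden, dim, bias=False)  # synthesis transform D

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        code = self.act(self.encoder(y), y)  # sparse, noise-adaptive code
        return self.decoder(code)            # denoised estimate


if __name__ == "__main__":
    model = PivotalAutoEncoder(dim=64, hidden=256)
    y = torch.randn(8, 64)       # batch of noisy signals
    x_hat = model(y)
    print(x_hat.shape)           # torch.Size([8, 64])

The key design point in this sketch is that the learned threshold multiplies a data-dependent scale rather than a fixed bias, so the same trained weights apply the appropriate amount of shrinkage at any noise level.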

Original language: English
Pages (from-to): 1-12
Number of pages: 12
Journal: IEEE Transactions on Signal Processing
State: Accepted/In press - 2024

Keywords

  • Computational modeling
  • Encoding
  • Noise
  • Noise level
  • Sparse approximation
  • Sparse coding
  • Training
  • Transforms
  • Sparse auto-encoders
  • Square root lasso
  • Transform learning

All Science Journal Classification (ASJC) codes

  • Signal Processing
  • Electrical and Electronic Engineering
