How Much Training Data Is Memorized in Overparameterized Autoencoders? An Inverse Problem Perspective on Memorization Evaluation

Koren Abitbul, Yehuda Dar

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Overparameterized autoencoder models often memorize their training data. For image data, memorization is often examined by using the trained autoencoder to recover missing regions in its training images (images that were used only in their complete forms during training). In this paper, we propose an inverse problem perspective for the study of memorization. Given a degraded training image, we define the recovery of the original training image as an inverse problem and formulate it as an optimization task. In our inverse problem, we use the trained autoencoder to implicitly define a regularizer for the particular training dataset from which we aim to recover images. We turn this intricate optimization task into a practical method that iteratively applies the trained autoencoder together with relatively simple computations that estimate and address the unknown degradation operator. We evaluate our method for blind inpainting, where the goal is to recover training images degraded by many missing pixels in an unknown pattern. We examine various deep autoencoder architectures, such as fully connected and U-Net (with various nonlinearities and at diverse training loss values), and show that our method significantly outperforms previous memorization-evaluation methods that recover training data from autoencoders. Importantly, our method also greatly improves recovery performance in settings that were previously considered highly challenging, or even impractical, for such recovery and memorization evaluation.
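The abstract describes an iterative scheme that alternates between applying the trained autoencoder (as an implicit, plug-and-play-style regularizer) and simple computations that estimate the unknown degradation. As a rough illustration only, here is a minimal toy sketch of such a loop for blind inpainting; the function name `blind_inpaint`, the threshold `mask_thresh`, and the toy "memorizing" autoencoder in the usage below are assumptions for illustration, not the authors' actual algorithm.

```python
import numpy as np

def blind_inpaint(y, autoencoder, num_iters=50, mask_thresh=0.5):
    """Toy plug-and-play-style loop: recover an image from observation y
    whose missing-pixel pattern is unknown, using a trained autoencoder
    as an implicit regularizer (illustrative sketch only)."""
    x = y.copy()
    for _ in range(num_iters):
        # Regularization step: pass the current estimate through the
        # trained autoencoder, which acts as an implicit prior on the
        # training data it memorized.
        x_reg = autoencoder(x)
        # Degradation estimation step: pixels where the observation
        # strongly disagrees with the regularized estimate are treated
        # as missing (estimated mask of the unknown degradation).
        mask = (np.abs(y - x_reg) < mask_thresh).astype(float)
        # Data-fidelity step: keep pixels judged observed from y, and
        # fill the estimated missing pixels from the autoencoder output.
        x = mask * y + (1.0 - mask) * x_reg
    return x
```

For example, with a toy autoencoder that pulls every input halfway toward a stored "training image" (a crude stand-in for memorization), the loop progressively fills a zeroed-out pixel back toward the stored value.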
Original language: American English
Title of host publication: Machine Learning and Knowledge Discovery in Databases. Research Track
Subtitle of host publication: European Conference, ECML PKDD 2024, Vilnius, Lithuania, September 9–13, 2024, Proceedings, Part II
Editors: A. Bifet, J. Davis, T. Krilavicius, M. Kull, E. Ntoutsi, I. Zliobaite
Pages: 321–339
Number of pages: 19
ISBN (Electronic): 9783031703447
DOIs
State: Published - Aug 2024

Keywords

  • Inverse problems
  • Memorization
  • Overparameterized autoencoders
  • Plug-and-play priors
