Adversarial Examples Exist in Two-Layer ReLU Networks for Low Dimensional Linear Subspaces

Odelia Melamed, Gilad Yehudai, Gal Vardi

Research output: Contribution to journal › Conference article › peer-review

Abstract

Despite a great deal of research, it is still not well understood why trained neural networks are highly vulnerable to adversarial examples. In this work we focus on two-layer neural networks trained on data that lie on a low-dimensional linear subspace. We show that standard gradient methods lead to non-robust neural networks, namely, networks with large gradients in directions orthogonal to the data subspace, which are therefore susceptible to small adversarial L2-perturbations in these directions. Moreover, we show that decreasing the initialization scale of the training algorithm, or adding L2 regularization, can make the trained network more robust to adversarial perturbations orthogonal to the data.
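
The phenomenon the abstract describes lends itself to a quick numerical illustration. Below is a minimal sketch, not the authors' code: the architecture, dimensions, labels, and hyperparameters are all assumptions chosen for demonstration. It trains a two-layer ReLU network on synthetic data confined to a low-dimensional linear subspace, then compares the input gradient's norm within the subspace to its norm in the orthogonal complement, where the paper predicts large gradients and hence small adversarial L2-perturbations.

```python
# Illustrative sketch only; all choices below are assumptions for demonstration.
import torch
import torch.nn as nn

torch.manual_seed(0)
d, k, n, width = 100, 5, 500, 200  # ambient dim, subspace dim, samples, hidden width

# Data confined to a random k-dimensional linear subspace of R^d.
basis, _ = torch.linalg.qr(torch.randn(d, k))  # d x k matrix with orthonormal columns
z = torch.randn(n, k)                          # subspace coordinates
X = z @ basis.T                                # inputs lie exactly on the subspace
y = torch.sign(z[:, :1])                       # labels depend only on subspace coordinates

# Two-layer ReLU network trained with plain full-batch gradient descent.
net = nn.Sequential(nn.Linear(d, width), nn.ReLU(), nn.Linear(width, 1))
opt = torch.optim.SGD(net.parameters(), lr=0.1)
for _ in range(500):
    opt.zero_grad()
    nn.functional.mse_loss(net(X), y).backward()
    opt.step()

# Split the input gradient at a training point into its on-subspace and
# orthogonal components; a large orthogonal component indicates vulnerability
# to small L2-perturbations off the data subspace.
x = X[0].clone().requires_grad_(True)
net(x).sum().backward()
g = x.grad
P = basis @ basis.T                            # orthogonal projector onto the subspace
g_on, g_off = P @ g, g - P @ g
print(f"grad norm on subspace: {g_on.norm():.3f}  orthogonal: {g_off.norm():.3f}")
```

Under this toy setup, one can also rerun the script with a smaller initialization scale or an added L2 weight penalty to probe the abstract's second claim that these choices shrink the orthogonal gradient component.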

Original language: English
Pages (from-to): 5028-5049
Number of pages: 22
Journal: Advances in Neural Information Processing Systems
Volume: 36
State: Published - 2023
Event: 37th Conference on Neural Information Processing Systems, NeurIPS 2023 - New Orleans, United States
Duration: 10 Dec 2023 - 16 Dec 2023

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing
