On the Relationship Between Universal Adversarial Attacks and Sparse Representations

Dana Weitzner, Raja Giryes

Research output: Contribution to journal › Article › peer-review

Abstract

The prominent success of neural networks, mainly in computer vision tasks, is increasingly shadowed by their sensitivity to small, barely perceptible adversarial perturbations of the input image. In this article, we aim to explain this vulnerability through the framework of sparsity. We show the connection between adversarial attacks and sparse representations, with a focus on explaining the universality and transferability of adversarial examples in neural networks. To this end, we show that sparse coding algorithms, among them the neural-network-based learned iterative shrinkage-thresholding algorithm (LISTA), suffer from this sensitivity, and that common attacks on neural networks can be expressed as attacks on the sparse representation of the input image. The phenomenon we observe also holds when the network is agnostic to the sparse representation and dictionary, and thus provides a possible explanation for the universality and transferability of adversarial attacks.
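The sensitivity of sparse coding that the abstract refers to can be illustrated with a minimal sketch. The snippet below is not the paper's method: it runs plain ISTA (the non-learned counterpart of LISTA) on a random Gaussian dictionary, then re-runs it on an input with a small random perturbation as a stand-in for a crafted adversarial one, and measures how much the recovered sparse code moves. All dimensions, the regularization weight `lam`, and the 5% perturbation budget are illustrative choices of ours.

```python
import numpy as np

def soft_threshold(x, t):
    # Elementwise soft-thresholding, the proximal operator of the l1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, n_iter=200):
    # ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - (A.T @ (A @ x - b)) / L, lam / L)
    return x

rng = np.random.default_rng(0)
n, m, s = 20, 50, 3                      # signal dim, dictionary size, sparsity
A = rng.standard_normal((n, m)) / np.sqrt(n)   # random dictionary (illustrative)
x_true = np.zeros(m)
x_true[rng.choice(m, s, replace=False)] = rng.standard_normal(s)
b = A @ x_true                           # clean input with an exact sparse code

x_clean = ista(A, b, lam=0.05)

# Perturb the input by 5% of its norm in a random direction; a crafted
# adversarial direction would move the code at least this much.
delta = rng.standard_normal(n)
delta *= 0.05 * np.linalg.norm(b) / np.linalg.norm(delta)
x_adv = ista(A, b + delta, lam=0.05)

rel_change = np.linalg.norm(x_adv - x_clean) / np.linalg.norm(x_clean)
print("relative change in sparse code:", rel_change)
print("nonzeros (clean):", np.count_nonzero(np.abs(x_clean) > 1e-3))
```

Even this untargeted perturbation visibly shifts the recovered code; the paper's point is that a perturbation crafted against the sparse representation does so far more effectively, independently of the downstream network.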

Original language: English
Pages (from-to): 99-107
Number of pages: 9
Journal: IEEE Open Journal of Signal Processing
Volume: 4
DOIs
State: Published - 2023

Keywords

  • Adversarial robustness
  • deep neural networks
  • sparse representations

All Science Journal Classification (ASJC) codes

  • Signal Processing
