Abstract
Despite their impressive performance, deep convolutional neural networks (CNNs) have been shown to be sensitive to small adversarial perturbations. These nuisances, which one can barely notice, are powerful enough to fool sophisticated and well-performing classifiers, leading to ridiculous misclassification results. In this paper, we analyze the stability of state-of-the-art deep learning classification machines to adversarial perturbations, assuming that the signals belong to the (possibly multilayer) sparse representation model. We start with the convolutional sparsity model and then proceed to its multilayered version, which is tightly connected to CNNs. Our analysis links the stability of the classification under noise to the underlying structure of the signal, quantified by the sparsity of its representation under a fixed dictionary. In addition, we offer similar stability theorems for two practical pursuit algorithms, which are posed as two different deep learning architectures: the layered thresholding and the layered basis pursuit. Our analysis establishes the better robustness of the latter to adversarial attacks. We corroborate these theoretical results with numerical experiments on three datasets: MNIST, CIFAR-10 and CIFAR-100.
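The two pursuit algorithms named in the abstract can be sketched in a few lines. Below is a minimal illustration, not the authors' implementation: layered (soft) thresholding applies one thresholding step per layer, mirroring a CNN forward pass (linear operator followed by a nonlinearity), while layered basis pursuit solves a Lasso problem at each layer, here via plain ISTA iterations whose unrolling yields a deeper, more robust architecture. Dictionary sizes, threshold values and iteration counts are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, theta):
    """Elementwise soft-thresholding operator S_theta(v)."""
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def layered_thresholding(x, dictionaries, thetas):
    """Layered thresholding: one soft-thresholding step per layer,
    analogous to a CNN forward pass (filtering + nonlinearity)."""
    gamma = x
    for D, theta in zip(dictionaries, thetas):
        gamma = soft_threshold(D.T @ gamma, theta)
    return gamma

def layered_basis_pursuit(x, dictionaries, lambdas, n_iter=100):
    """Layered basis pursuit: each layer approximately solves
    min_gamma 0.5*||prev - D @ gamma||_2^2 + lam*||gamma||_1 via ISTA;
    unrolling these iterations gives a (recurrent) deep architecture."""
    prev = x
    for D, lam in zip(dictionaries, lambdas):
        L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
        gamma = np.zeros(D.shape[1])
        for _ in range(n_iter):                # ISTA iterations
            grad = D.T @ (D @ gamma - prev)
            gamma = soft_threshold(gamma - grad / L, lam / L)
        prev = gamma
    return prev

# Toy usage on a two-layer model with random (hypothetical) dictionaries.
rng = np.random.default_rng(0)
D1 = rng.standard_normal((64, 128))
D2 = rng.standard_normal((128, 256))
x = rng.standard_normal(64)
gamma_thr = layered_thresholding(x, [D1, D2], thetas=[0.5, 0.5])
gamma_bp = layered_basis_pursuit(x, [D1, D2], lambdas=[0.5, 0.5])
```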
| Original language | English |
|---|---|
| Pages (from-to) | 313-327 |
| Number of pages | 15 |
| Journal | Journal of Mathematical Imaging and Vision |
| Volume | 62 |
| Issue number | 3 |
| DOIs | |
| State | Published - 1 Apr 2020 |
Keywords
- Adversarial noise
- Sparse coding
- Theory for deep learning
All Science Journal Classification (ASJC) codes
- Statistics and Probability
- Modelling and Simulation
- Condensed Matter Physics
- Computer Vision and Pattern Recognition
- Geometry and Topology
- Applied Mathematics