Adversarial robustness via noise injection in smoothed models

Yaniv Nemcovsky, Evgenii Zheltonozhskii, Chaim Baskin, Brian Chmiel, Alex M. Bronstein, Avi Mendelson

Research output: Contribution to journal › Article › peer-review

Abstract

Deep neural networks are known to be vulnerable to malicious perturbations. Current methods for improving adversarial robustness make use of either implicit or explicit regularization, with the latter usually based on adversarial training. Randomized smoothing, the averaging of a classifier's outputs over a random distribution centered at the sample, has been shown to guarantee the classifier's performance under bounded perturbations of the input. In this work, we study the application of randomized smoothing to improve performance on unperturbed data and to increase robustness to adversarial attacks. We propose combining smoothing with adversarial training and randomization approaches, and find that doing so significantly improves resilience compared to the baseline. We examine our method's performance under common white-box (FGSM, PGD) and black-box (transferable attack and NAttack) attacks on CIFAR-10 and CIFAR-100, and determine that for attacks with a low number of iterations, smoothing provides a significant performance boost that persists even for perturbations with a high attack norm ϵ. For example, under a PGD-10 attack on CIFAR-10 using Wide-ResNet28-4, we achieve 60.3% accuracy for infinity norm ϵ = 8/255 and 13.1% accuracy for ϵ = 35/255, outperforming previous art by 3% and 6%, respectively. The latter is nearly twice the accuracy of prior art at ϵ = 35/255, and the margin grows further for perturbations with higher infinity norm. An implementation of the proposed method is provided at https://github.com/yanemcovsky/SIAM.
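For readers unfamiliar with the smoothing step described in the abstract, the following is a minimal sketch of Monte Carlo randomized smoothing at inference time: the classifier's softmax outputs are averaged over Gaussian perturbations centered at the input. This is not taken from the authors' SIAM repository; the function name `smoothed_predict` and the values of `sigma` and `n_samples` are illustrative placeholders.

```python
import torch

def smoothed_predict(model, x, sigma=0.25, n_samples=8):
    # Monte Carlo estimate of the smoothed classifier's output:
    # average softmax probabilities over Gaussian noise injected
    # around the input x (sigma and n_samples are illustrative).
    model.eval()
    with torch.no_grad():
        probs = 0.0
        for _ in range(n_samples):
            noise = sigma * torch.randn_like(x)  # noise centered at x
            probs = probs + torch.softmax(model(x + noise), dim=1)
    return probs / n_samples  # averaged class probabilities
```

Predictions would then be taken as the argmax of the averaged probabilities; increasing `n_samples` reduces the variance of the estimate at a proportional cost in inference time.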

Original language: English
Pages (from-to): 9483-9498
Number of pages: 16
Journal: Applied Intelligence
Volume: 53
Issue number: 8
State: Published - 1 Apr 2023

Keywords

  • Adversarial examples
  • Adversarial robustness
  • Computer vision
  • Neural networks
  • Noise injection
  • Randomized smoothing

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
