TY - JOUR
T1 - Adversarial robustness via noise injection in smoothed models
AU - Nemcovsky, Yaniv
AU - Zheltonozhskii, Evgenii
AU - Baskin, Chaim
AU - Chmiel, Brian
AU - Bronstein, Alex M.
AU - Mendelson, Avi
N1 - Publisher Copyright: © 2022, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
PY - 2023/4/1
Y1 - 2023/4/1
N2 - Deep neural networks are known to be vulnerable to malicious perturbations. Current methods for improving adversarial robustness make use of either implicit or explicit regularization, with the latter usually based on adversarial training. Randomized smoothing, the averaging of the classifier outputs over a random distribution centered in the sample, has been shown to guarantee a classifier’s performance subject to bounded perturbations of the input. In this work, we study the application of randomized smoothing to improve performance on unperturbed data and to increase robustness to adversarial attacks. We propose to combine smoothing with adversarial training and randomization approaches, and find that doing so significantly improves resilience compared to the baseline. We examine our method’s performance on common white-box (FGSM, PGD) and black-box (transferable attack and NAttack) attacks on CIFAR-10 and CIFAR-100, and determine that for a low number of iterations, smoothing provides a significant performance boost that persists even for perturbations with a high attack norm, ϵ. For example, under a PGD-10 attack on CIFAR-10 using Wide-ResNet28-4, we achieve 60.3% accuracy for infinity norm ϵ∞ = 8/255 and 13.1% accuracy for ϵ∞ = 35/255, outperforming previous art by 3% and 6%, respectively. At ϵ∞ = 35/255 we achieve nearly twice the accuracy of previous art, and even more for perturbations with a higher infinity norm. An implementation of the proposed method is available at https://github.com/yanemcovsky/SIAM.
AB - Deep neural networks are known to be vulnerable to malicious perturbations. Current methods for improving adversarial robustness make use of either implicit or explicit regularization, with the latter usually based on adversarial training. Randomized smoothing, the averaging of the classifier outputs over a random distribution centered in the sample, has been shown to guarantee a classifier’s performance subject to bounded perturbations of the input. In this work, we study the application of randomized smoothing to improve performance on unperturbed data and to increase robustness to adversarial attacks. We propose to combine smoothing with adversarial training and randomization approaches, and find that doing so significantly improves resilience compared to the baseline. We examine our method’s performance on common white-box (FGSM, PGD) and black-box (transferable attack and NAttack) attacks on CIFAR-10 and CIFAR-100, and determine that for a low number of iterations, smoothing provides a significant performance boost that persists even for perturbations with a high attack norm, ϵ. For example, under a PGD-10 attack on CIFAR-10 using Wide-ResNet28-4, we achieve 60.3% accuracy for infinity norm ϵ∞ = 8/255 and 13.1% accuracy for ϵ∞ = 35/255, outperforming previous art by 3% and 6%, respectively. At ϵ∞ = 35/255 we achieve nearly twice the accuracy of previous art, and even more for perturbations with a higher infinity norm. An implementation of the proposed method is available at https://github.com/yanemcovsky/SIAM.
KW - Adversarial examples
KW - Adversarial robustness
KW - Computer vision
KW - Neural networks
KW - Noise injection
KW - Randomized smoothing
UR - http://www.scopus.com/inward/record.url?scp=85135841867&partnerID=8YFLogxK
U2 - 10.1007/s10489-022-03423-5
DO - 10.1007/s10489-022-03423-5
M3 - Article
SN - 0924-669X
VL - 53
SP - 9483
EP - 9498
JO - Applied Intelligence
JF - Applied Intelligence
IS - 8
ER -