TY - GEN
T1 - The Ultimate Combo
T2 - 16th ACM Workshop on Artificial Intelligence and Security, AISec 2024, co-located with CCS 2024
AU - Yun, Zebin
AU - Weingarten, Achi Or
AU - Ronen, Eyal
AU - Sharif, Mahmood
N1 - Publisher Copyright: © 2024 Copyright held by the owner/author(s).
PY - 2024/11/22
Y1 - 2024/11/22
N2 - To help adversarial examples generalize from surrogate machine-learning (ML) models to targets, certain transferability-based black-box evasion attacks incorporate data augmentations (e.g., random resizing). Yet, prior work has explored limited augmentations and their composition. To fill the gap, we systematically studied how data augmentation affects transferability. Specifically, we explored 46 augmentation techniques originally proposed to help ML models generalize to unseen benign samples, and assessed how they impact transferability, when applied individually or composed. Performing exhaustive search on a small subset of augmentation techniques and genetic search on all techniques, we identified augmentation combinations that help promote transferability. Extensive experiments with the ImageNet and CIFAR-10 datasets and 18 models showed that simple color-space augmentations (e.g., color to greyscale) attain high transferability when combined with standard augmentations. Furthermore, we discovered that composing augmentations impacts transferability mostly monotonically (i.e., more augmentations → ≥transferability). We also found that the best composition significantly outperformed the state of the art (e.g., 91.8% vs. ≤82.5% average transferability to adversarially trained targets on ImageNet). Lastly, our theoretical analysis, backed by empirical evidence, intuitively explains why certain augmentations promote transferability.
KW - Adversarial Examples
KW - Neural Networks
KW - Transferability
UR - http://www.scopus.com/inward/record.url?scp=85216515177&partnerID=8YFLogxK
U2 - 10.1145/3689932.3694769
DO - 10.1145/3689932.3694769
M3 - Conference contribution
T3 - AISec 2024 - Proceedings of the 2024 Workshop on Artificial Intelligence and Security, Co-Located with: CCS 2024
SP - 113
EP - 125
BT - AISec 2024 - Proceedings of the 2024 Workshop on Artificial Intelligence and Security, Co-Located with: CCS 2024
Y2 - 14 October 2024 through 18 October 2024
ER -