TY - GEN
T1 - The Adversarial Implications of Variable-Time Inference
AU - Biton, Dudi
AU - Misra, Aditi
AU - Levy, Efrat
AU - Kotak, Jaidip
AU - Bitton, Ron
AU - Schuster, Roei
AU - Papernot, Nicolas
AU - Elovici, Yuval
AU - Nassi, Ben
N1 - Publisher Copyright: © 2023 Owner/Author.
PY - 2023/11/30
Y1 - 2023/11/30
N2 - Machine learning (ML) models are known to be vulnerable to a number of attacks that target the integrity of their predictions or the privacy of their training data. To carry out these attacks, a black-box adversary must typically possess the ability to query the model and observe its outputs (e.g., labels). In this work, we demonstrate, for the first time, the ability to enhance such decision-based attacks. To accomplish this, we present an approach that exploits a novel side channel in which the adversary simply measures the execution time of the algorithm used to post-process the predictions of the ML model under attack. The leakage of inference-state elements into algorithmic timing side channels has never been studied before, and we have found that it can contain rich information that facilitates superior timing attacks that significantly outperform attacks based solely on label outputs. In a case study, we investigate leakage from the non-maximum suppression (NMS) algorithm, which plays a crucial role in the operation of object detectors. In our examination of the timing side-channel vulnerabilities associated with this algorithm, we identified the potential to enhance decision-based attacks. We demonstrate attacks against the YOLOv3 detector, leveraging the timing leakage to successfully evade object detection using adversarial examples, and perform dataset inference. Our experiments show that our adversarial examples exhibit superior perturbation quality compared to a decision-based attack. In addition, we present a new threat model in which dataset inference based solely on timing leakage is performed. To address the timing leakage vulnerability inherent in the NMS algorithm, we explore the potential and limitations of implementing constant-time inference passes as a mitigation strategy.
AB - Machine learning (ML) models are known to be vulnerable to a number of attacks that target the integrity of their predictions or the privacy of their training data. To carry out these attacks, a black-box adversary must typically possess the ability to query the model and observe its outputs (e.g., labels). In this work, we demonstrate, for the first time, the ability to enhance such decision-based attacks. To accomplish this, we present an approach that exploits a novel side channel in which the adversary simply measures the execution time of the algorithm used to post-process the predictions of the ML model under attack. The leakage of inference-state elements into algorithmic timing side channels has never been studied before, and we have found that it can contain rich information that facilitates superior timing attacks that significantly outperform attacks based solely on label outputs. In a case study, we investigate leakage from the non-maximum suppression (NMS) algorithm, which plays a crucial role in the operation of object detectors. In our examination of the timing side-channel vulnerabilities associated with this algorithm, we identified the potential to enhance decision-based attacks. We demonstrate attacks against the YOLOv3 detector, leveraging the timing leakage to successfully evade object detection using adversarial examples, and perform dataset inference. Our experiments show that our adversarial examples exhibit superior perturbation quality compared to a decision-based attack. In addition, we present a new threat model in which dataset inference based solely on timing leakage is performed. To address the timing leakage vulnerability inherent in the NMS algorithm, we explore the potential and limitations of implementing constant-time inference passes as a mitigation strategy.
KW - adversarial attacks
KW - adversarial machine learning
KW - NMS algorithm
KW - privacy
KW - side-channel attacks
UR - http://www.scopus.com/inward/record.url?scp=85179584891&partnerID=8YFLogxK
U2 - 10.1145/3605764.3623912
DO - 10.1145/3605764.3623912
M3 - Conference contribution
T3 - AISec 2023 - Proceedings of the 16th ACM Workshop on Artificial Intelligence and Security
SP - 103
EP - 114
BT - AISec 2023 - Proceedings of the 16th ACM Workshop on Artificial Intelligence and Security
T2 - 16th ACM Workshop on Artificial Intelligence and Security, AISec 2023, co-located with CCS 2023
Y2 - 30 November 2023
ER -