The Adversarial Implications of Variable-Time Inference

Dudi Biton, Aditi Misra, Efrat Levy, Jaidip Kotak, Ron Bitton, Roei Schuster, Nicolas Papernot, Yuval Elovici, Ben Nassi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Machine learning (ML) models are known to be vulnerable to a number of attacks that target the integrity of their predictions or the privacy of their training data. To carry out these attacks, a black-box adversary must typically possess the ability to query the model and observe its outputs (e.g., labels). In this work, we demonstrate, for the first time, the ability to enhance such decision-based attacks. To accomplish this, we present an approach that exploits a novel side channel in which the adversary simply measures the execution time of the algorithm used to post-process the predictions of the ML model under attack. The leakage of inference-state elements into algorithmic timing side channels has never been studied before, and we found that it can contain rich information that facilitates timing attacks that significantly outperform attacks based solely on label outputs. In a case study, we investigate leakage from the non-maximum suppression (NMS) algorithm, which plays a crucial role in the operation of object detectors. In our examination of the timing side-channel vulnerabilities associated with this algorithm, we identified the potential to enhance decision-based attacks. We demonstrate attacks against the YOLOv3 detector, leveraging the timing leakage to successfully evade object detection using adversarial examples, and to perform dataset inference. Our experiments show that our adversarial examples exhibit superior perturbation quality compared to a decision-based attack. In addition, we present a new threat model in which dataset inference is performed based solely on timing leakage. To address the timing leakage vulnerability inherent in the NMS algorithm, we explore the potential and limitations of implementing constant-time inference passes as a mitigation strategy.
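To illustrate the mechanism the abstract describes, the sketch below shows why a greedy NMS pass has data-dependent runtime: the number of pairwise IoU comparisons grows with the number of candidate boxes, so wall-clock time leaks information about the detector's internal state. This is a minimal, hypothetical illustration, not the paper's implementation or YOLOv3's actual NMS code; all function names here are invented for the example.

```python
import random
import time

def iou(a, b):
    # Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    # Greedy NMS: repeatedly keep the highest-scoring box and drop
    # remaining boxes that overlap it too much. The loop does
    # O(kept * remaining) IoU comparisons, so its runtime depends on
    # how many candidate boxes the model produced.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < iou_thresh]
    return keep

def timed_nms(boxes, scores):
    # A remote adversary cannot see `boxes`, but can observe this duration.
    t0 = time.perf_counter()
    nms(boxes, scores)
    return time.perf_counter() - t0

def random_boxes(n, seed=0):
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        x, y = rng.uniform(0, 500), rng.uniform(0, 500)
        out.append((x, y, x + 20, y + 20))
    return out

# Many candidates take measurably longer than few: the timing side channel.
t_few = timed_nms(random_boxes(10), [random.random() for _ in range(10)])
t_many = timed_nms(random_boxes(800), [random.random() for _ in range(800)])
```

On typical hardware `t_many` exceeds `t_few` by orders of magnitude, which is the kind of signal a decision-based attacker could combine with label outputs; a constant-time NMS pass, as explored in the paper's mitigation discussion, would aim to remove this dependence.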

Original language: American English
Title of host publication: AISec 2023 - Proceedings of the 16th ACM Workshop on Artificial Intelligence and Security
Pages: 103-114
Number of pages: 12
ISBN (Electronic): 9798400702600
DOIs
State: Published - 30 Nov 2023
Event: 16th ACM Workshop on Artificial Intelligence and Security, AISec 2023, co-located with CCS 2023 - Copenhagen, Denmark
Duration: 30 Nov 2023 → …

Publication series

Name: AISec 2023 - Proceedings of the 16th ACM Workshop on Artificial Intelligence and Security

Conference

Conference: 16th ACM Workshop on Artificial Intelligence and Security, AISec 2023, co-located with CCS 2023
Country/Territory: Denmark
City: Copenhagen
Period: 30/11/23 → …

Keywords

  • adversarial attacks
  • adversarial machine learning
  • NMS algorithm
  • privacy
  • side-channel attacks

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Computer Networks and Communications
  • Software
