Planting Undetectable Backdoors in Machine Learning Models: [Extended Abstract]

Shafi Goldwasser, Michael P. Kim, Vinod Vaikuntanathan, Or Zamir

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Given the computational cost and technical expertise required to train machine learning models, users may delegate the task of learning to a service provider. Delegation of learning has clear benefits, but at the same time it raises serious concerns of trust. This work studies possible abuses of power by untrusted learners.

We show how a malicious learner can plant an undetectable backdoor into a classifier. On the surface, such a backdoored classifier behaves normally, but in reality, the learner maintains a mechanism for changing the classification of any input, with only a slight perturbation. Importantly, without the appropriate 'backdoor key,' the mechanism is hidden and cannot be detected by any computationally-bounded observer. We demonstrate two frameworks for planting undetectable backdoors, with incomparable guarantees.

First, we show how to plant a backdoor in any model, using digital signature schemes. The construction guarantees that given query access to the original model and the backdoored version, it is computationally infeasible to find even a single input where they differ. This property implies that the backdoored model has generalization error comparable with the original model. Moreover, even if the distinguisher can request backdoored inputs of its choice, it cannot backdoor a new input - a property we call non-replicability.

Second, we demonstrate how to insert undetectable backdoors in models trained using the Random Fourier Features (RFF) learning paradigm (Rahimi, Recht; NeurIPS 2007). In this construction, undetectability holds against powerful white-box distinguishers: given a complete description of the network and the training data, no efficient distinguisher can guess whether the model is 'clean' or contains a backdoor. The backdooring algorithm executes the RFF algorithm faithfully on the given training data, tampering only with its random coins. We prove this strong guarantee under the hardness of the Continuous Learning With Errors problem (Bruna, Regev, Song, Tang; STOC 2021). We show a similar white-box undetectable backdoor for random ReLU networks based on the hardness of Sparse PCA (Berthet, Rigollet; COLT 2013).

Our construction of undetectable backdoors also sheds light on the related issue of robustness to adversarial examples. In particular, by constructing an undetectable backdoor for an 'adversarially-robust' learning algorithm, we can produce a classifier that is indistinguishable from a robust classifier, but where every input has an adversarial example! In this way, the existence of undetectable backdoors represents a significant theoretical roadblock to certifying adversarial robustness.
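The signature-based construction can be illustrated with a short sketch. The Python code below is a minimal, hypothetical illustration rather than the paper's construction: names such as original_model, target_label, and the payload/signature split are assumptions, and the signature is simply appended to the input instead of being hidden in a slight perturbation of the features. Its only purpose is to show why finding an input on which the backdoored and original models disagree reduces to forging a signature.

```python
# Minimal sketch of the signature-based backdoor idea from the abstract.
# NOT the paper's construction: `original_model`, `target_label`, and the
# payload/signature encoding are illustrative assumptions, and the signature
# is appended to the input rather than hidden in a slight perturbation of
# the features. Requires the `cryptography` package (Ed25519 signatures).

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

SIG_LEN = 64  # Ed25519 signatures are 64 bytes

# The malicious learner generates the key pair at training time and keeps
# the signing key as the 'backdoor key'; the verification key is planted
# inside the delivered model.
_signing_key = Ed25519PrivateKey.generate()
_verify_key = _signing_key.public_key()


def _split(x: bytes):
    """Interpret an input as (payload, candidate signature)."""
    return x[:-SIG_LEN], x[-SIG_LEN:]


def backdoored_classify(x: bytes, original_model, target_label):
    """Agrees with `original_model` on every input except those carrying a
    valid signature under the planted key, where it returns `target_label`.
    Finding any disagreeing input without the signing key would amount to
    forging an Ed25519 signature."""
    payload, sig = _split(x)
    try:
        _verify_key.verify(sig, payload)
        return target_label              # backdoor activated
    except InvalidSignature:
        return original_model(x)         # normal behaviour on all other inputs


def activate_backdoor(x: bytes) -> bytes:
    """Holder of the backdoor key perturbs an input so the backdoored model
    misclassifies it; without the key this cannot be replicated."""
    payload, _ = _split(x)
    return payload + _signing_key.sign(payload)
```

In the paper, the signature bits are embedded as a small perturbation of the input features (so a backdoored input stays close to the original) and the verification is compiled into the model itself rather than wrapped around it; the sketch above only captures the reduction to signature unforgeability.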

Original language: English
Title of host publication: Proceedings - 2022 IEEE 63rd Annual Symposium on Foundations of Computer Science, FOCS 2022
Publisher: IEEE Computer Society
Pages: 931-942
Number of pages: 12
ISBN (Electronic): 9781665455190
DOIs
State: Published - 2022
Externally published: Yes
Event: 63rd IEEE Annual Symposium on Foundations of Computer Science, FOCS 2022 - Denver, United States
Duration: 31 Oct 2022 - 3 Nov 2022

Publication series

Name: Proceedings - Annual IEEE Symposium on Foundations of Computer Science, FOCS
Volume: 2022-October

Conference

Conference: 63rd IEEE Annual Symposium on Foundations of Computer Science, FOCS 2022
Country/Territory: United States
City: Denver
Period: 31/10/22 - 3/11/22

Keywords

  • cryptography
  • machine learning

All Science Journal Classification (ASJC) codes

  • General Computer Science

