Verification of Neural Networks’ Local Differential Classification Privacy

Roie Reshef, Anan Kabaha, Olga Seleznova, Dana Drachsler-Cohen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Neural networks are susceptible to privacy attacks, yet to date no verifier can reason about the privacy of individuals participating in the training set. We propose a new privacy property, called local differential classification privacy (LDCP), which extends local robustness to a differential-privacy setting suitable for black-box classifiers. Given a neighborhood of inputs, a classifier is LDCP if it classifies all inputs in the neighborhood the same, regardless of whether it is trained on the full dataset or on the dataset with any single entry omitted. A naive algorithm is highly impractical: it requires training a very large number of networks and verifying local robustness of the given neighborhood separately for each one. We propose Sphynx, an algorithm that, with high probability, computes an abstraction of all these networks from a small set of trained networks, and verifies LDCP directly on the abstract network. The challenge is twofold: network parameters do not adhere to a known probability distribution, making the abstraction difficult to predict, and predicting too large an abstraction undermines verification. Our key idea is to transform the parameters into a distribution given by kernel density estimation (KDE), allowing us to keep the over-approximation error small. To verify LDCP, we extend a MILP verifier to analyze an abstract network. Experimental results show that by training only 7% of the networks, Sphynx predicts an abstract network that obtains 93% verification accuracy and reduces the analysis time by a factor of 1.7·10⁴.
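The KDE-based abstraction idea can be illustrated with a minimal sketch (a hypothetical illustration of the general technique, not Sphynx's actual implementation): for each network parameter, fit a 1-D Gaussian KDE to the values that parameter takes across the sampled trained networks, and take a high-coverage quantile interval of the estimated density as that parameter's abstract (interval) value. The function name and coverage level below are assumptions for illustration.

```python
import numpy as np

def kde_interval(samples, coverage=0.99, grid_size=1000):
    """Interval covering `coverage` probability mass of a 1-D Gaussian KDE
    fitted to `samples`, using Silverman's rule-of-thumb bandwidth."""
    samples = np.asarray(samples, dtype=float)
    n = samples.size
    h = 1.06 * samples.std() * n ** (-1 / 5)  # Silverman's bandwidth
    pad = 4 * h
    grid = np.linspace(samples.min() - pad, samples.max() + pad, grid_size)
    # Density on the grid: sum of Gaussians centered at each sample.
    dens = np.exp(-0.5 * ((grid[:, None] - samples[None, :]) / h) ** 2).sum(axis=1)
    cdf = np.cumsum(dens)
    cdf /= cdf[-1]
    tail = (1 - coverage) / 2
    lo = grid[np.searchsorted(cdf, tail)]
    hi = grid[np.searchsorted(cdf, 1 - tail)]
    return lo, hi

# Example: values of a single weight across 50 sampled trained networks
# (synthetic data for illustration).
rng = np.random.default_rng(0)
w = rng.normal(0.3, 0.05, size=50)
lo, hi = kde_interval(w)
```

Applying this per parameter yields an interval network; verifying the property on that abstract network (e.g. with an interval-aware MILP encoding) then soundly covers every concrete network whose parameters fall inside the predicted intervals.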

Original language: English
Title of host publication: Verification, Model Checking, and Abstract Interpretation - 25th International Conference, VMCAI 2024, Proceedings
Editors: Rayna Dimitrova, Ori Lahav, Sebastian Wolff
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 98-123
Number of pages: 26
ISBN (Print): 9783031505201
DOIs
State: Published - 2024
Event: 25th International Conference on Verification, Model Checking, and Abstract Interpretation, VMCAI 2024 (co-located with the 51st ACM SIGPLAN Symposium on Principles of Programming Languages, POPL 2024) - London, United Kingdom
Duration: 15 Jan 2024 – 16 Jan 2024

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 14500 LNCS

Conference

Conference: 25th International Conference on Verification, Model Checking, and Abstract Interpretation, VMCAI 2024 (co-located with the 51st ACM SIGPLAN Symposium on Principles of Programming Languages, POPL 2024)
Country/Territory: United Kingdom
City: London
Period: 15/01/24 – 16/01/24

All Science Journal Classification (ASJC) codes

  • Theoretical Computer Science
  • General Computer Science
