Harmful Bias: A General Label-Leakage Attack on Federated Learning from Bias Gradients

Nadav Gat, Mahmood Sharif

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Peer-reviewed

Abstract

Federated learning (FL) enables several users to train machine-learning models jointly without explicitly sharing data with one another. This regime is particularly helpful in cases where keeping the data private and secure is essential (e.g., medical records). However, recent work has shown that FL does not guarantee privacy—in classification tasks, the training-data labels, and even the inputs, may be reconstructed from information users share during training. Using an analytic derivation, our work offers a new label-extraction attack called Label Leakage from Bias Gradients (LLBG). Compared to prior work, ours makes fewer assumptions and applies to a broader range of classical and modern deep learning models, regardless of their non-linear activation functions. Crucially, through experiments with two datasets, nine model architectures, and a wide variety of attack scenarios (e.g., with and without defenses), we found that LLBG outperformed prior attacks in almost all settings explored, pushing the boundaries of label-extraction attacks.
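The intuition behind bias-gradient label attacks can be sketched briefly. This is an illustrative example, not the paper's LLBG algorithm: for a single example trained with softmax and cross-entropy, the last-layer bias gradient is dL/db_i = p_i - y_i, so the only negative entry corresponds to the true label. The function name and logits below are hypothetical.

```python
import numpy as np

def leak_label_from_bias_grad(logits: np.ndarray, true_label: int) -> int:
    """Simulate a client's last-layer bias gradient for one example
    and recover the label from its sign pattern."""
    # Softmax probabilities (shifted for numerical stability).
    p = np.exp(logits - logits.max())
    p /= p.sum()
    # One-hot ground truth.
    y = np.zeros_like(p)
    y[true_label] = 1.0
    # Cross-entropy bias gradient: dL/db_i = p_i - y_i.
    bias_grad = p - y
    # The true class is the only one with a negative gradient entry.
    return int(np.argmin(bias_grad))

# Example: the "attacker" recovers the label from the gradient alone.
logits = np.array([1.2, -0.3, 0.8, 2.1])
assert leak_label_from_bias_grad(logits, true_label=1) == 1
```

Real attacks such as LLBG must handle batched gradients and arbitrary activations, where this simple sign argument no longer applies directly; the paper's analytic derivation addresses those settings.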

Original language: English
Title of host publication: AISec 2024 - Proceedings of the 2024 Workshop on Artificial Intelligence and Security, Co-Located with CCS 2024
Pages: 31-41
Number of pages: 11
ISBN (Electronic): 9798400712289
DOIs
State: Published - 22 Nov 2024
Event: 16th ACM Workshop on Artificial Intelligence and Security, AISec 2024, co-located with CCS 2024 - Salt Lake City, United States
Duration: 14 Oct 2024 – 18 Oct 2024

Publication series

Name: AISec 2024 - Proceedings of the 2024 Workshop on Artificial Intelligence and Security, Co-Located with: CCS 2024

Conference

Conference: 16th ACM Workshop on Artificial Intelligence and Security, AISec 2024, co-located with CCS 2024
Country/Territory: United States
City: Salt Lake City
Period: 14/10/24 – 18/10/24

Keywords

  • Federated Learning
  • Gradient Leakage
  • Privacy Attacks

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Computer Networks and Communications
  • Software
