TY - GEN
T1 - Harmful Bias: A General Label-Leakage Attack on Federated Learning from Bias Gradients
T2 - 16th ACM Workshop on Artificial Intelligence and Security, AISec 2024, co-located with CCS 2024
AU - Gat, Nadav
AU - Sharif, Mahmood
N1 - Publisher Copyright: © 2024 Copyright held by the owner/author(s).
PY - 2024/11/22
Y1 - 2024/11/22
N2 - Federated learning (FL) enables several users to train machine-learning models jointly without explicitly sharing data with one another. This regime is particularly helpful in cases where keeping the data private and secure is essential (e.g., medical records). However, recent work has shown that FL does not guarantee privacy—in classification tasks, the training-data labels, and even the inputs, may be reconstructed from information users share during training. Using an analytic derivation, our work offers a new label-extraction attack called Label Leakage from Bias Gradients (LLBG). Compared to prior work, ours makes fewer assumptions and applies to a broader range of classical and modern deep learning models, regardless of their non-linear activation functions. Crucially, through experiments with two datasets, nine model architectures, and a wide variety of attack scenarios (e.g., with and without defenses), we found that LLBG outperformed prior attacks in almost all settings explored, pushing the boundaries of label-extraction attacks.
AB - Federated learning (FL) enables several users to train machine-learning models jointly without explicitly sharing data with one another. This regime is particularly helpful in cases where keeping the data private and secure is essential (e.g., medical records). However, recent work has shown that FL does not guarantee privacy—in classification tasks, the training-data labels, and even the inputs, may be reconstructed from information users share during training. Using an analytic derivation, our work offers a new label-extraction attack called Label Leakage from Bias Gradients (LLBG). Compared to prior work, ours makes fewer assumptions and applies to a broader range of classical and modern deep learning models, regardless of their non-linear activation functions. Crucially, through experiments with two datasets, nine model architectures, and a wide variety of attack scenarios (e.g., with and without defenses), we found that LLBG outperformed prior attacks in almost all settings explored, pushing the boundaries of label-extraction attacks.
KW - Federated Learning
KW - Gradient Leakage
KW - Privacy Attacks
UR - http://www.scopus.com/inward/record.url?scp=85216560152&partnerID=8YFLogxK
U2 - 10.1145/3689932.3694768
DO - 10.1145/3689932.3694768
M3 - Conference contribution
T3 - AISec 2024 - Proceedings of the 2024 Workshop on Artificial Intelligence and Security, Co-Located with: CCS 2024
SP - 31
EP - 41
BT - AISec 2024 - Proceedings of the 2024 Workshop on Artificial Intelligence and Security, Co-Located with: CCS 2024
Y2 - 14 October 2024 through 18 October 2024
ER -