TY - GEN
T1 - Indistinguishable Predictions and Multi-group Fair Learning
AU - Rothblum, Guy N.
N1 - Publisher Copyright: © 2023, International Association for Cryptologic Research.
PY - 2023
Y1 - 2023
N2 - Prediction algorithms assign numbers to individuals that are popularly understood as individual “probabilities”—what is the probability that an applicant will repay a loan? Automated predictions increasingly form the basis for life-altering decisions, and this raises a host of concerns. Concerns about the fairness of the resulting predictions are particularly alarming: for example, the predictor might perform poorly on a protected minority group. We survey recent developments in formalizing and addressing such concerns. Inspired by the theory of computational indistinguishability, the recently proposed notion of Outcome Indistinguishability (OI) [Dwork et al., STOC 2021] requires that the predicted distribution of outcomes cannot be distinguished from the real-world distribution. Outcome Indistinguishability is a strong requirement for obtaining meaningful predictions. Happily, it can be obtained: techniques from the algorithmic fairness literature [Hebert-Johnson et al., ICML 2018] yield algorithms for learning OI predictors from real-world outcome data. Returning to the motivation of addressing fairness concerns, Outcome Indistinguishability can be used to provide robust and general guarantees for protected demographic groups [Rothblum and Yona, ICML 2021]. This gives algorithms that can learn a single predictor that “performs well” for every group in a given rich collection G of overlapping subgroups. Performance is measured using a loss function, which can be quite general and can itself incorporate fairness concerns.
UR - http://www.scopus.com/inward/record.url?scp=85161458773&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-30545-0_1
DO - 10.1007/978-3-031-30545-0_1
M3 - Conference contribution
SN - 9783031305443
VL - 14004
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 3
EP - 21
BT - Advances in Cryptology – EUROCRYPT 2023 - 42nd Annual International Conference on the Theory and Applications of Cryptographic Techniques, Proceedings
A2 - Hazay, Carmit
A2 - Stam, Martijn
PB - Springer Science and Business Media B.V.
T2 - 42nd Annual International Conference on the Theory and Applications of Cryptographic Techniques, Eurocrypt 2023
Y2 - 23 April 2023 through 27 April 2023
ER -