TY - GEN
T1 - FEPC: Fairness Estimation Using Prototypes and Critics for Tabular Data
T2 - 26th International Conference on Pattern Recognition, ICPR 2022
AU - Giloni, Amit
AU - Grolman, Edita
AU - Elovici, Yuval
AU - Shabtai, Asaf
N1 - Publisher Copyright: © 2022 IEEE.
PY - 2022/1/1
Y1 - 2022/1/1
N2 - A machine learning (ML) fairness estimator, which is used to assess an ML model's fairness, should satisfy several conditions when used in real-life settings. Specifically, it should: i) support a comprehensive fairness evaluation that explores all ethical aspects; ii) be flexible and support different ML model settings; iii) enable comparison between different evaluations and ML models; and iv) provide reasoning and explanations for the fairness assessments produced. Existing methods do not sufficiently satisfy all of the above conditions. In this paper, we present FEPC (Fairness Estimation using Prototypes and Critics for tabular data), a novel method for fairness assessment that provides explanations and reasoning for its assessments by using an adversarial attack and customized fairness measurement. Given an ML model and data records, FEPC performs a comprehensive fairness evaluation and produces a fairness assessment for each examined feature. FEPC was evaluated using two benchmark datasets (ProPublica COMPAS and Statlog datasets) and a synthetic dataset containing two features, one of which is biased and one of which is fair, and compared to existing fairness assessment methods. The evaluation demonstrates that FEPC satisfies all of the conditions, making it suitable for real-life settings, and outperforms existing methods.
UR - http://www.scopus.com/inward/record.url?scp=85143636957&partnerID=8YFLogxK
U2 - 10.1109/ICPR56361.2022.9956582
DO - 10.1109/ICPR56361.2022.9956582
M3 - Conference contribution
T3 - Proceedings - International Conference on Pattern Recognition
SP - 4877
EP - 4884
BT - 2022 26th International Conference on Pattern Recognition, ICPR 2022
Y2 - 21 August 2022 through 25 August 2022
ER -