Protecting the Protected Group: Circumventing Harmful Fairness

Omer Ben-Porat, Fedor Sandomirskiy, Moshe Tennenholtz

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

The recent literature on fair machine learning demonstrates that the choice of fairness constraints must be driven by the utilities of the population. However, virtually all previous work makes the unrealistic assumption that the exact underlying utilities of the population (representing the private tastes of individuals) are known to the regulator who imposes the fairness constraint. In this paper, we initiate the discussion of mismatch: the unavoidable difference between the underlying utilities of the population and the utilities assumed by the regulator. We demonstrate that this mismatch can leave the disadvantaged protected group worse off after the fairness constraint is imposed, and we provide tools for designing fairness constraints that help the disadvantaged group despite the mismatch.
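
To make the claim concrete, here is a minimal numerical sketch, assuming a stylized setting of my own (two hypothetical policies, a profit-maximizing firm, and a regulator who bounds the utility gap between groups); it is not the paper's formal model, and all names and numbers are illustrative. The regulator evaluates fairness with assumed utilities for the protected group P that differ from P's true utilities, and the constraint ends up lowering P's true utility:

    # Hypothetical illustration of the utility mismatch (not the paper's model).
    # Each policy records the regulator's *assumed* utility for the protected
    # group P, P's *true* utility, the other group's utility Q, and firm profit.
    policies = {
        "A": {"assumed_P": 0.3, "true_P": 0.7, "Q": 0.8, "profit": 1.0},
        "B": {"assumed_P": 0.6, "true_P": 0.4, "Q": 0.6, "profit": 0.8},
    }

    def looks_fair(p, tol=0.1):
        # Fairness as the regulator sees it: a small gap between the groups'
        # utilities, computed with the assumed (mismatched) utility of P.
        return abs(p["assumed_P"] - p["Q"]) <= tol

    def most_profitable(names):
        # The firm maximizes its own profit over the admissible policies.
        return max(names, key=lambda n: policies[n]["profit"])

    unconstrained = most_profitable(policies)  # -> "A"
    constrained = most_profitable(
        [n for n in policies if looks_fair(policies[n])]
    )  # -> "B"

    print("no constraint:  ", unconstrained,
          "-> true utility of P:", policies[unconstrained]["true_P"])  # 0.7
    print("with constraint:", constrained,
          "-> true utility of P:", policies[constrained]["true_P"])    # 0.4

In this toy setting the constraint rules out policy A (assumed utility gap 0.5) and steers the firm to B, which looks fair under the assumed utilities but cuts P's true utility from 0.7 to 0.4.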

Original language: English
Title of host publication: 35th AAAI Conference on Artificial Intelligence, AAAI 2021
Pages: 5176-5184
Number of pages: 9
ISBN (Electronic): 9781713835974
State: Published - 2021
Event: 35th AAAI Conference on Artificial Intelligence, AAAI 2021 - Virtual, Online
Duration: 2 Feb 2021 - 9 Feb 2021

Publication series

Name: 35th AAAI Conference on Artificial Intelligence, AAAI 2021
Volume: 6A

Conference

Conference: 35th AAAI Conference on Artificial Intelligence, AAAI 2021
City: Virtual, Online
Period: 2/02/21 - 9/02/21

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
