Paradoxes in fair computer-aided decision making

Andrew Morgan, Rafael Pass

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Computer-aided decision making, where a human decision-maker is aided by a computational classifier in making a decision, is becoming increasingly prevalent. For instance, judges in at least nine states make use of algorithmic tools meant to determine "recidivism risk scores" for criminal defendants in sentencing, parole, or bail decisions. A subject of much recent debate is whether such algorithmic tools are "fair" in the sense that they do not discriminate against certain groups (e.g., races) of people. Our main result shows that for "non-trivial" computer-aided decision making, either the classifier must be discriminatory, or a rational decision-maker using the output of the classifier is forced to be discriminatory. We further provide a complete characterization of situations where fair computer-aided decision making is possible.
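The tension the abstract describes can be seen in a small worked example. The Python sketch below is not the paper's formal model; the base rates, error rates, and decision threshold are hypothetical numbers chosen only to illustrate how a classifier with identical error rates across groups, paired with a Bayes-rational decision-maker, yields group-dependent decisions whenever the groups' base rates differ.

```python
# Toy numerical sketch (assumed values, not from the paper): a classifier
# that is "fair" in the sense of having the same true- and false-positive
# rates for both groups, and a decision-maker who acts rationally on the
# Bayesian posterior given the classifier's output.

def posterior(base_rate: float, tpr: float, fpr: float) -> float:
    """P(positive outcome | classifier says 'high risk') via Bayes' rule."""
    return (base_rate * tpr) / (base_rate * tpr + (1 - base_rate) * fpr)

# Identical error rates across groups: the classifier treats both alike.
TPR, FPR = 0.8, 0.3

# Hypothetical base rates of the outcome in each group.
groups = {"A": 0.5, "B": 0.2}

# A rational decision-maker acts when the posterior exceeds this threshold.
THRESHOLD = 0.5

for name, base_rate in groups.items():
    p = posterior(base_rate, TPR, FPR)
    decision = "act" if p > THRESHOLD else "do not act"
    print(f"group {name}: P(outcome | 'high risk') = {p:.3f} -> {decision}")

# Output: group A's posterior is ~0.727 while group B's is 0.400, so the
# rational decision-maker treats two people with the same classifier output
# differently based on group membership. To avoid this, either the
# decision-maker must ignore information (and stop being rational), or the
# classifier's outputs must themselves depend on group.
```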

Original language: English
Title of host publication: AIES 2019 - Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society
Pages: 85-90
Number of pages: 6
ISBN (Electronic): 9781450363242
DOIs
State: Published - 27 Jan 2019
Externally published: Yes
Event: 2nd AAAI/ACM Conference on AI, Ethics, and Society, AIES 2019 - Honolulu, United States
Duration: 27 Jan 2019 - 28 Jan 2019

Publication series

Name: AIES 2019 - Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society

Conference

Conference: 2nd AAAI/ACM Conference on AI, Ethics, and Society, AIES 2019
Country/Territory: United States
City: Honolulu
Period: 27/01/19 - 28/01/19

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence