Lost in Translation: The Limits of Explainability in AI

Hofit Wasserman-Rozen, Ran Gilad-Bachrach, Niva Elkin-Koren

Research output: Contribution to journal › Article › Peer-review

Abstract

As artificial intelligence becomes more prevalent, regulators are increasingly turning to legal measures, such as a “right to explanation,” to protect against potential risks raised by AI systems. However, are eXplainable AI (XAI) tools, the artificial intelligence tools that provide such explanations, up to the task?

This paper critically examines XAI’s potential to facilitate the right to explanation by applying the prism of explanation’s role in law to different stakeholders. Inspecting the underlying functions of reason-giving reveals different objectives for each of the stakeholders involved. From the perspective of a decision-subject, reason-giving facilitates due process and acknowledges human agency. From a decision-maker’s perspective, reason-giving contributes to improving the quality of the decisions themselves. From an ecosystem perspective, reason-giving may strengthen the authority of the decision-making system toward different stakeholders by promoting accountability and legitimacy, and by providing better guidance. Applying this analytical framework to XAI-generated explanations reveals that XAI fails to fulfill the underlying objectives of the right to explanation from the perspective of both the decision-subject and the decision-maker. In contrast, XAI is found to be extremely well-suited to fulfill the underlying functions of reason-giving from an ecosystem’s perspective, namely, strengthening the authority of the decision-making system. However, lacking all other virtues, this isolated ability may be misused or abused, ultimately harming XAI’s intended human audience. The disparity between human decision-making and automated decisions makes XAI an insufficient and even risky tool, rather than a guardian of human rights. After conducting a rigorous analysis of these ramifications, this paper concludes by urging regulators and the XAI community to reconsider the pursuit of explainability and the right to explanation for AI systems.
Original language: English
Pages (from-to): 391-438
Number of pages: 48
Journal: Cardozo Arts & Entertainment Law Journal
Volume: 42
Issue number: 2
State: Published - 2024

Keywords

  • AI
  • Explainability
  • Law and Technology
  • Legal Decision-Making
  • XAI
