Abstract
Whereas learning is one of the primary goals of Explainable Artificial Intelligence (XAI), we know little about whether, how, and when explanations enhance users’ learning from feedback provided by Artificial Intelligence (AI). Drawing on Feedback Theory as a fundamental theoretical lens, we formulate a research model wherein explanations enhance informativeness and task performance, contingent on users’ prior knowledge, ultimately leading to a higher learning outcome. This research model is tested in a randomized between-subjects online experiment with 573 participants whose task is to match Google Street View pictures to their city of origin. We find a positive effect of explanations on learning outcome, which is fully mediated by informativeness, for users with less prior knowledge. Furthermore, we find that explanations positively impact users’ task performance, where this effect is direct for more knowledgeable users and fully mediated by informativeness for less knowledgeable users. In focus groups with AI experts and users, we seek to elucidate the mechanisms underlying these effects of explanations on learning from AI feedback. By studying the consequences of explanations as part of AI feedback for users in non-routine inference tasks, we advance the understanding of explanations as facilitators of human learning from AI systems.
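For readers unfamiliar with moderated mediation, the abstract's findings can be read through a standard conditional-process specification; the following is a minimal illustrative sketch (the variable names, linear form, and Hayes-style model are assumptions for exposition, not the paper's reported equations). With explanation condition $X$, informativeness $M$, prior knowledge $W$, and learning outcome (or task performance) $Y$:

$$
M = a_0 + a_1 X + a_2 W + a_3 XW + \varepsilon_M, \qquad
Y = b_0 + c'X + b_1 M + \varepsilon_Y
$$

Under this sketch, the indirect effect of explanations via informativeness at prior-knowledge level $W$ is $(a_1 + a_3 W)\,b_1$; "full mediation" for less knowledgeable users then corresponds to a significant indirect effect at low $W$ together with a non-significant direct effect $c'$.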
| Original language | American English |
| --- | --- |
| Pages (from-to) | 323-345 |
| Number of pages | 23 |
| Journal | European Journal of Information Systems |
| Volume | 34 |
| Issue number | 2 |
| DOIs | |
| State | Published - 1 Jan 2025 |
Keywords
- AI feedback
- Explainable Artificial Intelligence
- XAI
- feedback theory
- informativeness
- learning outcome
All Science Journal Classification (ASJC) codes
- Management Information Systems
- Information Systems
- Library and Information Sciences
- Information Systems and Management