Abstract
Human-AI collaboration has become common, integrating highly complex AI systems into the workplace. Still, it is often ineffective: impaired perceptions, such as low trust or limited understanding, reduce compliance with the recommendations provided by the AI system. Drawing on cognitive load theory, we examine two techniques of human-AI collaboration as potential remedies. In three experimental studies, we grant users decision control by empowering them to adjust the system's recommendations, and we offer explanations of the system's reasoning. We find that decision control positively affects user perceptions of trust and understanding and improves user compliance with system recommendations. We then isolate distinct effects of providing explanations that may help explain inconsistent findings in recent literature: while explanations help users reenact the system's reasoning, they also increase task complexity. Furthermore, the effectiveness of providing an explanation depends on the individual user's cognitive ability to handle complex tasks. In summary, our study shows that users benefit from enhanced decision control, while explanations, unless appropriately designed for the specific user, may even harm user perceptions and compliance. This work bears both theoretical and practical implications for the management of human-AI collaboration.
| Original language | American English |
| --- | --- |
| Article number | 107714 |
| Journal | Computers in Human Behavior |
| Volume | 144 |
| DOIs | |
| State | Published - 1 Jul 2023 |
Keywords
- Decision control
- Explanations
- Human-AI collaboration
- Task complexity
- User compliance
- User trust
All Science Journal Classification (ASJC) codes
- Arts and Humanities (miscellaneous)
- Human-Computer Interaction
- General Psychology