Abstract
Providing educators with understandable, actionable, and trustworthy insights drawn from large-scale, heterogeneous learning data is of paramount importance in achieving the full potential of artificial intelligence (AI) in educational settings. Explainable AI (XAI), in contrast to the traditional "black-box" approach, helps fulfill this goal. We present a case study of building prediction models for undergraduate students' learning achievement in a Computer Science course, where the development process involves the course instructor as a co-designer and uses XAI technologies to explain the underlying reasoning of several machine learning predictions. The explanations enhance the transparency of the predictions and open the door for educators to share their judgments and insights; they further enable us to refine the predictions by incorporating the educators' contextual knowledge of the course and of the students. Through this human-AI collaboration process, we demonstrate how to achieve a more accountable understanding of students' learning and move towards transparent and trustworthy prediction of student learning achievement by keeping instructors in the loop. Our study highlights that trustworthy AI in education should emphasize not only the interpretability of the predicted outcomes and the prediction process, but also the involvement of subject-matter experts throughout the development of prediction models.
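The abstract does not disclose the modeling details. As a minimal sketch of the kind of pipeline it describes (a tabular achievement predictor paired with per-prediction explanations an instructor can inspect), the following assumes scikit-learn and the SHAP library, with invented feature names and synthetic data; it is not the authors' implementation.

```python
# Illustrative sketch only: a gradient-boosted classifier on hypothetical
# course-activity features, explained per student with SHAP values.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical features an instructor might recognize from the course.
feature_names = ["quiz_avg", "lab_submissions", "forum_posts", "attendance"]
X = rng.random((300, 4))
# Synthetic label: achievement driven mostly by quizzes and attendance.
y = (X[:, 0] + 0.5 * X[:, 3] + 0.1 * rng.standard_normal(300) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Per-student explanation: how each feature pushed this prediction up or down.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
for name, value in zip(feature_names, shap_values[0]):
    print(f"{name}: {value:+.3f}")  # signed contribution for one student
```

Such feature-level attributions are the kind of transparent output an instructor could contest or confirm with contextual knowledge, which is the human-in-the-loop refinement the abstract describes.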
Original language | English
---|---
Pages (from-to) | 3075-3096
Number of pages | 22
Journal | Education and Information Technologies
Volume | 29
Issue number | 3
DOIs |
State | Published - Feb 2024
Keywords
- Co-design
- Explainable AI (XAI)
- Human-centered AI
- Learning analytics
- Student learning achievement prediction
- Transparent and trustworthy AI
All Science Journal Classification (ASJC) codes
- Education
- Library and Information Sciences