Abstract
Transparency is an important aspect of human–robot interaction (HRI), as it can improve system trust and usability, leading to improved communication and performance. However, most transparency models focus only on the amount of information given to users. In this article, we propose a bidirectional transparency model, termed the transparency-based action (TBA) model, which allows the robot to take actions based on transparency information received from the human (robot-of-human and human-to-robot), in addition to providing transparency information to the human (robot-to-human). To examine the impact of a three-level (High, Medium, and Low) TBA model on acceptance and HRI, we first implemented the model on a robotic system trainer in two pilot studies (with students as participants). Based on the results of these studies, the Medium TBA level was not included in the subsequent main experiment, which was conducted with older adults (aged 75–85). In that experiment, two TBA levels were compared: Low (basic information, including only robot-to-human transparency) and High (additional information on predicted outcomes, including robot-of-human and human-to-robot transparency). The results revealed a statistically significant difference between the two TBA levels in terms of perceived usefulness, ease of use, and attitude. The High TBA level was preferred by users and yielded improved user acceptance.
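The bidirectional structure described above can be sketched as a minimal data structure: each TBA level bundles a set of transparency directions. This is an illustrative sketch only; the names (`Direction`, `TBALevel`) are assumptions for exposition, not identifiers from the article.

```python
from dataclasses import dataclass
from enum import Enum


class Direction(Enum):
    """The three transparency directions named in the abstract."""
    ROBOT_TO_HUMAN = "robot-to-human"  # robot informs the human about its own state/actions
    ROBOT_OF_HUMAN = "robot-of-human"  # robot conveys its model of the human
    HUMAN_TO_ROBOT = "human-to-robot"  # human supplies information the robot acts on


@dataclass(frozen=True)
class TBALevel:
    """A TBA level as a named set of transparency directions (illustrative)."""
    name: str
    directions: frozenset


# Low: basic information, robot-to-human only.
LOW = TBALevel("Low", frozenset({Direction.ROBOT_TO_HUMAN}))

# High: all three directions (plus predicted-outcome information in the study).
HIGH = TBALevel("High", frozenset(Direction))
```

Modeling the levels as direction sets makes the comparison in the experiment explicit: Low is a proper subset of High (`LOW.directions < HIGH.directions`).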
| Original language | American English |
|---|---|
| Article number | 15 |
| Journal | ACM Transactions on Human-Robot Interaction |
| Volume | 14 |
| Issue number | 1 |
| DOIs | |
| State | Published - 19 Dec 2024 |
Keywords
- HRI
- Older adults
- Robotic trainer system
- Transparency
- User acceptance
All Science Journal Classification (ASJC) codes
- Human-Computer Interaction
- Artificial Intelligence