Abstract
As most computerized information is visual, it is not directly accessible to blind and visually impaired users. This challenge is particularly acute in graphical virtual environments, which is unfortunate because such environments hold great potential for the blind community, for uses such as social interaction, online education and, above all, safe mobility training from the safety and comfort of the user's home. Although several previous attempts have increased the accessibility of these environments, current tools are still far from making them properly accessible. We suggest the use of Sensory Substitution Devices (SSDs) as a further step towards increasing the accessibility of such environments, offering the user more raw "visual" information about the scene via other senses. Specifically, we explore here the use of a minimal SSD based upon the EyeCane, which conveys the depth-distance of a single point in the scene, for tasks such as virtual shape recognition and virtual navigation. We show that our users succeeded at these tasks and learned the transformation quickly, demonstrating the potential of this approach, and we end with a call for its addition to accessibility toolboxes.
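The core of the minimal SSD described above is a transformation from the depth of a single sampled point to a non-visual cue. As a rough illustration only — the function name, the cue encoding, and the specific distance-to-rate constants below are our own assumptions, not the paper's implementation — an EyeCane-style mapping might turn nearer virtual obstacles into faster beeps:

```python
def depth_to_beep_rate(distance_m, max_range_m=5.0, max_rate_hz=20.0):
    """Map the depth of a single sampled point to a beep rate.

    Nearer obstacles -> faster beeping, in the spirit of EyeCane-style
    feedback. The working range and rate constants are illustrative
    assumptions, not values from the paper.
    """
    # Clamp the reading to the device's assumed working range.
    d = min(max(distance_m, 0.0), max_range_m)
    # Linear inverse mapping: 0 m -> max_rate_hz, max_range_m -> silence.
    return max_rate_hz * (1.0 - d / max_range_m)


# In a virtual environment the scene only needs to expose one number per
# sample: the distance along the user's pointing direction to the
# nearest virtual surface.
for distance in (0.5, 2.5, 5.0):
    print(f"{distance:.1f} m -> {depth_to_beep_rate(distance):.1f} beeps/s")
```

The appeal of such a single-point transformation is its minimalism: the user receives one easily interpreted cue at a time, which is consistent with the fast learning reported in the abstract.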
| Original language | American English |
| --- | --- |
| Pages (from-to) | 398-406 |
| Number of pages | 9 |
| Journal | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
| Volume | 8513 LNCS |
| Issue number | PART 1 |
| DOIs | |
| State | Published - 1 Jan 2014 |
| Externally published | Yes |
| Event | 8th International Conference on Universal Access in Human-Computer Interaction, UAHCI 2014 - Held as Part of 16th International Conference on Human-Computer Interaction, HCI International 2014 - Heraklion, Greece, 22 Jun 2014 → 27 Jun 2014 |
All Science Journal Classification (ASJC) codes
- Theoretical Computer Science
- General Computer Science