Details
| Original language | English |
| --- | --- |
| Title of host publication | CHI '17 |
| Subtitle of host publication | Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems |
| Publisher | Association for Computing Machinery (ACM) |
| Pages | 3729-3740 |
| Number of pages | 12 |
| ISBN (electronic) | 9781450346559 |
| Publication status | Published - 2 May 2017 |
| Event | 2017 ACM SIGCHI Conference on Human Factors in Computing Systems, CHI 2017, Denver, United States. Duration: 6 May 2017 → 11 May 2017 |
Abstract
Current virtual and augmented reality head-mounted displays usually include either no vibration motor or only a single one for haptic feedback, and do not use it for guidance. We present HapticHead, a system utilizing multiple vibrotactile actuators distributed in three concentric ellipses around the head for intuitive haptic guidance through moving tactile cues. We conducted three experiments, which indicate that HapticHead vibrotactile feedback is both faster (2.6 s vs. 6.9 s) and more precise (96.4 % vs. 54.2 % success rate) than spatial audio (generic head-related transfer function) for finding visible virtual objects in 3D space around the user. The visual feedback baseline is, as expected, more precise (99.7 % success rate) and faster (1.3 s) in comparison, but there are many applications in which visual feedback is not desirable or available due to lighting conditions, visual overload, or visual impairments. Mean final precision with HapticHead feedback on invisible targets is 2.3° compared to 0.8° with visual feedback. We successfully navigated blindfolded users to real household items at different heights using HapticHead vibrotactile feedback, independently of a head-mounted display.
Keywords
- 3D output
- Augmented reality
- Guidance
- Haptic feedback
- Navigation
- Spatial interaction
- Vibrotactile
- Virtual reality
ASJC Scopus subject areas
- Computer Science (all)
- Software
- Human-Computer Interaction
- Computer Graphics and Computer-Aided Design
Cite this
- Standard
- Harvard
- APA
- Vancouver
- BibTeX
- RIS
Kaul, O. B., & Rohs, M. (2017). HapticHead. In CHI '17: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (pp. 3729-3740). Association for Computing Machinery (ACM). https://doi.org/10.1145/3025453.3025684
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review
TY - GEN
T1 - HapticHead
T2 - 2017 ACM SIGCHI Conference on Human Factors in Computing Systems, CHI 2017
AU - Kaul, Oliver Beren
AU - Rohs, Michael
N1 - Publisher Copyright: © 2017 ACM.
PY - 2017/5/2
Y1 - 2017/5/2
N2 - Current virtual and augmented reality head-mounted displays usually include either no vibration motor or only a single one for haptic feedback, and do not use it for guidance. We present HapticHead, a system utilizing multiple vibrotactile actuators distributed in three concentric ellipses around the head for intuitive haptic guidance through moving tactile cues. We conducted three experiments, which indicate that HapticHead vibrotactile feedback is both faster (2.6 s vs. 6.9 s) and more precise (96.4 % vs. 54.2 % success rate) than spatial audio (generic head-related transfer function) for finding visible virtual objects in 3D space around the user. The visual feedback baseline is, as expected, more precise (99.7 % success rate) and faster (1.3 s) in comparison, but there are many applications in which visual feedback is not desirable or available due to lighting conditions, visual overload, or visual impairments. Mean final precision with HapticHead feedback on invisible targets is 2.3° compared to 0.8° with visual feedback. We successfully navigated blindfolded users to real household items at different heights using HapticHead vibrotactile feedback, independently of a head-mounted display.
AB - Current virtual and augmented reality head-mounted displays usually include either no vibration motor or only a single one for haptic feedback, and do not use it for guidance. We present HapticHead, a system utilizing multiple vibrotactile actuators distributed in three concentric ellipses around the head for intuitive haptic guidance through moving tactile cues. We conducted three experiments, which indicate that HapticHead vibrotactile feedback is both faster (2.6 s vs. 6.9 s) and more precise (96.4 % vs. 54.2 % success rate) than spatial audio (generic head-related transfer function) for finding visible virtual objects in 3D space around the user. The visual feedback baseline is, as expected, more precise (99.7 % success rate) and faster (1.3 s) in comparison, but there are many applications in which visual feedback is not desirable or available due to lighting conditions, visual overload, or visual impairments. Mean final precision with HapticHead feedback on invisible targets is 2.3° compared to 0.8° with visual feedback. We successfully navigated blindfolded users to real household items at different heights using HapticHead vibrotactile feedback, independently of a head-mounted display.
KW - 3D output
KW - Augmented reality
KW - Guidance
KW - Haptic feedback
KW - Navigation
KW - Spatial interaction
KW - Vibrotactile
KW - Virtual reality
UR - http://www.scopus.com/inward/record.url?scp=85029123271&partnerID=8YFLogxK
U2 - 10.1145/3025453.3025684
DO - 10.1145/3025453.3025684
M3 - Conference contribution
AN - SCOPUS:85029123271
SP - 3729
EP - 3740
BT - CHI '17
PB - Association for Computing Machinery (ACM)
Y2 - 6 May 2017 through 11 May 2017
ER -