Details
| Original language | English |
| --- | --- |
| Title of host publication | CHI '17 |
| Subtitle | Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems |
| Publisher | Association for Computing Machinery (ACM) |
| Pages | 3729-3740 |
| Number of pages | 12 |
| ISBN (electronic) | 9781450346559 |
| Publication status | Published - 2 May 2017 |
| Event | 2017 ACM SIGCHI Conference on Human Factors in Computing Systems, CHI 2017 - Denver, United States. Duration: 6 May 2017 → 11 May 2017 |
Abstract
Current virtual and augmented reality head-mounted displays usually include no or only a single vibration motor for haptic feedback and do not use it for guidance. We present HapticHead, a system utilizing multiple vibrotactile actuators distributed in three concentric ellipses around the head for intuitive haptic guidance through moving tactile cues. We conducted three experiments, which indicate that HapticHead vibrotactile feedback is both faster (2.6 s vs. 6.9 s) and more precise (96.4 % vs. 54.2 % success rate) than spatial audio (generic head-related transfer function) for finding visible virtual objects in 3D space around the user. The baseline of visual feedback is, as expected, more precise (99.7 % success rate) and faster (1.3 s) in comparison, but there are many applications in which visual feedback is not desirable or available due to lighting conditions, visual overload, or visual impairments. Mean final precision with HapticHead feedback on invisible targets is 2.3° compared to 0.8° with visual feedback. We successfully navigated blindfolded users to real household items at different heights using HapticHead vibrotactile feedback independently of a head-mounted display.
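To make the guidance idea from the abstract concrete, here is a minimal, purely illustrative Python sketch; it is not the paper's published algorithm. It assumes each actuator is modeled as a unit direction vector in head coordinates and that vibration intensity falls off with the angle between actuator and target direction. The placeholder layout, the `guidance_intensities` helper, and the `sharpness` falloff parameter are all assumptions made for this sketch.

```python
import numpy as np

# Hypothetical actuator directions (unit vectors in head coordinates).
# The real device distributes actuators on three concentric ellipses
# around the head; this placeholder layout only illustrates the idea.
actuator_dirs = np.array([
    [1.0, 0.0, 0.0], [0.0, 0.0, 1.0], [-1.0, 0.0, 0.0], [0.0, 0.0, -1.0],
    [0.7, 0.3, 0.7], [-0.7, 0.3, 0.7], [0.7, 0.3, -0.7], [-0.7, 0.3, -0.7],
])
actuator_dirs /= np.linalg.norm(actuator_dirs, axis=1, keepdims=True)

def guidance_intensities(target_dir, sharpness=8.0):
    """Map a target direction to per-actuator vibration intensities in [0, 1]."""
    target = np.asarray(target_dir, dtype=float)
    target /= np.linalg.norm(target)
    alignment = actuator_dirs @ target                    # cosine similarity per actuator
    weights = np.clip(alignment, 0.0, 1.0) ** sharpness  # favor well-aligned actuators
    peak = weights.max()
    return weights / peak if peak > 0 else weights

# Example: a target up and to the right mostly drives the actuators on that side.
print(np.round(guidance_intensities([0.7, 0.3, 0.0]), 2))
```

In a real system these intensities would be re-evaluated continuously as the head moves, so the strongest cue keeps "pulling" the user toward the target, which is one way to realize the moving tactile cues the abstract describes.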
ASJC Scopus subject areas
- Computer Science (all)
- Software
- Computer Science (all)
- Human-Computer Interaction
- Computer Science (all)
- Computer Graphics and Computer-Aided Design
Cite this
- Standard
- Harvard
- APA
- Vancouver
- BibTeX
- RIS
Kaul, O. B., & Rohs, M. (2017). HapticHead. In CHI '17: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (pp. 3729-3740). Association for Computing Machinery (ACM). https://doi.org/10.1145/3025453.3025684
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Research › Peer-reviewed
TY - GEN
T1 - HapticHead
T2 - 2017 ACM SIGCHI Conference on Human Factors in Computing Systems, CHI 2017
AU - Kaul, Oliver Beren
AU - Rohs, Michael
N1 - Publisher Copyright: © 2017 ACM. Copyright 2018 Elsevier B.V., all rights reserved.
PY - 2017/5/2
Y1 - 2017/5/2
N2 - Current virtual and augmented reality head-mounted displays usually include no or only a single vibration motor for haptic feedback and do not use it for guidance. We present HapticHead, a system utilizing multiple vibrotactile actuators distributed in three concentric ellipses around the head for intuitive haptic guidance through moving tactile cues. We conducted three experiments, which indicate that HapticHead vibrotactile feedback is both faster (2.6 s vs. 6.9 s) and more precise (96.4 % vs. 54.2 % success rate) than spatial audio (generic head-related transfer function) for finding visible virtual objects in 3D space around the user. The baseline of visual feedback is, as expected, more precise (99.7 % success rate) and faster (1.3 s) in comparison, but there are many applications in which visual feedback is not desirable or available due to lighting conditions, visual overload, or visual impairments. Mean final precision with HapticHead feedback on invisible targets is 2.3° compared to 0.8° with visual feedback. We successfully navigated blindfolded users to real household items at different heights using HapticHead vibrotactile feedback independently of a head-mounted display.
AB - Current virtual and augmented reality head-mounted displays usually include no or only a single vibration motor for haptic feedback and do not use it for guidance. We present HapticHead, a system utilizing multiple vibrotactile actuators distributed in three concentric ellipses around the head for intuitive haptic guidance through moving tactile cues. We conducted three experiments, which indicate that HapticHead vibrotactile feedback is both faster (2.6 s vs. 6.9 s) and more precise (96.4 % vs. 54.2 % success rate) than spatial audio (generic head-related transfer function) for finding visible virtual objects in 3D space around the user. The baseline of visual feedback is, as expected, more precise (99.7 % success rate) and faster (1.3 s) in comparison, but there are many applications in which visual feedback is not desirable or available due to lighting conditions, visual overload, or visual impairments. Mean final precision with HapticHead feedback on invisible targets is 2.3° compared to 0.8° with visual feedback. We successfully navigated blindfolded users to real household items at different heights using HapticHead vibrotactile feedback independently of a head-mounted display.
KW - 3D output
KW - Augmented reality
KW - Guidance
KW - Haptic feedback
KW - Navigation
KW - Spatial interaction
KW - Vibrotactile
KW - Virtual reality
UR - http://www.scopus.com/inward/record.url?scp=85029123271&partnerID=8YFLogxK
U2 - 10.1145/3025453.3025684
DO - 10.1145/3025453.3025684
M3 - Conference contribution
AN - SCOPUS:85029123271
SP - 3729
EP - 3740
BT - CHI '17
PB - Association for Computing Machinery (ACM)
Y2 - 6 May 2017 through 11 May 2017
ER -