HapticHead: A Spherical Vibrotactile Grid around the Head for 3D Guidance in Virtual and Augmented Reality

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Authors

  • Oliver Beren Kaul
  • Michael Rohs

Details

Original language: English
Title of host publication: CHI '17
Subtitle of host publication: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems
Publisher: Association for Computing Machinery (ACM)
Pages: 3729-3740
Number of pages: 12
ISBN (electronic): 9781450346559
Publication status: Published - 2 May 2017
Event: 2017 ACM SIGCHI Conference on Human Factors in Computing Systems, CHI 2017 - Denver, United States
Duration: 6 May 2017 - 11 May 2017

Abstract

Current virtual and augmented reality head-mounted displays usually include no or only a single vibration motor for haptic feedback and do not use it for guidance. We present HapticHead, a system utilizing multiple vibrotactile actuators distributed in three concentric ellipses around the head for intuitive haptic guidance through moving tactile cues. We conducted three experiments, which indicate that HapticHead vibrotactile feedback is both faster (2.6 s vs. 6.9 s) and more precise (96.4 % vs. 54.2 % success rate) than spatial audio (generic head-related transfer function) for finding visible virtual objects in 3D space around the user. The baseline of visual feedback is - as expected - more precise (99.7 % success rate) and faster (1.3 s) in comparison, but there are many applications in which visual feedback is not desirable or available due to lighting conditions, visual overload, or visual impairments. Mean final precision with HapticHead feedback on invisible targets is 2.3° compared to 0.8° with visual feedback. We successfully navigated blindfolded users to real household items at different heights using HapticHead vibrotactile feedback independently of a head-mounted display.
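The core guidance idea in the abstract (actuators arranged in rings around the head, driven so that vibration indicates a 3D target direction) can be sketched roughly as follows. This is a hypothetical illustration, not the paper's actual algorithm: the ring elevations, actuator counts, and linear intensity falloff are all assumptions made for the sketch.

```python
import math

# Assumed actuator layout: three rings around the head, approximated here as
# circles at different elevations on a unit sphere centered on the head.
def ring(elevation_deg, count):
    el = math.radians(elevation_deg)
    return [
        (math.cos(el) * math.cos(2 * math.pi * i / count),
         math.cos(el) * math.sin(2 * math.pi * i / count),
         math.sin(el))
        for i in range(count)
    ]

# 8 + 6 + 4 = 18 actuators (counts are illustrative, not from the paper).
ACTUATORS = ring(-20, 8) + ring(10, 6) + ring(40, 4)

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def intensities(target_dir, falloff_deg=45.0):
    """Map a 3D target direction to per-actuator intensities in [0, 1].

    Actuators within `falloff_deg` of the target direction vibrate, with
    intensity decreasing linearly with angular distance; the rest stay off.
    """
    t = normalize(target_dir)
    out = []
    for a in ACTUATORS:
        dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, t))))
        angle = math.degrees(math.acos(dot))
        out.append(max(0.0, 1.0 - angle / falloff_deg))
    return out
```

Re-evaluating this mapping as the head moves makes the active region appear to travel across the scalp toward the target, which is one plausible way to realize the "moving tactile cues" the abstract mentions.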

Keywords

    3D output, Augmented reality, Guidance, Haptic feedback, Navigation, Spatial interaction, Vibrotactile, Virtual reality

Cite this

HapticHead: A Spherical Vibrotactile Grid around the Head for 3D Guidance in Virtual and Augmented Reality. / Kaul, Oliver Beren; Rohs, Michael.
CHI '17: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery (ACM), 2017. p. 3729-3740.

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Kaul, OB & Rohs, M 2017, HapticHead: A Spherical Vibrotactile Grid around the Head for 3D Guidance in Virtual and Augmented Reality. in CHI '17: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery (ACM), pp. 3729-3740, 2017 ACM SIGCHI Conference on Human Factors in Computing Systems, CHI 2017, Denver, United States, 6 May 2017. https://doi.org/10.1145/3025453.3025684
Kaul, O. B., & Rohs, M. (2017). HapticHead: A Spherical Vibrotactile Grid around the Head for 3D Guidance in Virtual and Augmented Reality. In CHI '17: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (pp. 3729-3740). Association for Computing Machinery (ACM). https://doi.org/10.1145/3025453.3025684
Kaul OB, Rohs M. HapticHead: A Spherical Vibrotactile Grid around the Head for 3D Guidance in Virtual and Augmented Reality. In CHI '17: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery (ACM). 2017. p. 3729-3740. doi: 10.1145/3025453.3025684
Kaul, Oliver Beren; Rohs, Michael. / HapticHead: A Spherical Vibrotactile Grid around the Head for 3D Guidance in Virtual and Augmented Reality. CHI '17: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery (ACM), 2017. pp. 3729-3740
BibTeX
@inproceedings{39d889b8719f4041a54d478ca9af4245,
title = "HapticHead: A Spherical Vibrotactile Grid around the Head for 3D Guidance in Virtual and Augmented Reality",
abstract = "Current virtual and augmented reality head-mounted displays usually include no or only a single vibration motor for haptic feedback and do not use it for guidance. We present HapticHead, a system utilizing multiple vibrotactile actuators distributed in three concentric ellipses around the head for intuitive haptic guidance through moving tactile cues. We conducted three experiments, which indicate that HapticHead vibrotactile feedback is both faster (2.6 s vs. 6.9 s) and more precise (96.4 % vs. 54.2 % success rate) than spatial audio (generic head-related transfer function) for finding visible virtual objects in 3D space around the user. The baseline of visual feedback is - as expected - more precise (99.7 % success rate) and faster (1.3 s) in comparison, but there are many applications in which visual feedback is not desirable or available due to lighting conditions, visual overload, or visual impairments. Mean final precision with HapticHead feedback on invisible targets is 2.3° compared to 0.8° with visual feedback. We successfully navigated blindfolded users to real household items at different heights using HapticHead vibrotactile feedback independently of a head-mounted display.",
keywords = "3D output, Augmented reality, Guidance, Haptic feedback, Navigation, Spatial interaction, Vibrotactile, Virtual reality",
author = "Kaul, {Oliver Beren} and Michael Rohs",
note = "Publisher Copyright: {\textcopyright} 2017 ACM. Copyright: Copyright 2018 Elsevier B.V., All rights reserved.; 2017 ACM SIGCHI Conference on Human Factors in Computing Systems, CHI 2017 ; Conference date: 06-05-2017 Through 11-05-2017",
year = "2017",
month = may,
day = "2",
doi = "10.1145/3025453.3025684",
language = "English",
pages = "3729--3740",
booktitle = "CHI '17",
publisher = "Association for Computing Machinery (ACM)",
address = "United States",

}

RIS

TY - GEN

T1 - HapticHead

T2 - 2017 ACM SIGCHI Conference on Human Factors in Computing Systems, CHI 2017

AU - Kaul, Oliver Beren

AU - Rohs, Michael

N1 - Publisher Copyright: © 2017 ACM. Copyright: Copyright 2018 Elsevier B.V., All rights reserved.

PY - 2017/5/2

Y1 - 2017/5/2

N2 - Current virtual and augmented reality head-mounted displays usually include no or only a single vibration motor for haptic feedback and do not use it for guidance. We present HapticHead, a system utilizing multiple vibrotactile actuators distributed in three concentric ellipses around the head for intuitive haptic guidance through moving tactile cues. We conducted three experiments, which indicate that HapticHead vibrotactile feedback is both faster (2.6 s vs. 6.9 s) and more precise (96.4 % vs. 54.2 % success rate) than spatial audio (generic head-related transfer function) for finding visible virtual objects in 3D space around the user. The baseline of visual feedback is - as expected - more precise (99.7 % success rate) and faster (1.3 s) in comparison, but there are many applications in which visual feedback is not desirable or available due to lighting conditions, visual overload, or visual impairments. Mean final precision with HapticHead feedback on invisible targets is 2.3° compared to 0.8° with visual feedback. We successfully navigated blindfolded users to real household items at different heights using HapticHead vibrotactile feedback independently of a head-mounted display.

AB - Current virtual and augmented reality head-mounted displays usually include no or only a single vibration motor for haptic feedback and do not use it for guidance. We present HapticHead, a system utilizing multiple vibrotactile actuators distributed in three concentric ellipses around the head for intuitive haptic guidance through moving tactile cues. We conducted three experiments, which indicate that HapticHead vibrotactile feedback is both faster (2.6 s vs. 6.9 s) and more precise (96.4 % vs. 54.2 % success rate) than spatial audio (generic head-related transfer function) for finding visible virtual objects in 3D space around the user. The baseline of visual feedback is - as expected - more precise (99.7 % success rate) and faster (1.3 s) in comparison, but there are many applications in which visual feedback is not desirable or available due to lighting conditions, visual overload, or visual impairments. Mean final precision with HapticHead feedback on invisible targets is 2.3° compared to 0.8° with visual feedback. We successfully navigated blindfolded users to real household items at different heights using HapticHead vibrotactile feedback independently of a head-mounted display.

KW - 3D output

KW - Augmented reality

KW - Guidance

KW - Haptic feedback

KW - Navigation

KW - Spatial interaction

KW - Vibrotactile

KW - Virtual reality

UR - http://www.scopus.com/inward/record.url?scp=85029123271&partnerID=8YFLogxK

U2 - 10.1145/3025453.3025684

DO - 10.1145/3025453.3025684

M3 - Conference contribution

AN - SCOPUS:85029123271

SP - 3729

EP - 3740

BT - CHI '17

PB - Association for Computing Machinery (ACM)

Y2 - 6 May 2017 through 11 May 2017

ER -