Valletto: A multimodal interface for ubiquitous visual analytics

Publication: Contribution to book/report/anthology/conference proceedings › Conference paper › Research › Peer review

Authorship

  • Jan Frederik Kassel
  • Michael Rohs

External organisations

  • Volkswagen AG

Details

Original language: English
Title of host publication: CHI EA '18
Subtitle: Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems
Publisher: Association for Computing Machinery (ACM)
Pages: 1-6
Number of pages: 6
ISBN (electronic): 9781450356213
Publication status: Published - 20 Apr 2018
Event: 2018 CHI Conference on Human Factors in Computing Systems, CHI EA 2018 - Montreal, Canada
Duration: 21 Apr 2018 - 26 Apr 2018

Abstract

Modern technologies enable data analysis in scenarios where keyboard and mouse are not available. Research on multimodality in visual analytics is facing this challenge, but existing approaches consider exclusively static environments with large screens. Therefore, we envision Valletto, a prototypical tablet app that allows the user to generate and specify visualizations through a speech-based conversational interface, through multitouch gestures, and through a conventional GUI. We conducted an initial expert evaluation to gain information on the modality-function mapping and on the integration of the different modalities. Our aim is to discuss design and interaction considerations in a mobile context that fits the user's daily life.
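
The abstract describes three input modalities that all operate on the same visualization. As a rough, hypothetical illustration of such a modality-function mapping (not taken from the paper, which publishes no code), the following Python sketch routes speech, gesture, and GUI events onto one shared visualization specification; all identifiers (VisSpec, handle_event, the event payloads) are invented for this example.

# Hypothetical sketch only; Valletto's actual implementation is not published here.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class VisSpec:
    """A minimal visualization specification shared by all modalities."""
    chart_type: str = "bar"
    x: Optional[str] = None
    y: Optional[str] = None
    filters: dict = field(default_factory=dict)

def handle_event(spec: VisSpec, modality: str, payload: dict) -> VisSpec:
    """Map an input event from any modality onto the same spec updates."""
    if modality == "speech":
        # e.g. a recognized intent such as {"intent": "set_chart", "value": "scatter"}
        if payload.get("intent") == "set_chart":
            spec.chart_type = payload["value"]
    elif modality == "gesture":
        # e.g. a two-finger rotate gesture swaps the encoded axes
        if payload.get("gesture") == "swap_axes":
            spec.x, spec.y = spec.y, spec.x
    elif modality == "gui":
        # conventional widgets write filter settings directly
        spec.filters.update(payload.get("filters", {}))
    return spec

# Usage: speech picks the chart type, a gesture then swaps the axes.
spec = VisSpec(x="year", y="sales")
spec = handle_event(spec, "speech", {"intent": "set_chart", "value": "scatter"})
spec = handle_event(spec, "gesture", {"gesture": "swap_axes"})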

ASJC Scopus subject areas

Cite

Valletto: A multimodal interface for ubiquitous visual analytics. / Kassel, Jan Frederik; Rohs, Michael.
CHI EA '18: Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery (ACM), 2018. pp. 1-6 LBW005.


Kassel, JF & Rohs, M 2018, Valletto: A multimodal interface for ubiquitous visual analytics. in CHI EA '18: Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems, LBW005, Association for Computing Machinery (ACM), pp. 1-6, 2018 CHI Conference on Human Factors in Computing Systems, CHI EA 2018, Montreal, Canada, 21 Apr 2018. https://doi.org/10.1145/3170427.3188445
Kassel, J. F., & Rohs, M. (2018). Valletto: A multimodal interface for ubiquitous visual analytics. In CHI EA '18: Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1-6). Article LBW005. Association for Computing Machinery (ACM). https://doi.org/10.1145/3170427.3188445
Kassel JF, Rohs M. Valletto: A multimodal interface for ubiquitous visual analytics. In CHI EA '18: Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery (ACM). 2018. p. 1-6. LBW005. doi: 10.1145/3170427.3188445
Kassel, Jan Frederik; Rohs, Michael. / Valletto: A multimodal interface for ubiquitous visual analytics. CHI EA '18: Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery (ACM), 2018. pp. 1-6
BibTeX
@inproceedings{6c112743d95840feb69494151cec4b51,
title = "Valletto: A multimodal interface for ubiquitous visual analytics",
abstract = "Modern technologies enable data analysis in scenarios where keyboard and mouse are not available. Research on multimodality in visual analytics is facing this challenge. But existing approaches consider exclusively static environments with large screens. Therefore, we envision Valletto, a prototypical tablet app which allows the user to generate and specify visualizations through a speech-based conversational interface, through multitouch gestures, and through a conventional GUI interface. We conducted an initial expert evaluation to gain information on the modality function mapping and for the integration of different modalities. Our aim is to discuss design and interaction considerations in a mobile context which fits the user{\textquoteright}s daily life.",
keywords = "Conversational Interface, Mobile Device, Multimodal Interaction, Ubiquitous Computing, User Experience, Visualization",
author = "Kassel, {Jan Frederik} and Michael Rohs",
note = "Funding Information: Any opinions, findings, and conclusions expressed in this paper do not necessarily reflect the views of the Volkswagen Group. Publisher Copyright: Copyright held by the owner/author(s). Copyright: Copyright 2020 Elsevier B.V., All rights reserved.; 2018 CHI Conference on Human Factors in Computing Systems, CHI EA 2018 ; Conference date: 21-04-2018 Through 26-04-2018",
year = "2018",
month = apr,
day = "20",
doi = "10.1145/3170427.3188445",
language = "English",
pages = "1--6",
booktitle = "CHI EA '18",
publisher = "Association for Computing Machinery (ACM)",
address = "United States",

}

RIS

TY - GEN

T1 - Valletto

T2 - 2018 CHI Conference on Human Factors in Computing Systems, CHI EA 2018

AU - Kassel, Jan Frederik

AU - Rohs, Michael

N1 - Funding Information: Any opinions, findings, and conclusions expressed in this paper do not necessarily reflect the views of the Volkswagen Group. Publisher Copyright: Copyright held by the owner/author(s). Copyright: Copyright 2020 Elsevier B.V., All rights reserved.

PY - 2018/4/20

Y1 - 2018/4/20

N2 - Modern technologies enable data analysis in scenarios where keyboard and mouse are not available. Research on multimodality in visual analytics is facing this challenge. But existing approaches consider exclusively static environments with large screens. Therefore, we envision Valletto, a prototypical tablet app which allows the user to generate and specify visualizations through a speech-based conversational interface, through multitouch gestures, and through a conventional GUI interface. We conducted an initial expert evaluation to gain information on the modality function mapping and for the integration of different modalities. Our aim is to discuss design and interaction considerations in a mobile context which fits the user’s daily life.

AB - Modern technologies enable data analysis in scenarios where keyboard and mouse are not available. Research on multimodality in visual analytics is facing this challenge. But existing approaches consider exclusively static environments with large screens. Therefore, we envision Valletto, a prototypical tablet app which allows the user to generate and specify visualizations through a speech-based conversational interface, through multitouch gestures, and through a conventional GUI interface. We conducted an initial expert evaluation to gain information on the modality function mapping and for the integration of different modalities. Our aim is to discuss design and interaction considerations in a mobile context which fits the user’s daily life.

KW - Conversational Interface

KW - Mobile Device

KW - Multimodal Interaction

KW - Ubiquitous Computing

KW - User Experience

KW - Visualization

UR - http://www.scopus.com/inward/record.url?scp=85052018709&partnerID=8YFLogxK

U2 - 10.1145/3170427.3188445

DO - 10.1145/3170427.3188445

M3 - Conference contribution

AN - SCOPUS:85052018709

SP - 1

EP - 6

BT - CHI EA '18

PB - Association for Computing Machinery (ACM)

Y2 - 21 April 2018 through 26 April 2018

ER -