Valletto: A multimodal interface for ubiquitous visual analytics

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Authors

  • Jan Frederik Kassel
  • Michael Rohs

External Research Organisations

  • Volkswagen AG

Details

Original language: English
Title of host publication: CHI EA '18
Subtitle of host publication: Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems
Publisher: Association for Computing Machinery (ACM)
Pages: 1-6
Number of pages: 6
ISBN (electronic): 9781450356213
Publication status: Published - 20 Apr 2018
Event: 2018 CHI Conference on Human Factors in Computing Systems, CHI EA 2018 - Montreal, Canada
Duration: 21 Apr 2018 - 26 Apr 2018

Abstract

Modern technologies enable data analysis in scenarios where keyboard and mouse are not available. Research on multimodality in visual analytics is facing this challenge. But existing approaches consider exclusively static environments with large screens. Therefore, we envision Valletto, a prototypical tablet app which allows the user to generate and specify visualizations through a speech-based conversational interface, through multitouch gestures, and through a conventional GUI interface. We conducted an initial expert evaluation to gain information on the modality function mapping and for the integration of different modalities. Our aim is to discuss design and interaction considerations in a mobile context which fits the user’s daily life.

Keywords

    Conversational Interface, Mobile Device, Multimodal Interaction, Ubiquitous Computing, User Experience, Visualization

Cite this

Valletto: A multimodal interface for ubiquitous visual analytics. / Kassel, Jan Frederik; Rohs, Michael.
CHI EA '18: Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery (ACM), 2018. pp. 1-6, LBW005.

Kassel, JF & Rohs, M 2018, Valletto: A multimodal interface for ubiquitous visual analytics. in CHI EA '18: Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems., LBW005, Association for Computing Machinery (ACM), pp. 1-6, 2018 CHI Conference on Human Factors in Computing Systems, CHI EA 2018, Montreal, Canada, 21 Apr 2018. https://doi.org/10.1145/3170427.3188445
Kassel, J. F., & Rohs, M. (2018). Valletto: A multimodal interface for ubiquitous visual analytics. In CHI EA '18: Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1-6). Article LBW005. Association for Computing Machinery (ACM). https://doi.org/10.1145/3170427.3188445
Kassel JF, Rohs M. Valletto: A multimodal interface for ubiquitous visual analytics. In CHI EA '18: Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery (ACM). 2018. p. 1-6. LBW005. doi: 10.1145/3170427.3188445
Kassel, Jan Frederik; Rohs, Michael. / Valletto: A multimodal interface for ubiquitous visual analytics. CHI EA '18: Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery (ACM), 2018. pp. 1-6.
@inproceedings{6c112743d95840feb69494151cec4b51,
title = "Valletto: A multimodal interface for ubiquitous visual analytics",
abstract = "Modern technologies enable data analysis in scenarios where keyboard and mouse are not available. Research on multimodality in visual analytics is facing this challenge. But existing approaches consider exclusively static environments with large screens. Therefore, we envision Valletto, a prototypical tablet app which allows the user to generate and specify visualizations through a speech-based conversational interface, through multitouch gestures, and through a conventional GUI interface. We conducted an initial expert evaluation to gain information on the modality function mapping and for the integration of different modalities. Our aim is to discuss design and interaction considerations in a mobile context which fits the user{\textquoteright}s daily life.",
keywords = "Conversational Interface, Mobile Device, Multimodal Interaction, Ubiquitous Computing, User Experience, Visualization",
author = "Kassel, {Jan Frederik} and Michael Rohs",
note = "Funding Information: Any opinions, findings, and conclusions expressed in this paper do not necessarily reflect the views of the Volkswagen Group. Publisher Copyright: Copyright held by the owner/author(s). Copyright: Copyright 2020 Elsevier B.V., All rights reserved.; 2018 CHI Conference on Human Factors in Computing Systems, CHI EA 2018 ; Conference date: 21-04-2018 Through 26-04-2018",
year = "2018",
month = apr,
day = "20",
doi = "10.1145/3170427.3188445",
language = "English",
pages = "1--6",
booktitle = "CHI EA '18",
publisher = "Association for Computing Machinery (ACM)",
address = "United States",

}


TY - GEN

T1 - Valletto: A multimodal interface for ubiquitous visual analytics

T2 - 2018 CHI Conference on Human Factors in Computing Systems, CHI EA 2018

AU - Kassel, Jan Frederik

AU - Rohs, Michael

N1 - Funding Information: Any opinions, findings, and conclusions expressed in this paper do not necessarily reflect the views of the Volkswagen Group. Publisher Copyright: Copyright held by the owner/author(s). Copyright: Copyright 2020 Elsevier B.V., All rights reserved.

PY - 2018/4/20

Y1 - 2018/4/20

N2 - Modern technologies enable data analysis in scenarios where keyboard and mouse are not available. Research on multimodality in visual analytics is facing this challenge. But existing approaches consider exclusively static environments with large screens. Therefore, we envision Valletto, a prototypical tablet app which allows the user to generate and specify visualizations through a speech-based conversational interface, through multitouch gestures, and through a conventional GUI interface. We conducted an initial expert evaluation to gain information on the modality function mapping and for the integration of different modalities. Our aim is to discuss design and interaction considerations in a mobile context which fits the user’s daily life.

AB - Modern technologies enable data analysis in scenarios where keyboard and mouse are not available. Research on multimodality in visual analytics is facing this challenge. But existing approaches consider exclusively static environments with large screens. Therefore, we envision Valletto, a prototypical tablet app which allows the user to generate and specify visualizations through a speech-based conversational interface, through multitouch gestures, and through a conventional GUI interface. We conducted an initial expert evaluation to gain information on the modality function mapping and for the integration of different modalities. Our aim is to discuss design and interaction considerations in a mobile context which fits the user’s daily life.

KW - Conversational Interface

KW - Mobile Device

KW - Multimodal Interaction

KW - Ubiquitous Computing

KW - User Experience

KW - Visualization

UR - http://www.scopus.com/inward/record.url?scp=85052018709&partnerID=8YFLogxK

U2 - 10.1145/3170427.3188445

DO - 10.1145/3170427.3188445

M3 - Conference contribution

AN - SCOPUS:85052018709

SP - 1

EP - 6

BT - CHI EA '18

PB - Association for Computing Machinery (ACM)

Y2 - 21 April 2018 through 26 April 2018

ER -