MUWS 2024: The 3rd International Workshop on Multimodal Human Understanding for the Web and Social Media

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Authors

  • Marc A. Kastner
  • Gullal S. Cheema
  • Sherzod Hakimov
  • Noa Garcia

Research Organisations

External Research Organisations

  • Kyoto University
  • University of Potsdam
  • Osaka University

Details

Original language: English
Title of host publication: ICMR '24
Subtitle of host publication: Proceedings of the 2024 International Conference on Multimedia Retrieval
Pages: 1342-1344
Number of pages: 3
ISBN (electronic): 9798400706028
Publication status: Published - 7 Jun 2024
Event: 2024 International Conference on Multimedia Retrieval, ICMR 2024 - Phuket, Thailand
Duration: 10 Jun 2024 - 14 Jun 2024

Abstract

Multimodal human understanding and analysis are emerging research areas that cut across several disciplines, including Computer Vision (CV), Natural Language Processing (NLP), Speech Processing, Human-Computer Interaction (HCI), and Multimedia. Several multimodal learning techniques have recently shown the benefit of combining multiple modalities in image-text, audio-visual, and video representation learning, as well as in various downstream multimodal tasks. At their core, these methods model the modalities and their complex interactions using large amounts of data, different loss functions, and deep neural network architectures. However, many Web and Social media applications also need to model the human, including understanding human behaviour and perception. This makes it important to consider interdisciplinary approaches drawing on the social sciences and psychology. The core challenges are understanding various cross-modal relations, quantifying biases such as social biases, and assessing the applicability of models to real-world problems. Interdisciplinary theories such as semiotics or Gestalt psychology can provide additional insights into perceptual understanding through signs and symbols across multiple modalities. In general, these theories provide a compelling view of multimodality and perception that can further expand computational research and multimedia applications on the Web and Social media. The theme of the MUWS workshop, multimodal human understanding, covers various interdisciplinary challenges related to social bias analysis, multimodal representation learning, detection of human impressions or sentiment, hate speech and sarcasm in multimodal data, multimodal rhetoric and semantics, and related topics. The MUWS workshop is an interactive event that includes keynotes by relevant experts, a poster session, research presentations, and discussion.
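
As a purely illustrative aside (not part of the workshop record), the following minimal sketch shows one common instance of the image-text representation learning mentioned above: a CLIP-style symmetric contrastive (InfoNCE) objective that pulls matching image and text embeddings together and pushes mismatched pairs apart. The NumPy implementation, batch shapes, and temperature value are assumptions added here for the example.

# Illustrative sketch only: a CLIP-style symmetric contrastive loss between
# image and text embeddings. Shapes and temperature are assumed values.
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    # Scale each embedding to unit length so the dot product is cosine similarity.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def log_softmax(logits, axis=-1):
    # Numerically stable log-softmax.
    shifted = logits - logits.max(axis=axis, keepdims=True)
    return shifted - np.log(np.exp(shifted).sum(axis=axis, keepdims=True))

def clip_style_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric cross-entropy over cosine-similarity logits.

    image_emb, text_emb: (batch, dim) arrays where row i of each matrix
    comes from the same image-text pair (the positive pair).
    """
    img = l2_normalize(image_emb)
    txt = l2_normalize(text_emb)
    logits = img @ txt.T / temperature      # (batch, batch) pairwise similarities
    targets = np.arange(len(logits))        # matching pairs lie on the diagonal
    loss_i2t = -log_softmax(logits, axis=1)[targets, targets].mean()
    loss_t2i = -log_softmax(logits.T, axis=1)[targets, targets].mean()
    return 0.5 * (loss_i2t + loss_t2i)

# Toy usage with random embeddings standing in for encoder outputs.
rng = np.random.default_rng(0)
print(clip_style_loss(rng.normal(size=(8, 512)), rng.normal(size=(8, 512))))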

Keywords

    human understanding, image-text relations, machine learning, multimodality, semiotics, social media, web

ASJC Scopus subject areas

Cite this

MUWS 2024: The 3rd International Workshop on Multimodal Human Understanding for the Web and Social Media. / Kastner, Marc A.; Cheema, Gullal S.; Hakimov, Sherzod et al.
ICMR '24: Proceedings of the 2024 International Conference on Multimedia Retrieval. 2024. p. 1342-1344.

Kastner, MA, Cheema, GS, Hakimov, S & Garcia, N 2024, MUWS 2024: The 3rd International Workshop on Multimodal Human Understanding for the Web and Social Media. in ICMR '24: Proceedings of the 2024 International Conference on Multimedia Retrieval. pp. 1342-1344, 2024 International Conference on Multimedia Retrieval, ICMR 2024, Phuket, Thailand, 10 Jun 2024. https://doi.org/10.1145/3652583.3658893
Kastner, M. A., Cheema, G. S., Hakimov, S., & Garcia, N. (2024). MUWS 2024: The 3rd International Workshop on Multimodal Human Understanding for the Web and Social Media. In ICMR '24: Proceedings of the 2024 International Conference on Multimedia Retrieval (pp. 1342-1344) https://doi.org/10.1145/3652583.3658893
Kastner MA, Cheema GS, Hakimov S, Garcia N. MUWS 2024: The 3rd International Workshop on Multimodal Human Understanding for the Web and Social Media. In ICMR '24: Proceedings of the 2024 International Conference on Multimedia Retrieval. 2024. p. 1342-1344 doi: 10.1145/3652583.3658893
Kastner, Marc A.; Cheema, Gullal S.; Hakimov, Sherzod et al. / MUWS 2024: The 3rd International Workshop on Multimodal Human Understanding for the Web and Social Media. ICMR '24: Proceedings of the 2024 International Conference on Multimedia Retrieval. 2024. pp. 1342-1344.
BibTeX
@inproceedings{801935a4b72c4cd58ab7309c863eef7d,
title = "MUWS 2024: The 3rd International Workshop on Multimodal Human Understanding for the Web and Social Media",
abstract = "Multimodal human understanding and analysis are emerging research areas that cut through several disciplines like Computer Vision (CV), Natural Language Processing (NLP), Speech Processing, Human-Computer Interaction (HCI), and Multimedia. Several multimodal learning techniques have recently shown the benefit of combining multiple modalities in image-text, audio-visual and video representation learning and various downstream multimodal tasks. At the core, these methods focus on modelling the modalities and their complex interactions by using large amounts of data, different loss functions and deep neural network architectures. However, for many Web and Social media applications, there is the need to model the human, including the understanding of human behaviour and perception. For this, it becomes important to consider interdisciplinary approaches, including social sciences and psychology. The core is understanding various cross-modal relations, quantifying bias such as social biases, and the applicability of models to real-world problems. Interdisciplinary theories such as semiotics or gestalt psychology can provide additional insights on perceptual understanding through signs and symbols across multiple modalities. In general, these theories provide a compelling view of multimodality and perception that can further expand computational research and multimedia applications on the Web and Social media. The theme of the MUWS workshop, multimodal human understanding, includes various interdisciplinary challenges related to social bias analyses, multimodal representation learning, detection of human impressions or sentiment, hate speech, sarcasm in multimodal data, multimodal rhetoric and semantics, and related topics. The MUWS workshop is an interactive event and includes keynotes by relevant experts, a poster session, research presentations and discussion.",
keywords = "human understanding, image-text relations, machine learning, multimodality, semiotics, social media, web",
author = "Kastner, {Marc A.} and Cheema, {Gullal S.} and Sherzod Hakimov and Noa Garcia",
note = "Publisher Copyright: {\textcopyright} 2024 Copyright held by the owner/author(s).; 2024 International Conference on Multimedia Retrieval, ICMR 2024 ; Conference date: 10-06-2024 Through 14-06-2024",
year = "2024",
month = jun,
day = "7",
doi = "10.1145/3652583.3658893",
language = "English",
pages = "1342--1344",
booktitle = "ICMR '24",

}

RIS

TY - GEN

T1 - MUWS 2024

T2 - 2024 International Conference on Multimedia Retrieval, ICMR 2024

AU - Kastner, Marc A.

AU - Cheema, Gullal S.

AU - Hakimov, Sherzod

AU - Garcia, Noa

N1 - Publisher Copyright: © 2024 Copyright held by the owner/author(s).

PY - 2024/6/7

Y1 - 2024/6/7

N2 - Multimodal human understanding and analysis are emerging research areas that cut through several disciplines like Computer Vision (CV), Natural Language Processing (NLP), Speech Processing, Human-Computer Interaction (HCI), and Multimedia. Several multimodal learning techniques have recently shown the benefit of combining multiple modalities in image-text, audio-visual and video representation learning and various downstream multimodal tasks. At the core, these methods focus on modelling the modalities and their complex interactions by using large amounts of data, different loss functions and deep neural network architectures. However, for many Web and Social media applications, there is the need to model the human, including the understanding of human behaviour and perception. For this, it becomes important to consider interdisciplinary approaches, including social sciences and psychology. The core is understanding various cross-modal relations, quantifying bias such as social biases, and the applicability of models to real-world problems. Interdisciplinary theories such as semiotics or gestalt psychology can provide additional insights on perceptual understanding through signs and symbols across multiple modalities. In general, these theories provide a compelling view of multimodality and perception that can further expand computational research and multimedia applications on the Web and Social media. The theme of the MUWS workshop, multimodal human understanding, includes various interdisciplinary challenges related to social bias analyses, multimodal representation learning, detection of human impressions or sentiment, hate speech, sarcasm in multimodal data, multimodal rhetoric and semantics, and related topics. The MUWS workshop is an interactive event and includes keynotes by relevant experts, a poster session, research presentations and discussion.

AB - Multimodal human understanding and analysis are emerging research areas that cut through several disciplines like Computer Vision (CV), Natural Language Processing (NLP), Speech Processing, Human-Computer Interaction (HCI), and Multimedia. Several multimodal learning techniques have recently shown the benefit of combining multiple modalities in image-text, audio-visual and video representation learning and various downstream multimodal tasks. At the core, these methods focus on modelling the modalities and their complex interactions by using large amounts of data, different loss functions and deep neural network architectures. However, for many Web and Social media applications, there is the need to model the human, including the understanding of human behaviour and perception. For this, it becomes important to consider interdisciplinary approaches, including social sciences and psychology. The core is understanding various cross-modal relations, quantifying bias such as social biases, and the applicability of models to real-world problems. Interdisciplinary theories such as semiotics or gestalt psychology can provide additional insights on perceptual understanding through signs and symbols across multiple modalities. In general, these theories provide a compelling view of multimodality and perception that can further expand computational research and multimedia applications on the Web and Social media. The theme of the MUWS workshop, multimodal human understanding, includes various interdisciplinary challenges related to social bias analyses, multimodal representation learning, detection of human impressions or sentiment, hate speech, sarcasm in multimodal data, multimodal rhetoric and semantics, and related topics. The MUWS workshop is an interactive event and includes keynotes by relevant experts, a poster session, research presentations and discussion.

KW - human understanding

KW - image-text relations

KW - machine learning

KW - multimodality

KW - semiotics

KW - social media

KW - web

UR - http://www.scopus.com/inward/record.url?scp=85199205906&partnerID=8YFLogxK

U2 - 10.1145/3652583.3658893

DO - 10.1145/3652583.3658893

M3 - Conference contribution

AN - SCOPUS:85199205906

SP - 1342

EP - 1344

BT - ICMR '24

Y2 - 10 June 2024 through 14 June 2024

ER -