TIB-VA at SemEval-2022 Task 5: A Multimodal Architecture for the Detection and Classification of Misogynous Memes

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Authors

  • Sherzod Hakimov
  • Gullal S. Cheema
  • Ralph Ewerth

External Research Organisations

  • German National Library of Science and Technology (TIB)

Details

Original language: English
Title of host publication: SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop
Editors: Guy Emerson, Natalie Schluter, Gabriel Stanovsky, Ritesh Kumar, Alexis Palmer, Nathan Schneider, Siddharth Singh, Shyam Ratan
Pages: 756-760
Number of pages: 5
ISBN (electronic): 9781955917803
Publication status: Published - Jul 2022
Event: 16th International Workshop on Semantic Evaluation, SemEval 2022 - Seattle, United States
Duration: 14 Jul 2022 - 15 Jul 2022

Publication series

Name: SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop

Abstract

The detection of offensive, hateful content on social media is a challenging problem that affects many online users on a daily basis. Hateful content is often used to target a group of people based on ethnicity, gender, religion and other factors. The hate or contempt toward women has been increasing on social platforms. Misogynous content detection is especially challenging when textual and visual modalities are combined to form a single context, e.g., an overlay text embedded on top of an image, also known as a meme. In this paper, we present a multimodal architecture that combines textual and visual features to detect misogynous memes. The proposed architecture is evaluated in the SemEval-2022 Task 5: MAMI - Multimedia Automatic Misogyny Identification challenge under the team name TIB-VA. We obtained the best result in Task B, where the challenge is to classify whether a given document is misogynous and further identify the following sub-classes: shaming, stereotype, objectification, and violence.
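The abstract describes fusing textual and visual features into a single representation for multi-label classification (misogynous plus four sub-classes). A minimal late-fusion sketch of that idea, assuming concatenated embeddings and a linear sigmoid head; all dimensions, weights, and the fusion strategy here are illustrative stand-ins, not the paper's actual model:

```python
import numpy as np

# Hypothetical feature sizes; real encoder outputs would come from
# pretrained text/image models (not reproduced here).
TEXT_DIM, IMAGE_DIM, NUM_LABELS = 512, 512, 5  # misogynous + 4 sub-classes

def fuse_and_classify(text_feat, image_feat, W, b):
    """Concatenate modality features and apply a linear multi-label head."""
    fused = np.concatenate([text_feat, image_feat], axis=-1)
    logits = fused @ W + b
    # Sigmoid per label: sub-classes are not mutually exclusive.
    return 1.0 / (1.0 + np.exp(-logits))

# Random stand-ins for encoder outputs and head weights.
rng = np.random.default_rng(0)
text_feat = rng.standard_normal(TEXT_DIM)
image_feat = rng.standard_normal(IMAGE_DIM)
W = rng.standard_normal((TEXT_DIM + IMAGE_DIM, NUM_LABELS)) * 0.01
b = np.zeros(NUM_LABELS)

scores = fuse_and_classify(text_feat, image_feat, W, b)
labels = ["misogynous", "shaming", "stereotype", "objectification", "violence"]
predictions = {lbl: bool(s > 0.5) for lbl, s in zip(labels, scores)}
```

Independent sigmoids (rather than a softmax) fit the task's multi-label setup, since a misogynous meme can exhibit several sub-classes at once.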


Cite this

TIB-VA at SemEval-2022 Task 5: A Multimodal Architecture for the Detection and Classification of Misogynous Memes. / Hakimov, Sherzod; Cheema, Gullal S.; Ewerth, Ralph.
SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop. ed. / Guy Emerson; Natalie Schluter; Gabriel Stanovsky; Ritesh Kumar; Alexis Palmer; Nathan Schneider; Siddharth Singh; Shyam Ratan. 2022. p. 756-760 (SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop).

Research output: Chapter in book/report/conference proceedingConference contributionResearchpeer review

Hakimov, S, Cheema, GS & Ewerth, R 2022, TIB-VA at SemEval-2022 Task 5: A Multimodal Architecture for the Detection and Classification of Misogynous Memes. in G Emerson, N Schluter, G Stanovsky, R Kumar, A Palmer, N Schneider, S Singh & S Ratan (eds), SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop. SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop, pp. 756-760, 16th International Workshop on Semantic Evaluation, SemEval 2022, Seattle, United States, 14 Jul 2022. https://doi.org/10.48550/arXiv.2204.06299, https://doi.org/10.18653/v1/2022.semeval-1.105
Hakimov, S., Cheema, G. S., & Ewerth, R. (2022). TIB-VA at SemEval-2022 Task 5: A Multimodal Architecture for the Detection and Classification of Misogynous Memes. In G. Emerson, N. Schluter, G. Stanovsky, R. Kumar, A. Palmer, N. Schneider, S. Singh, & S. Ratan (Eds.), SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop (pp. 756-760). (SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop). https://doi.org/10.48550/arXiv.2204.06299, https://doi.org/10.18653/v1/2022.semeval-1.105
Hakimov S, Cheema GS, Ewerth R. TIB-VA at SemEval-2022 Task 5: A Multimodal Architecture for the Detection and Classification of Misogynous Memes. In Emerson G, Schluter N, Stanovsky G, Kumar R, Palmer A, Schneider N, Singh S, Ratan S, editors, SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop. 2022. p. 756-760. (SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop). doi: 10.48550/arXiv.2204.06299, 10.18653/v1/2022.semeval-1.105
Hakimov, Sherzod ; Cheema, Gullal S. ; Ewerth, Ralph. / TIB-VA at SemEval-2022 Task 5 : A Multimodal Architecture for the Detection and Classification of Misogynous Memes. SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop. editor / Guy Emerson ; Natalie Schluter ; Gabriel Stanovsky ; Ritesh Kumar ; Alexis Palmer ; Nathan Schneider ; Siddharth Singh ; Shyam Ratan. 2022. pp. 756-760 (SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop).
@inproceedings{a1b2924bea494b9fafa5b0889a0f10b9,
title = "TIB-VA at SemEval-2022 Task 5: A Multimodal Architecture for the Detection and Classification of Misogynous Memes",
abstract = "The detection of offensive, hateful content on social media is a challenging problem that affects many online users on a daily basis. Hateful content is often used to target a group of people based on ethnicity, gender, religion and other factors. The hate or contempt toward women has been increasing on social platforms. Misogynous content detection is especially challenging when textual and visual modalities are combined to form a single context, e.g., an overlay text embedded on top of an image, also known as meme. In this paper, we present a multimodal architecture that combines textual and visual features to detect misogynous memes. The proposed architecture is evaluated in the SemEval-2022 Task 5: MAMI - Multimedia Automatic Misogyny Identification challenge under the team name TIB-VA. We obtained the best result in the Task-B where the challenge is to classify whether a given document is misogynous and further identify the following sub-classes: shaming, stereotype, objectification, and violence.",
author = "Sherzod Hakimov and Cheema, {Gullal S.} and Ralph Ewerth",
note = "Funding Information: This work has received funding from the European Union{\textquoteright}s Horizon 2020 research and innovation program under the Marie Sk{\l}odowska-Curie grant agreement No. 812997 (CLEOPATRA ITN). ; 16th International Workshop on Semantic Evaluation, SemEval 2022 ; Conference date: 14-07-2022 Through 15-07-2022",
year = "2022",
month = jul,
doi = "10.48550/arXiv.2204.06299",
language = "English",
series = "SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop",
pages = "756--760",
editor = "Guy Emerson and Natalie Schluter and Gabriel Stanovsky and Ritesh Kumar and Alexis Palmer and Nathan Schneider and Siddharth Singh and Shyam Ratan",
booktitle = "SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop",
}


TY - GEN

T1 - TIB-VA at SemEval-2022 Task 5

T2 - 16th International Workshop on Semantic Evaluation, SemEval 2022

AU - Hakimov, Sherzod

AU - Cheema, Gullal S.

AU - Ewerth, Ralph

N1 - Funding Information: This work has received funding from the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No. 812997 (CLEOPATRA ITN).

PY - 2022/7

Y1 - 2022/7

N2 - The detection of offensive, hateful content on social media is a challenging problem that affects many online users on a daily basis. Hateful content is often used to target a group of people based on ethnicity, gender, religion and other factors. The hate or contempt toward women has been increasing on social platforms. Misogynous content detection is especially challenging when textual and visual modalities are combined to form a single context, e.g., an overlay text embedded on top of an image, also known as meme. In this paper, we present a multimodal architecture that combines textual and visual features to detect misogynous memes. The proposed architecture is evaluated in the SemEval-2022 Task 5: MAMI - Multimedia Automatic Misogyny Identification challenge under the team name TIB-VA. We obtained the best result in the Task-B where the challenge is to classify whether a given document is misogynous and further identify the following sub-classes: shaming, stereotype, objectification, and violence.

AB - The detection of offensive, hateful content on social media is a challenging problem that affects many online users on a daily basis. Hateful content is often used to target a group of people based on ethnicity, gender, religion and other factors. The hate or contempt toward women has been increasing on social platforms. Misogynous content detection is especially challenging when textual and visual modalities are combined to form a single context, e.g., an overlay text embedded on top of an image, also known as meme. In this paper, we present a multimodal architecture that combines textual and visual features to detect misogynous memes. The proposed architecture is evaluated in the SemEval-2022 Task 5: MAMI - Multimedia Automatic Misogyny Identification challenge under the team name TIB-VA. We obtained the best result in the Task-B where the challenge is to classify whether a given document is misogynous and further identify the following sub-classes: shaming, stereotype, objectification, and violence.

UR - http://www.scopus.com/inward/record.url?scp=85137566496&partnerID=8YFLogxK

U2 - 10.48550/arXiv.2204.06299

DO - 10.48550/arXiv.2204.06299

M3 - Conference contribution

AN - SCOPUS:85137566496

T3 - SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop

SP - 756

EP - 760

BT - SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop

A2 - Emerson, Guy

A2 - Schluter, Natalie

A2 - Stanovsky, Gabriel

A2 - Kumar, Ritesh

A2 - Palmer, Alexis

A2 - Schneider, Nathan

A2 - Singh, Siddharth

A2 - Ratan, Shyam

Y2 - 14 July 2022 through 15 July 2022

ER -