TIB-VA at SemEval-2022 Task 5: A Multimodal Architecture for the Detection and Classification of Misogynous Memes

Publication: Contribution to book/report/anthology/conference proceedings › Conference paper › Research › Peer-reviewed

Authors

  • Sherzod Hakimov
  • Gullal S. Cheema
  • Ralph Ewerth

Organisational units

External organisations

  • Technische Informationsbibliothek (TIB) Leibniz-Informationszentrum Technik und Naturwissenschaften und Universitätsbibliothek

Details

Original language: English
Title of host publication: SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop
Editors: Guy Emerson, Natalie Schluter, Gabriel Stanovsky, Ritesh Kumar, Alexis Palmer, Nathan Schneider, Siddharth Singh, Shyam Ratan
Pages: 756-760
Number of pages: 5
ISBN (electronic): 9781955917803
Publication status: Published - July 2022
Event: 16th International Workshop on Semantic Evaluation, SemEval 2022 - Seattle, United States
Duration: 14 July 2022 - 15 July 2022

Publication series

Name: SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop

Abstract

The detection of offensive, hateful content on social media is a challenging problem that affects many online users on a daily basis. Hateful content is often used to target a group of people based on ethnicity, gender, religion, and other factors. Hate or contempt toward women has been increasing on social platforms. Misogynous content detection is especially challenging when textual and visual modalities are combined to form a single context, e.g., an overlay text embedded on top of an image, also known as a meme. In this paper, we present a multimodal architecture that combines textual and visual features to detect misogynous memes. The proposed architecture is evaluated in the SemEval-2022 Task 5: MAMI - Multimedia Automatic Misogyny Identification challenge under the team name TIB-VA. We obtained the best result in Task B, where the challenge is to classify whether a given document is misogynous and further identify the following sub-classes: shaming, stereotype, objectification, and violence.
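
The abstract describes the system only at a high level: textual and visual features are combined to decide whether a meme is misogynous and, for Task B, which of the four sub-classes apply. As a rough illustration of such a multi-label text-image setup (not the authors' actual model), the following PyTorch sketch concatenates pre-computed text and image embeddings and scores all five labels with per-label sigmoids; the embedding sizes, hidden layer, and concatenation fusion are assumptions made for this example.

# Illustrative sketch only, NOT the architecture reported in the paper:
# a simple late-fusion, multi-label classifier over pre-computed text and
# image embeddings (hidden size, dropout, and concatenation fusion are
# assumptions made for this example).
import torch
import torch.nn as nn

class LateFusionMemeClassifier(nn.Module):
    def __init__(self, text_dim=512, image_dim=512, hidden_dim=256, num_labels=5):
        super().__init__()
        # Labels: misogynous, shaming, stereotype, objectification, violence
        self.classifier = nn.Sequential(
            nn.Linear(text_dim + image_dim, hidden_dim),
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(hidden_dim, num_labels),
        )

    def forward(self, text_emb, image_emb):
        # Fuse the two modalities by concatenation and score all labels at once.
        fused = torch.cat([text_emb, image_emb], dim=-1)
        return self.classifier(fused)  # raw logits, one per label

# Multi-label setup: one sigmoid per label instead of a single softmax.
model = LateFusionMemeClassifier()
text_emb = torch.randn(8, 512)    # embeddings of the overlay text (batch of 8)
image_emb = torch.randn(8, 512)   # embeddings of the meme image
targets = torch.randint(0, 2, (8, 5)).float()
loss = nn.BCEWithLogitsLoss()(model(text_emb, image_emb), targets)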

Cite

TIB-VA at SemEval-2022 Task 5: A Multimodal Architecture for the Detection and Classification of Misogynous Memes. / Hakimov, Sherzod; Cheema, Gullal S.; Ewerth, Ralph.
SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop. Ed. / Guy Emerson; Natalie Schluter; Gabriel Stanovsky; Ritesh Kumar; Alexis Palmer; Nathan Schneider; Siddharth Singh; Shyam Ratan. 2022. pp. 756-760 (SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop).


Hakimov, S, Cheema, GS & Ewerth, R 2022, TIB-VA at SemEval-2022 Task 5: A Multimodal Architecture for the Detection and Classification of Misogynous Memes. in G Emerson, N Schluter, G Stanovsky, R Kumar, A Palmer, N Schneider, S Singh & S Ratan (eds), SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop. SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop, pp. 756-760, 16th International Workshop on Semantic Evaluation, SemEval 2022, Seattle, United States, 14 July 2022. https://doi.org/10.48550/arXiv.2204.06299, https://doi.org/10.18653/v1/2022.semeval-1.105
Hakimov, S., Cheema, G. S., & Ewerth, R. (2022). TIB-VA at SemEval-2022 Task 5: A Multimodal Architecture for the Detection and Classification of Misogynous Memes. In G. Emerson, N. Schluter, G. Stanovsky, R. Kumar, A. Palmer, N. Schneider, S. Singh, & S. Ratan (Eds.), SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop (pp. 756-760). (SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop). https://doi.org/10.48550/arXiv.2204.06299, https://doi.org/10.18653/v1/2022.semeval-1.105
Hakimov S, Cheema GS, Ewerth R. TIB-VA at SemEval-2022 Task 5: A Multimodal Architecture for the Detection and Classification of Misogynous Memes. In: Emerson G, Schluter N, Stanovsky G, Kumar R, Palmer A, Schneider N, Singh S, Ratan S, editors. SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop. 2022. p. 756-760. (SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop). doi: 10.48550/arXiv.2204.06299, 10.18653/v1/2022.semeval-1.105
Hakimov, Sherzod ; Cheema, Gullal S. ; Ewerth, Ralph. / TIB-VA at SemEval-2022 Task 5 : A Multimodal Architecture for the Detection and Classification of Misogynous Memes. SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop. Ed. / Guy Emerson ; Natalie Schluter ; Gabriel Stanovsky ; Ritesh Kumar ; Alexis Palmer ; Nathan Schneider ; Siddharth Singh ; Shyam Ratan. 2022. pp. 756-760 (SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop).
BibTeX
@inproceedings{a1b2924bea494b9fafa5b0889a0f10b9,
title = "TIB-VA at SemEval-2022 Task 5: A Multimodal Architecture for the Detection and Classification of Misogynous Memes",
abstract = "The detection of offensive, hateful content on social media is a challenging problem that affects many online users on a daily basis. Hateful content is often used to target a group of people based on ethnicity, gender, religion and other factors. The hate or contempt toward women has been increasing on social platforms. Misogynous content detection is especially challenging when textual and visual modalities are combined to form a single context, e.g., an overlay text embedded on top of an image, also known as meme. In this paper, we present a multimodal architecture that combines textual and visual features to detect misogynous memes. The proposed architecture is evaluated in the SemEval-2022 Task 5: MAMI - Multimedia Automatic Misogyny Identification challenge under the team name TIB-VA. We obtained the best result in the Task-B where the challenge is to classify whether a given document is misogynous and further identify the following sub-classes: shaming, stereotype, objectification, and violence.",
author = "Sherzod Hakimov and Cheema, {Gullal S.} and Ralph Ewerth",
note = "Funding Information: This work has received funding from the European Union{\textquoteright}s Horizon 2020 research and innovation program under the Marie Sk{\l}odowska-Curie grant agreement No. 812997 (CLEOPATRA ITN). ; 16th International Workshop on Semantic Evaluation, SemEval 2022 ; Conference date: 14-07-2022 Through 15-07-2022",
year = "2022",
month = jul,
doi = "10.48550/arXiv.2204.06299",
language = "English",
series = "SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop",
pages = "756--760",
editor = "Guy Emerson and Natalie Schluter and Gabriel Stanovsky and Ritesh Kumar and Alexis Palmer and Nathan Schneider and Siddharth Singh and Shyam Ratan",
booktitle = "SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop",

}

RIS

TY - GEN

T1 - TIB-VA at SemEval-2022 Task 5

T2 - 16th International Workshop on Semantic Evaluation, SemEval 2022

AU - Hakimov, Sherzod

AU - Cheema, Gullal S.

AU - Ewerth, Ralph

N1 - Funding Information: This work has received funding from the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No. 812997 (CLEOPATRA ITN).

PY - 2022/7

Y1 - 2022/7

N2 - The detection of offensive, hateful content on social media is a challenging problem that affects many online users on a daily basis. Hateful content is often used to target a group of people based on ethnicity, gender, religion and other factors. The hate or contempt toward women has been increasing on social platforms. Misogynous content detection is especially challenging when textual and visual modalities are combined to form a single context, e.g., an overlay text embedded on top of an image, also known as meme. In this paper, we present a multimodal architecture that combines textual and visual features to detect misogynous memes. The proposed architecture is evaluated in the SemEval-2022 Task 5: MAMI - Multimedia Automatic Misogyny Identification challenge under the team name TIB-VA. We obtained the best result in the Task-B where the challenge is to classify whether a given document is misogynous and further identify the following sub-classes: shaming, stereotype, objectification, and violence.

AB - The detection of offensive, hateful content on social media is a challenging problem that affects many online users on a daily basis. Hateful content is often used to target a group of people based on ethnicity, gender, religion and other factors. The hate or contempt toward women has been increasing on social platforms. Misogynous content detection is especially challenging when textual and visual modalities are combined to form a single context, e.g., an overlay text embedded on top of an image, also known as meme. In this paper, we present a multimodal architecture that combines textual and visual features to detect misogynous memes. The proposed architecture is evaluated in the SemEval-2022 Task 5: MAMI - Multimedia Automatic Misogyny Identification challenge under the team name TIB-VA. We obtained the best result in the Task-B where the challenge is to classify whether a given document is misogynous and further identify the following sub-classes: shaming, stereotype, objectification, and violence.

UR - http://www.scopus.com/inward/record.url?scp=85137566496&partnerID=8YFLogxK

U2 - 10.48550/arXiv.2204.06299

DO - 10.48550/arXiv.2204.06299

M3 - Conference contribution

AN - SCOPUS:85137566496

T3 - SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop

SP - 756

EP - 760

BT - SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop

A2 - Emerson, Guy

A2 - Schluter, Natalie

A2 - Stanovsky, Gabriel

A2 - Kumar, Ritesh

A2 - Palmer, Alexis

A2 - Schneider, Nathan

A2 - Singh, Siddharth

A2 - Ratan, Shyam

Y2 - 14 July 2022 through 15 July 2022

ER -