Details
| Original language | English |
|---|---|
| Title of host publication | SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop |
| Editors | Guy Emerson, Natalie Schluter, Gabriel Stanovsky, Ritesh Kumar, Alexis Palmer, Nathan Schneider, Siddharth Singh, Shyam Ratan |
| Pages | 756-760 |
| Number of pages | 5 |
| ISBN (electronic) | 9781955917803 |
| Publication status | Published - Jul 2022 |
| Event | 16th International Workshop on Semantic Evaluation, SemEval 2022 - Seattle, United States. Duration: 14 Jul 2022 → 15 Jul 2022 |
Publication series

| Name | SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop |
|---|---|
Abstract
The detection of offensive, hateful content on social media is a challenging problem that affects many online users on a daily basis. Hateful content is often used to target a group of people based on ethnicity, gender, religion, and other factors. Hate and contempt toward women have been increasing on social platforms. Misogynous content detection is especially challenging when textual and visual modalities are combined to form a single context, e.g., an overlay text embedded on top of an image, also known as a meme. In this paper, we present a multimodal architecture that combines textual and visual features to detect misogynous memes. The proposed architecture is evaluated in the SemEval-2022 Task 5: MAMI - Multimedia Automatic Misogyny Identification challenge under the team name TIB-VA. We obtained the best result in Task B, where the challenge is to classify whether a given document is misogynous and further identify the following sub-classes: shaming, stereotype, objectification, and violence.
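The abstract describes combining textual and visual features for multi-label classification (misogynous plus four sub-classes). As a minimal illustration only — not the authors' actual architecture — a late-fusion head over two feature vectors can be sketched as below; the feature dimensions, random weights, and encoder stand-ins are toy assumptions:

```python
import math
import random

random.seed(0)

LABELS = ["misogynous", "shaming", "stereotype", "objectification", "violence"]
D_TEXT, D_IMG = 8, 8  # toy sizes; real systems use encoder-sized vectors (assumption)

def fuse_and_classify(text_feat, image_feat, weights, bias):
    """Late fusion: concatenate modality features, then apply a linear
    multi-label head with an independent sigmoid per label."""
    x = text_feat + image_feat  # list concatenation = feature concatenation
    probs = []
    for w_row, b in zip(weights, bias):
        logit = sum(wi * xi for wi, xi in zip(w_row, x)) + b
        probs.append(1.0 / (1.0 + math.exp(-logit)))  # per-label probability
    return probs

# Stand-ins for text/image encoder outputs and an untrained linear head.
text_feat = [random.gauss(0, 1) for _ in range(D_TEXT)]
image_feat = [random.gauss(0, 1) for _ in range(D_IMG)]
weights = [[random.gauss(0, 0.1) for _ in range(D_TEXT + D_IMG)] for _ in LABELS]
bias = [0.0] * len(LABELS)

probs = fuse_and_classify(text_feat, image_feat, weights, bias)
predictions = {lab: p > 0.5 for lab, p in zip(LABELS, probs)}
```

The per-label sigmoid (rather than a softmax) reflects the multi-label nature of Task B: a meme can simultaneously exhibit, e.g., stereotype and objectification.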
ASJC Scopus subject areas
- Computer Science(all)
- Computational Theory and Mathematics
- Computer Science Applications
- Mathematics(all)
- Theoretical Computer Science
Cite this
Hakimov, S., Cheema, G. S., & Ewerth, R. (2022). TIB-VA at SemEval-2022 Task 5. In SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop. ed. / Guy Emerson; Natalie Schluter; Gabriel Stanovsky; Ritesh Kumar; Alexis Palmer; Nathan Schneider; Siddharth Singh; Shyam Ratan. 2022. p. 756-760 (SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop).
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review
TY - GEN
T1 - TIB-VA at SemEval-2022 Task 5
T2 - 16th International Workshop on Semantic Evaluation, SemEval 2022
AU - Hakimov, Sherzod
AU - Cheema, Gullal S.
AU - Ewerth, Ralph
N1 - Funding Information: This work has received funding from the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No. 812997 (CLEOPATRA ITN).
PY - 2022/7
Y1 - 2022/7
N2 - The detection of offensive, hateful content on social media is a challenging problem that affects many online users on a daily basis. Hateful content is often used to target a group of people based on ethnicity, gender, religion, and other factors. Hate and contempt toward women have been increasing on social platforms. Misogynous content detection is especially challenging when textual and visual modalities are combined to form a single context, e.g., an overlay text embedded on top of an image, also known as a meme. In this paper, we present a multimodal architecture that combines textual and visual features to detect misogynous memes. The proposed architecture is evaluated in the SemEval-2022 Task 5: MAMI - Multimedia Automatic Misogyny Identification challenge under the team name TIB-VA. We obtained the best result in Task B, where the challenge is to classify whether a given document is misogynous and further identify the following sub-classes: shaming, stereotype, objectification, and violence.
AB - The detection of offensive, hateful content on social media is a challenging problem that affects many online users on a daily basis. Hateful content is often used to target a group of people based on ethnicity, gender, religion, and other factors. Hate and contempt toward women have been increasing on social platforms. Misogynous content detection is especially challenging when textual and visual modalities are combined to form a single context, e.g., an overlay text embedded on top of an image, also known as a meme. In this paper, we present a multimodal architecture that combines textual and visual features to detect misogynous memes. The proposed architecture is evaluated in the SemEval-2022 Task 5: MAMI - Multimedia Automatic Misogyny Identification challenge under the team name TIB-VA. We obtained the best result in Task B, where the challenge is to classify whether a given document is misogynous and further identify the following sub-classes: shaming, stereotype, objectification, and violence.
UR - http://www.scopus.com/inward/record.url?scp=85137566496&partnerID=8YFLogxK
U2 - 10.48550/arXiv.2204.06299
DO - 10.48550/arXiv.2204.06299
M3 - Conference contribution
AN - SCOPUS:85137566496
T3 - SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop
SP - 756
EP - 760
BT - SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop
A2 - Emerson, Guy
A2 - Schluter, Natalie
A2 - Stanovsky, Gabriel
A2 - Kumar, Ritesh
A2 - Palmer, Alexis
A2 - Schneider, Nathan
A2 - Singh, Siddharth
A2 - Ratan, Shyam
Y2 - 14 July 2022 through 15 July 2022
ER -