Details
Original language | English
---|---
Title of host publication | SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop
Editors | Guy Emerson, Natalie Schluter, Gabriel Stanovsky, Ritesh Kumar, Alexis Palmer, Nathan Schneider, Siddharth Singh, Shyam Ratan
Pages | 756-760
Number of pages | 5
ISBN (electronic) | 9781955917803
Publication status | Published - July 2022
Event | 16th International Workshop on Semantic Evaluation, SemEval 2022 - Seattle, United States. Duration: 14 July 2022 → 15 July 2022
Publication series
Name | SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop
Abstract
The detection of offensive, hateful content on social media is a challenging problem that affects many online users on a daily basis. Hateful content is often used to target a group of people based on ethnicity, gender, religion, and other factors. Hate or contempt toward women has been increasing on social platforms. Misogynous content detection is especially challenging when textual and visual modalities are combined to form a single context, e.g., an overlay text embedded on top of an image, also known as a meme. In this paper, we present a multimodal architecture that combines textual and visual features to detect misogynous memes. The proposed architecture is evaluated in the SemEval-2022 Task 5: MAMI - Multimedia Automatic Misogyny Identification challenge under the team name TIB-VA. We obtained the best result in Task B, where the challenge is to classify whether a given document is misogynous and to further identify the following sub-classes: shaming, stereotype, objectification, and violence.
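The abstract describes a multimodal architecture that combines textual and visual features and, for Task B, predicts several sub-classes per meme. The record does not include implementation details, so the sketch below is purely illustrative: it assumes precomputed text and image embeddings (the 512-dimensional vectors, the untrained linear head, and the label set treated as independent binary outputs are all hypothetical) and shows one common way such features can be fused and scored as a multi-label problem.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical precomputed embeddings for one meme, e.g. from a
# vision-language encoder. Dimensions are illustrative, not from the paper.
text_emb = rng.normal(size=512)   # embedding of the overlay text
image_emb = rng.normal(size=512)  # embedding of the image

# Late fusion: concatenate both modalities into one joint representation.
joint = np.concatenate([text_emb, image_emb])  # shape (1024,)

# Multi-label head: one sigmoid output per sub-class, so a meme can
# belong to several sub-classes at once (as in Task B).
labels = ["misogynous", "shaming", "stereotype", "objectification", "violence"]
W = rng.normal(size=(len(labels), joint.size)) * 0.01  # untrained toy weights
b = np.zeros(len(labels))

logits = W @ joint + b
probs = 1.0 / (1.0 + np.exp(-logits))  # independent per-label probabilities

# Threshold each label independently.
predictions = {name: bool(p > 0.5) for name, p in zip(labels, probs)}
```

In a trained system the weights would be learned jointly with (or on top of) the encoders; the point here is only the fusion-then-multi-label-head shape of the problem.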
ASJC Scopus subject areas
- Computer Science (all)
- Computational Theory and Mathematics
- Computer Science (all)
- Computer Science Applications
- Mathematics (all)
- Theoretical Computer Science
Sustainable Development Goals
Cite
- Standard
- Harvard
- APA
- Vancouver
- BibTeX
- RIS
SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop. Ed. / Guy Emerson; Natalie Schluter; Gabriel Stanovsky; Ritesh Kumar; Alexis Palmer; Nathan Schneider; Siddharth Singh; Shyam Ratan. 2022. pp. 756-760 (SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop).
Publication: Contribution to book/report/conference proceedings › Conference contribution › Research › Peer-review
TY - GEN
T1 - TIB-VA at SemEval-2022 Task 5
T2 - 16th International Workshop on Semantic Evaluation, SemEval 2022
AU - Hakimov, Sherzod
AU - Cheema, Gullal S.
AU - Ewerth, Ralph
N1 - Funding Information: This work has received funding from the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No. 812997 (CLEOPATRA ITN).
PY - 2022/7
Y1 - 2022/7
N2 - The detection of offensive, hateful content on social media is a challenging problem that affects many online users on a daily basis. Hateful content is often used to target a group of people based on ethnicity, gender, religion and other factors. The hate or contempt toward women has been increasing on social platforms. Misogynous content detection is especially challenging when textual and visual modalities are combined to form a single context, e.g., an overlay text embedded on top of an image, also known as meme. In this paper, we present a multimodal architecture that combines textual and visual features to detect misogynous memes. The proposed architecture is evaluated in the SemEval-2022 Task 5: MAMI - Multimedia Automatic Misogyny Identification challenge under the team name TIB-VA. We obtained the best result in the Task-B where the challenge is to classify whether a given document is misogynous and further identify the following sub-classes: shaming, stereotype, objectification, and violence.
AB - The detection of offensive, hateful content on social media is a challenging problem that affects many online users on a daily basis. Hateful content is often used to target a group of people based on ethnicity, gender, religion and other factors. The hate or contempt toward women has been increasing on social platforms. Misogynous content detection is especially challenging when textual and visual modalities are combined to form a single context, e.g., an overlay text embedded on top of an image, also known as meme. In this paper, we present a multimodal architecture that combines textual and visual features to detect misogynous memes. The proposed architecture is evaluated in the SemEval-2022 Task 5: MAMI - Multimedia Automatic Misogyny Identification challenge under the team name TIB-VA. We obtained the best result in the Task-B where the challenge is to classify whether a given document is misogynous and further identify the following sub-classes: shaming, stereotype, objectification, and violence.
UR - http://www.scopus.com/inward/record.url?scp=85137566496&partnerID=8YFLogxK
U2 - 10.48550/arXiv.2204.06299
DO - 10.48550/arXiv.2204.06299
M3 - Conference contribution
AN - SCOPUS:85137566496
T3 - SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop
SP - 756
EP - 760
BT - SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop
A2 - Emerson, Guy
A2 - Schluter, Natalie
A2 - Stanovsky, Gabriel
A2 - Kumar, Ritesh
A2 - Palmer, Alexis
A2 - Schneider, Nathan
A2 - Singh, Siddharth
A2 - Ratan, Shyam
Y2 - 14 July 2022 through 15 July 2022
ER -