Details
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the 14th International Workshop on Semantic Evaluation |
| Editors | Aurelie Herbelot, Xiaodan Zhu, Alexis Palmer, Nathan Schneider, Jonathan May, Ekaterina Shutova |
| Place of publication | Barcelona |
| Pages | 1377-1414 |
| Number of pages | 38 |
| Publication status | Published - Dec. 2020 |
| Externally published | Yes |
| Event | 14th International Workshop on Semantic Evaluation, SemEval 2020 - Barcelona, Spain; Duration: 12 Dec 2020 → 13 Dec 2020 |
Abstract
We present the results and the main findings of SemEval-2020 Task 11 on Detection of Propaganda Techniques in News Articles. The task featured two subtasks. Subtask SI is about Span Identification: given a plain-text document, spot the specific text fragments containing propaganda. Subtask TC is about Technique Classification: given a specific text fragment, in the context of a full document, determine the propaganda technique it uses, choosing from an inventory of 14 possible propaganda techniques. The task attracted a large number of participants: 250 teams signed up to participate and 44 made a submission on the test set. In this paper, we present the task, analyze the results, and discuss the system submissions and the methods they used. For both subtasks, the best systems used pre-trained Transformers and ensembles.
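The abstract frames Subtask TC as choosing one of 14 techniques for a given fragment in its document context, and notes that the best systems built on pre-trained Transformers. As a minimal illustrative sketch (not code from the paper or the task organizers), the subtask can be cast as 14-way sequence-pair classification: the model name `bert-base-cased`, the fragment/context pairing, and the example text below are assumptions for demonstration only, and the classification head is untrained until fine-tuned on the task data.

```python
# Illustrative sketch of Subtask TC as 14-way classification with a
# pre-trained Transformer. Not the organizers' baseline; model choice,
# span-marking scheme, and example text are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

NUM_TECHNIQUES = 14  # size of the technique inventory in Subtask TC

# Any pre-trained encoder can be plugged in; "bert-base-cased" is only an
# example, and its freshly added classification head starts out untrained.
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-cased", num_labels=NUM_TECHNIQUES
)
model.eval()

# One common way to give the model both the fragment and its document
# context: encode them as a sentence pair (fragment first, context second).
fragment = "a crushing, humiliating defeat"  # hypothetical propaganda span
context = "Critics called the vote a crushing, humiliating defeat for the bill."

inputs = tokenizer(fragment, context, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, NUM_TECHNIQUES)

predicted_technique_id = int(logits.argmax(dim=-1))
print(predicted_technique_id)  # meaningless until the model is fine-tuned
```

Ensembling, as reported for the top systems, would combine the logits or predictions of several such fine-tuned models before taking the argmax.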
ASJC Scopus subject areas
- Mathematics (all)
- Theoretical Computer Science
- Computer Science (all)
- Computational Theory and Mathematics
- Computer Science (all)
- Computer Science Applications
Cite this
Proceedings of the 14th International Workshop on Semantic Evaluation. Ed. / Aurelie Herbelot; Xiaodan Zhu; Alexis Palmer; Nathan Schneider; Jonathan May; Ekaterina Shutova. Barcelona, 2020. p. 1377-1414.
Publication: Contribution to book/report/edited volume/conference proceedings › Conference paper › Research › Peer-reviewed
TY - GEN
T1 - SemEval-2020 Task 11
T2 - 14th International Workshop on Semantic Evaluation, SemEval 2020
AU - da San Martino, Giovanni
AU - Barrón-Cedeño, Alberto
AU - Wachsmuth, Henning
AU - Petrov, Rostislav
AU - Nakov, Preslav
N1 - Funding Information: We thank the anonymous reviewers for their constructive comments and suggestions, which have helped us improve the final version of this paper. We further thank Anton Chernyavskiy for pointing us to the bug in the evaluation script. The task is organized within the Propaganda Analysis Project, part of the Tanbih project. Tanbih aims to limit the effect of “fake news”, propaganda, and media bias by making users aware of what they are reading, thus promoting media literacy and critical thinking.
PY - 2020/12
Y1 - 2020/12
N2 - We present the results and the main findings of SemEval-2020 Task 11 on Detection of Propaganda Techniques in News Articles. The task featured two subtasks. Subtask SI is about Span Identification: given a plain-text document, spot the specific text fragments containing propaganda. Subtask TC is about Technique Classification: given a specific text fragment, in the context of a full document, determine the propaganda technique it uses, choosing from an inventory of 14 possible propaganda techniques. The task attracted a large number of participants: 250 teams signed up to participate and 44 made a submission on the test set. In this paper, we present the task, analyze the results, and discuss the system submissions and the methods they used. For both subtasks, the best systems used pre-trained Transformers and ensembles.
AB - We present the results and the main findings of SemEval-2020 Task 11 on Detection of Propaganda Techniques in News Articles. The task featured two subtasks. Subtask SI is about Span Identification: given a plain-text document, spot the specific text fragments containing propaganda. Subtask TC is about Technique Classification: given a specific text fragment, in the context of a full document, determine the propaganda technique it uses, choosing from an inventory of 14 possible propaganda techniques. The task attracted a large number of participants: 250 teams signed up to participate and 44 made a submission on the test set. In this paper, we present the task, analyze the results, and discuss the system submissions and the methods they used. For both subtasks, the best systems used pre-trained Transformers and ensembles.
UR - http://www.scopus.com/inward/record.url?scp=85123925202&partnerID=8YFLogxK
U2 - 10.48550/arXiv.2009.02696
DO - 10.48550/arXiv.2009.02696
M3 - Conference contribution
AN - SCOPUS:85123925202
SN - 9781952148316
SP - 1377
EP - 1414
BT - Proceedings of the 14th International Workshop on Semantic Evaluation
A2 - Herbelot, Aurelie
A2 - Zhu, Xiaodan
A2 - Palmer, Alexis
A2 - Schneider, Nathan
A2 - May, Jonathan
A2 - Shutova, Ekaterina
CY - Barcelona
Y2 - 12 December 2020 through 13 December 2020
ER -