Details
Field | Value
---|---
Original language | English
Title of host publication | CLEF-WN 2023
Subtitle of host publication | CLEF 2023 Working Notes
Pages | 219-235
Number of pages | 17
Publication status | Published - 4 Oct 2023
Event | 24th Working Notes of the Conference and Labs of the Evaluation Forum, CLEF-WN 2023 - Thessaloniki, Greece. Duration: 18 Sept 2023 → 21 Sept 2023
Publication series
Name | CEUR Workshop Proceedings
---|---
Publisher | CEUR Workshop Proceedings
Volume | 3497
ISSN (Print) | 1613-0073
Abstract
We present an overview of CheckThat! Lab’s 2023 Task 1, which is part of CLEF-2023. Task 1 asks to determine whether a text item, or a text coupled with an image, is check-worthy. The task places special emphasis on COVID-19, political debates, and transcriptions, and is conducted in three languages: Arabic, English, and Spanish. A total of 15 teams participated, and most submissions achieved significant improvements over the baselines using Transformer-based models. Of these, seven teams participated in the multimodal subtask (1A) and 12 teams in the multigenre subtask (1B), collectively submitting 155 official runs across both subtasks. In both subtasks, approaches that targeted multiple languages, either individually or in conjunction, generally achieved the best performance. We describe the dataset and the task setup, including the evaluation settings, and briefly overview the participating systems. As is customary in the CheckThat! lab, we have released all datasets from the lab as well as the evaluation scripts to the research community. This will enable further research on finding relevant check-worthy content that can assist various stakeholders such as fact-checkers, journalists, and policymakers.
Keywords
- Check-worthiness, fact-checking, multilinguality, multimodality
Cite this
CLEF-WN 2023: CLEF 2023 Working Notes. 2023. p. 219-235 (CEUR Workshop Proceedings; Vol. 3497).
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review
TY - GEN
T1 - Overview of the CLEF-2023 CheckThat!
T2 - 24th Working Notes of the Conference and Labs of the Evaluation Forum, CLEF-WN 2023
AU - Alam, Firoj
AU - Barrón-Cedeño, Alberto
AU - Cheema, Gullal S.
AU - Shahi, Gautam Kishore
AU - Hakimov, Sherzod
AU - Hasanain, Maram
AU - Li, Chengkai
AU - Míguez, Rubén
AU - Mubarak, Hamdy
AU - Zaghouani, Wajdi
AU - Nakov, Preslav
N1 - Funding Information: The work of F. Alam, M. Hasanain and W. Zaghouani is partially supported by NPRP 13S-0206-200281 and NPRP 14C-0916-210015 from the Qatar National Research Fund (a member of Qatar Foundation). The findings achieved herein are solely the responsibility of the authors.
PY - 2023/10/4
Y1 - 2023/10/4
AB - We present an overview of CheckThat! Lab’s 2023 Task 1, which is part of CLEF-2023. Task 1 asks to determine whether a text item, or a text coupled with an image, is check-worthy. The task places special emphasis on COVID-19, political debates, and transcriptions, and is conducted in three languages: Arabic, English, and Spanish. A total of 15 teams participated, and most submissions achieved significant improvements over the baselines using Transformer-based models. Of these, seven teams participated in the multimodal subtask (1A) and 12 teams in the multigenre subtask (1B), collectively submitting 155 official runs across both subtasks. In both subtasks, approaches that targeted multiple languages, either individually or in conjunction, generally achieved the best performance. We describe the dataset and the task setup, including the evaluation settings, and briefly overview the participating systems. As is customary in the CheckThat! lab, we have released all datasets from the lab as well as the evaluation scripts to the research community. This will enable further research on finding relevant check-worthy content that can assist various stakeholders such as fact-checkers, journalists, and policymakers.
KW - Check-worthiness
KW - fact-checking
KW - multilinguality
KW - multimodality
UR - http://www.scopus.com/inward/record.url?scp=85175626253&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85175626253
T3 - CEUR Workshop Proceedings
SP - 219
EP - 235
BT - CLEF-WN 2023
Y2 - 18 September 2023 through 21 September 2023
ER -