Overview of the CLEF-2023 CheckThat! Lab Task 1 on Check-Worthiness of Multimodal and Multigenre Content

Publication: Contribution to book/report/anthology/conference proceedings › Conference paper › Research › Peer-reviewed

Authors

  • Firoj Alam
  • Alberto Barrón-Cedeño
  • Gullal S. Cheema
  • Gautam Kishore Shahi
  • Sherzod Hakimov
  • Maram Hasanain
  • Chengkai Li
  • Rubén Míguez
  • Hamdy Mubarak
  • Wajdi Zaghouani
  • Preslav Nakov

Organizational units

External organizations

  • Qatar Computing Research Institute
  • Università di Bologna
  • Universität Duisburg-Essen
  • Universität Potsdam
  • University of Texas at Arlington
  • Newtral Media Audiovisual
  • Hamad bin Khalifa University
  • Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI)

Details

Original language: English
Title of anthology: CLEF-WN 2023
Subtitle: CLEF 2023 Working Notes
Pages: 219-235
Number of pages: 17
Publication status: Published - 4 Oct 2023
Event: 24th Working Notes of the Conference and Labs of the Evaluation Forum, CLEF-WN 2023 - Thessaloniki, Greece
Duration: 18 Sep 2023 - 21 Sep 2023

Publication series

Name: CEUR Workshop Proceedings
Publisher: CEUR Workshop Proceedings
Volume: 3497
ISSN (Print): 1613-0073

Abstract

We present an overview of CheckThat! Lab’s 2023 Task 1, which is part of CLEF-2023. Task 1 asks participants to determine whether a text item, or a text coupled with an image, is check-worthy. The task places special emphasis on COVID-19 and on political debates and transcriptions, and it is conducted in three languages: Arabic, English, and Spanish. A total of 15 teams participated, and most submissions achieved significant improvements over the baselines using Transformer-based models. Of these, seven teams participated in the multimodal subtask (1A) and 12 teams participated in the multigenre subtask (1B), collectively submitting 155 official runs across both subtasks. In both subtasks, approaches that targeted multiple languages, either individually or jointly, generally achieved the best performance. We describe the dataset and the task setup, including the evaluation settings, and we briefly overview the participating systems. As is customary in the CheckThat! lab, we have released all datasets from the lab as well as the evaluation scripts to the research community. This will enable further research on finding relevant check-worthy content that can assist various stakeholders such as fact-checkers, journalists, and policymakers.
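The abstract mentions the lab's evaluation settings for the binary check-worthiness task. As a minimal sketch of how such a run might be scored — assuming (this record does not state it) that the metric is the F1 score of the positive, "check-worthy" class, and using purely illustrative labels rather than the lab's actual file formats:

```python
# Hypothetical sketch: F1 of the positive class for a submitted run,
# given paired gold and predicted labels. Label names are assumptions.

def f1_checkworthy(gold, pred, positive="Yes"):
    """F1 score of the positive ("check-worthy") class."""
    tp = sum(1 for g, p in zip(gold, pred) if g == positive and p == positive)
    fp = sum(1 for g, p in zip(gold, pred) if g != positive and p == positive)
    fn = sum(1 for g, p in zip(gold, pred) if g == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

gold = ["Yes", "No", "Yes", "Yes", "No"]
pred = ["Yes", "Yes", "No", "Yes", "No"]
print(round(f1_checkworthy(gold, pred), 3))  # → 0.667
```

Positive-class F1 is a common choice for check-worthiness because the classes are typically imbalanced, so plain accuracy would reward always predicting "not check-worthy".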

ASJC Scopus subject areas

Cite

Overview of the CLEF-2023 CheckThat! Lab Task 1 on Check-Worthiness of Multimodal and Multigenre Content. / Alam, Firoj; Barrón-Cedeño, Alberto; Cheema, Gullal S. et al.
CLEF-WN 2023: CLEF 2023 Working Notes. 2023. pp. 219-235 (CEUR Workshop Proceedings; Vol. 3497).


Alam, F, Barrón-Cedeño, A, Cheema, GS, Shahi, GK, Hakimov, S, Hasanain, M, Li, C, Míguez, R, Mubarak, H, Zaghouani, W & Nakov, P 2023, Overview of the CLEF-2023 CheckThat! Lab Task 1 on Check-Worthiness of Multimodal and Multigenre Content. in CLEF-WN 2023: CLEF 2023 Working Notes. CEUR Workshop Proceedings, vol. 3497, pp. 219-235, 24th Working Notes of the Conference and Labs of the Evaluation Forum, CLEF-WN 2023, Thessaloniki, Greece, 18 Sep 2023. <https://ceur-ws.org/Vol-3497/paper-019.pdf>
Alam, F., Barrón-Cedeño, A., Cheema, G. S., Shahi, G. K., Hakimov, S., Hasanain, M., Li, C., Míguez, R., Mubarak, H., Zaghouani, W., & Nakov, P. (2023). Overview of the CLEF-2023 CheckThat! Lab Task 1 on Check-Worthiness of Multimodal and Multigenre Content. In CLEF-WN 2023: CLEF 2023 Working Notes (pp. 219-235). (CEUR Workshop Proceedings; Vol. 3497). https://ceur-ws.org/Vol-3497/paper-019.pdf
Alam F, Barrón-Cedeño A, Cheema GS, Shahi GK, Hakimov S, Hasanain M et al. Overview of the CLEF-2023 CheckThat! Lab Task 1 on Check-Worthiness of Multimodal and Multigenre Content. In CLEF-WN 2023: CLEF 2023 Working Notes. 2023. pp. 219-235. (CEUR Workshop Proceedings).
Alam, Firoj ; Barrón-Cedeño, Alberto ; Cheema, Gullal S. et al. / Overview of the CLEF-2023 CheckThat! Lab Task 1 on Check-Worthiness of Multimodal and Multigenre Content. CLEF-WN 2023: CLEF 2023 Working Notes. 2023. pp. 219-235 (CEUR Workshop Proceedings).
BibTeX
@inproceedings{2e289eb7729f46abaf370991f2dfd6ff,
title = "Overview of the CLEF-2023 CheckThat! Lab Task 1 on Check-Worthiness of Multimodal and Multigenre Content",
abstract = "We present an overview of CheckThat! Lab{\textquoteright}s 2023 Task 1, which is part of CLEF-2023. Task 1 asks participants to determine whether a text item, or a text coupled with an image, is check-worthy. The task places special emphasis on COVID-19 and on political debates and transcriptions, and it is conducted in three languages: Arabic, English, and Spanish. A total of 15 teams participated, and most submissions achieved significant improvements over the baselines using Transformer-based models. Of these, seven teams participated in the multimodal subtask (1A) and 12 teams participated in the multigenre subtask (1B), collectively submitting 155 official runs across both subtasks. In both subtasks, approaches that targeted multiple languages, either individually or jointly, generally achieved the best performance. We describe the dataset and the task setup, including the evaluation settings, and we briefly overview the participating systems. As is customary in the CheckThat! lab, we have released all datasets from the lab as well as the evaluation scripts to the research community. This will enable further research on finding relevant check-worthy content that can assist various stakeholders such as fact-checkers, journalists, and policymakers.",
keywords = "Check-worthiness, fact-checking, multilinguality, multimodality",
author = "Firoj Alam and Alberto Barr{\'o}n-Cede{\~n}o and Cheema, {Gullal S.} and Shahi, {Gautam Kishore} and Sherzod Hakimov and Maram Hasanain and Chengkai Li and Rub{\'e}n M{\'i}guez and Hamdy Mubarak and Wajdi Zaghouani and Preslav Nakov",
note = "Funding Information: The work of F. Alam, M. Hasanain and W. Zaghouani is partially supported by NPRP 13S-0206-200281 and NPRP 14C-0916-210015 from the Qatar National Research Fund (a member of Qatar Foundation). The findings achieved herein are solely the responsibility of the authors. ; 24th Working Notes of the Conference and Labs of the Evaluation Forum, CLEF-WN 2023 ; Conference date: 18-09-2023 Through 21-09-2023",
year = "2023",
month = oct,
day = "4",
language = "English",
series = "CEUR Workshop Proceedings",
publisher = "CEUR Workshop Proceedings",
pages = "219--235",
booktitle = "CLEF-WN 2023",

}

RIS

TY - GEN

T1 - Overview of the CLEF-2023 CheckThat! Lab Task 1 on Check-Worthiness of Multimodal and Multigenre Content

T2 - 24th Working Notes of the Conference and Labs of the Evaluation Forum, CLEF-WN 2023

AU - Alam, Firoj

AU - Barrón-Cedeño, Alberto

AU - Cheema, Gullal S.

AU - Shahi, Gautam Kishore

AU - Hakimov, Sherzod

AU - Hasanain, Maram

AU - Li, Chengkai

AU - Míguez, Rubén

AU - Mubarak, Hamdy

AU - Zaghouani, Wajdi

AU - Nakov, Preslav

N1 - Funding Information: The work of F. Alam, M. Hasanain and W. Zaghouani is partially supported by NPRP 13S-0206-200281 and NPRP 14C-0916-210015 from the Qatar National Research Fund (a member of Qatar Foundation). The findings achieved herein are solely the responsibility of the authors.

PY - 2023/10/4

Y1 - 2023/10/4

N2 - We present an overview of CheckThat! Lab’s 2023 Task 1, which is part of CLEF-2023. Task 1 asks participants to determine whether a text item, or a text coupled with an image, is check-worthy. The task places special emphasis on COVID-19 and on political debates and transcriptions, and it is conducted in three languages: Arabic, English, and Spanish. A total of 15 teams participated, and most submissions achieved significant improvements over the baselines using Transformer-based models. Of these, seven teams participated in the multimodal subtask (1A) and 12 teams participated in the multigenre subtask (1B), collectively submitting 155 official runs across both subtasks. In both subtasks, approaches that targeted multiple languages, either individually or jointly, generally achieved the best performance. We describe the dataset and the task setup, including the evaluation settings, and we briefly overview the participating systems. As is customary in the CheckThat! lab, we have released all datasets from the lab as well as the evaluation scripts to the research community. This will enable further research on finding relevant check-worthy content that can assist various stakeholders such as fact-checkers, journalists, and policymakers.

AB - We present an overview of CheckThat! Lab’s 2023 Task 1, which is part of CLEF-2023. Task 1 asks participants to determine whether a text item, or a text coupled with an image, is check-worthy. The task places special emphasis on COVID-19 and on political debates and transcriptions, and it is conducted in three languages: Arabic, English, and Spanish. A total of 15 teams participated, and most submissions achieved significant improvements over the baselines using Transformer-based models. Of these, seven teams participated in the multimodal subtask (1A) and 12 teams participated in the multigenre subtask (1B), collectively submitting 155 official runs across both subtasks. In both subtasks, approaches that targeted multiple languages, either individually or jointly, generally achieved the best performance. We describe the dataset and the task setup, including the evaluation settings, and we briefly overview the participating systems. As is customary in the CheckThat! lab, we have released all datasets from the lab as well as the evaluation scripts to the research community. This will enable further research on finding relevant check-worthy content that can assist various stakeholders such as fact-checkers, journalists, and policymakers.

KW - Check-worthiness

KW - fact-checking

KW - multilinguality

KW - multimodality

UR - http://www.scopus.com/inward/record.url?scp=85175626253&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:85175626253

T3 - CEUR Workshop Proceedings

SP - 219

EP - 235

BT - CLEF-WN 2023

Y2 - 18 September 2023 through 21 September 2023

ER -