Details
Original language | English |
---|---|
Title of host publication | Experimental IR Meets Multilinguality, Multimodality, and Interaction |
Subtitle | 14th International Conference of the CLEF Association, CLEF 2023, Thessaloniki, Greece, September 18–21, 2023, Proceedings |
Editors | Avi Arampatzis, Evangelos Kanoulas, Mohammad Aliannejadi, Theodora Tsikrika, Stefanos Vrochidis, Anastasia Giachanou, Dan Li, Michalis Vlachos, Guglielmo Faggioli, Nicola Ferro |
Publisher | Springer Science and Business Media Deutschland GmbH |
Pages | 251-275 |
Number of pages | 25 |
ISBN (electronic) | 978-3-031-42448-9 |
ISBN (print) | 978-3-031-42447-2 |
Publication status | Published - 11 Sept 2023 |
Event | 14th International Conference of the Cross-Language Evaluation Forum for European Languages - Thessaloniki, Greece. Duration: 18 Sept 2023 → 21 Sept 2023 |
Publication series
Name | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
---|---|
Volume | 14163 LNCS |
ISSN (print) | 0302-9743 |
ISSN (electronic) | 1611-3349 |
Abstract
We describe the sixth edition of the CheckThat! lab, part of the 2023 Conference and Labs of the Evaluation Forum (CLEF). The five previous editions of CheckThat! focused on the main tasks of the information verification pipeline: check-worthiness, verifying whether a claim was fact-checked before, supporting evidence retrieval, and claim verification. In this sixth edition, we zoom in on some new problems and, for the first time, offer five tasks in seven languages: Arabic, Dutch, English, German, Italian, Spanish, and Turkish. Task 1 asks to determine whether an item (text or text plus image) is check-worthy. Task 2 aims to predict whether a sentence from a news article is subjective or not. Task 3 asks to assess the political bias of news at the article and at the media outlet level. Task 4 focuses on the factuality of reporting by news media. Finally, Task 5 looks at identifying authorities on Twitter that could help verify a given target claim. For the second year in a row, CheckThat! was the most popular lab at CLEF-2023 in terms of team registrations, with 127 registered teams; about one-third of them (37 in total) actually participated.
ASJC Scopus subject areas
- Mathematics (all)
- Theoretical Computer Science
- Computer Science (all)
- General Computer Science
Cite this
Experimental IR Meets Multilinguality, Multimodality, and Interaction: 14th International Conference of the CLEF Association, CLEF 2023, Thessaloniki, Greece, September 18–21, 2023, Proceedings. Ed. / Avi Arampatzis; Evangelos Kanoulas; Mohammad Aliannejadi; Theodora Tsikrika; Stefanos Vrochidis; Anastasia Giachanou; Dan Li; Michalis Vlachos; Guglielmo Faggioli; Nicola Ferro. Springer Science and Business Media Deutschland GmbH, 2023. pp. 251-275 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 14163 LNCS).
Publication: Contribution to book/report/conference proceedings › Conference contribution › Research › Peer-review
TY - GEN
T1 - Overview of the CLEF–2023 CheckThat! Lab on Checkworthiness, Subjectivity, Political Bias, Factuality, and Authority of News Articles and Their Source
AU - Barrón-Cedeño, Alberto
AU - Alam, Firoj
AU - Galassi, Andrea
AU - Da San Martino, Giovanni
AU - Nakov, Preslav
AU - Elsayed, Tamer
AU - Azizov, Dilshod
AU - Caselli, Tommaso
AU - Cheema, Gullal S.
AU - Haouari, Fatima
AU - Hasanain, Maram
AU - Kutlu, Mucahid
AU - Li, Chengkai
AU - Ruggeri, Federico
AU - Struß, Julia Maria
AU - Zaghouani, Wajdi
N1 - Funding Information: The work of F. Alam, M. Hasanain and W. Zaghouani is partially supported by NPRP 13S-0206-200281 and NPRP 14C-0916-210015 from the Qatar National Research Fund (a member of Qatar Foundation). The work of A. Galassi is supported by the European Commission NextGeneration EU programme, PNRR M4C2-Investimento 1.3, PE00000013-“FAIR” Spoke 8. The work of Fatima Haouari was supported by GSRA grant #GSRA6-1-0611-19074 from the Qatar National Research Fund. The work of Tamer Elsayed was made possible by NPRP grant #NPRP-11S-1204-170060 from the Qatar National Research Fund. The findings achieved herein are solely the responsibility of the authors.
PY - 2023/9/11
Y1 - 2023/9/11
N2 - We describe the sixth edition of the CheckThat! lab, part of the 2023 Conference and Labs of the Evaluation Forum (CLEF). The five previous editions of CheckThat! focused on the main tasks of the information verification pipeline: check-worthiness, verifying whether a claim was fact-checked before, supporting evidence retrieval, and claim verification. In this sixth edition, we zoom in on some new problems and, for the first time, offer five tasks in seven languages: Arabic, Dutch, English, German, Italian, Spanish, and Turkish. Task 1 asks to determine whether an item (text or text plus image) is check-worthy. Task 2 aims to predict whether a sentence from a news article is subjective or not. Task 3 asks to assess the political bias of news at the article and at the media outlet level. Task 4 focuses on the factuality of reporting by news media. Finally, Task 5 looks at identifying authorities on Twitter that could help verify a given target claim. For the second year in a row, CheckThat! was the most popular lab at CLEF-2023 in terms of team registrations, with 127 registered teams; about one-third of them (37 in total) actually participated.
AB - We describe the sixth edition of the CheckThat! lab, part of the 2023 Conference and Labs of the Evaluation Forum (CLEF). The five previous editions of CheckThat! focused on the main tasks of the information verification pipeline: check-worthiness, verifying whether a claim was fact-checked before, supporting evidence retrieval, and claim verification. In this sixth edition, we zoom in on some new problems and, for the first time, offer five tasks in seven languages: Arabic, Dutch, English, German, Italian, Spanish, and Turkish. Task 1 asks to determine whether an item (text or text plus image) is check-worthy. Task 2 aims to predict whether a sentence from a news article is subjective or not. Task 3 asks to assess the political bias of news at the article and at the media outlet level. Task 4 focuses on the factuality of reporting by news media. Finally, Task 5 looks at identifying authorities on Twitter that could help verify a given target claim. For the second year in a row, CheckThat! was the most popular lab at CLEF-2023 in terms of team registrations, with 127 registered teams; about one-third of them (37 in total) actually participated.
KW - Authority Finding
KW - Check-Worthiness
KW - Fact Checking
KW - Factuality of Reporting
KW - Political Bias
KW - Subjectivity
UR - http://www.scopus.com/inward/record.url?scp=85165624423&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-42448-9_20
DO - 10.1007/978-3-031-42448-9_20
M3 - Conference contribution
AN - SCOPUS:85165624423
SN - 9783031424472
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 251
EP - 275
BT - Experimental IR Meets Multilinguality, Multimodality, and Interaction
A2 - Arampatzis, Avi
A2 - Kanoulas, Evangelos
A2 - Aliannejadi, Mohammad
A2 - Tsikrika, Theodora
A2 - Vrochidis, Stefanos
A2 - Giachanou, Anastasia
A2 - Li, Dan
A2 - Vlachos, Michalis
A2 - Faggioli, Guglielmo
A2 - Ferro, Nicola
PB - Springer Science and Business Media Deutschland GmbH
T2 - 14th International Conference of the Cross-Language Evaluation Forum for European Languages
Y2 - 18 September 2023 through 21 September 2023
ER -