Document Type
Conference Proceeding
Publication Title
CEUR Workshop Proceedings
Abstract
We present an overview of CheckThat! Lab's 2023 Task 1, which is part of CLEF-2023. Task 1 asks participants to determine whether a text item, or a text coupled with an image, is check-worthy. This task places a special emphasis on COVID-19, political debates, and transcriptions, and it is conducted in three languages: Arabic, English, and Spanish. A total of 15 teams participated, and most submissions managed to achieve significant improvements over the baselines using Transformer-based models. Out of these, seven teams participated in the multimodal subtask (1A), and 12 teams participated in the multigenre subtask (1B), collectively submitting 155 official runs for both subtasks. Across both subtasks, approaches that targeted multiple languages, either individually or in conjunction, generally achieved the best performance. We provide a description of the dataset and the task setup, including the evaluation settings, and we briefly overview the participating systems. As is customary in the CheckThat! lab, we have released all datasets from the lab, as well as the evaluation scripts, to the research community. This will enable further research on finding relevant check-worthy content that can assist various stakeholders such as fact-checkers, journalists, and policymakers.
First Page
219
Last Page
235
Publication Date
9-2023
Keywords
Check-worthiness, fact-checking, multilinguality, multimodality
Recommended Citation
F. Alam, A. Barrón-Cedeño, G. Cheema, G. Shahi, S. Hakimov, M. Hasanain, C. Li, R. Míguez, H. Mubarak, W. Zaghouani, and P. Nakov, "Overview of the CLEF-2023 CheckThat! Lab Task 1 on Check-Worthiness of Multimodal and Multigenre Content," CEUR Workshop Proceedings, vol. 3497, pp. 219–235, Sep 2023.
Additional Links
Publisher link: https://ceur-ws.org/Vol-3497/paper-019.pdf
Comments
Archived thanks to CEUR-WS.org
Open Access
License: CC BY 4.0
Uploaded: April 03, 2024