CEUR Workshop Proceedings
We present an overview of CheckThat! lab 2022 Task 1, part of the 2022 Conference and Labs of the Evaluation Forum (CLEF). Task 1 asked participants to predict which posts in a Twitter stream are worth fact-checking, focusing on COVID-19 and politics in six languages: Arabic, Bulgarian, Dutch, English, Spanish, and Turkish. A total of 19 teams participated, and most submissions achieved sizable improvements over the baselines using Transformer-based models such as BERT and GPT-3. Across the four subtasks, approaches that targeted multiple languages (either individually or jointly) generally obtained the best performance. We describe the dataset and the task setup, including the evaluation settings, and we give a brief overview of the participating systems. As usual in the CheckThat! lab, we release to the research community all datasets from the lab as well as the evaluation scripts, which should enable further research on finding relevant tweets that can help different stakeholders such as fact-checkers, journalists, and policymakers. © 2022 Copyright for this paper by its authors.
Check-Worthiness Estimation, Computational Journalism, COVID-19, Fact-Checking, Social Media Verification, Veracity, Laboratories, Social networking (online)
P. Nakov et al., "Overview of the CLEF-2022 CheckThat! Lab Task 1 on Identifying Relevant Claims in Tweets," in Proceedings of the 2022 Conference and Labs of the Evaluation Forum (CLEF 2022), Bologna, Italy, September 2022, pp. 368–392. Available online: http://ceur-ws.org/Vol-3180/paper-28.pdf
Article available on CEUR Workshop Proceedings site
License: CC BY 4.0
Uploaded 14 September 2022