The task is to predict whether a given tweet is worth fact-checking, with a focus on COVID-19 and politics. It is a binary classification task with two labels: Yes and No.
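As a purely illustrative sketch of the input/output format, the snippet below labels tweets with a trivial keyword-and-number heuristic. The cue words and example tweets are made up for this sketch; this is not any participant's system.

```python
# Illustrative only: a trivial baseline for tweet check-worthiness.
# Cue words and tweets are invented for demonstration purposes.
CUE_WORDS = {"cases", "deaths", "vaccine", "percent", "million"}

def check_worthy(tweet: str) -> str:
    """Return 'Yes' if the tweet contains a digit or a statistical cue word."""
    tokens = tweet.lower().split()
    has_number = any(any(ch.isdigit() for ch in tok) for tok in tokens)
    has_cue = any(tok.strip(".,!?") in CUE_WORDS for tok in tokens)
    return "Yes" if (has_number or has_cue) else "No"

print(check_worthy("COVID-19 deaths rose 40% last week"))  # Yes
print(check_worthy("Stay safe everyone!"))                 # No
```

Real submissions to the lab used learned classifiers rather than rules, but the interface is the same: one tweet in, one Yes/No label out.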
Publication
Preslav Nakov, Alberto Barrón-Cedeño, Giovanni Da San Martino, Firoj Alam, Rubén Míguez, Tommaso Caselli, Mucahid Kutlu, Wajdi Zaghouani, Chengkai Li, Shaden Shaar, Hamdy Mubarak, Alex Nikolov, Yavuz Selim Kartal (2022) Overview of the CLEF-2022 CheckThat! Lab Task 1 on Identifying Relevant Claims in Tweets. Working Notes of the Conference and Labs of the Evaluation Forum (CLEF 2022). CEUR Workshop Proceedings. http://ceur-ws.org/Vol-3180/paper-28.pdf
Language
Spanish
English
NLP topic
Abstract task
Year
2022
Publication link
Ranking metric
F1
Task results
System | F1
---|---
NUS-IDS | 0.5710
PoliMi-FlatEarthers | 0.3230
Z-Index | 0.3030
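Systems are ranked by F1 over the positive (Yes) class. A minimal pure-Python sketch of that metric follows; the `f1_score` helper and the toy gold/predicted labels are illustrative, not taken from the task data.

```python
def f1_score(gold, pred, positive="Yes"):
    """Positive-class F1: harmonic mean of precision and recall for `positive`."""
    tp = sum(1 for g, p in zip(gold, pred) if g == p == positive)
    fp = sum(1 for g, p in zip(gold, pred) if g != positive and p == positive)
    fn = sum(1 for g, p in zip(gold, pred) if g == positive and p != positive)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy example: 2 true positives, 1 false positive, 1 false negative.
gold = ["Yes", "Yes", "No", "No", "Yes"]
pred = ["Yes", "No", "No", "Yes", "Yes"]
print(round(f1_score(gold, pred), 4))  # → 0.6667
```

Because only the Yes class is scored, a system that labels everything No gets an F1 of 0, which is why the metric suits this class-imbalanced task better than accuracy.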