The task aims to identify and categorize propagandistic tweets from governmental and diplomatic sources, using a dataset of English tweets posted by authorities of China, Russia, the United States and the European Union. Systems must classify each tweet into four clusters of propaganda techniques: appeal to commonality, discrediting the opponent, loaded language, and appeal to authority.
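As an illustration, the sketch below shows how one of the transformer checkpoints from the results table further down could be set up for this kind of classification with the Hugging Face transformers library. This is a minimal sketch, not the official task code: the multi-label framing (a tweet may exhibit several techniques at once) and the 0.5 decision threshold are assumptions, and the classification head here is untrained.

```python
# Hedged sketch: fine-tunable setup for classifying tweets into the four
# propaganda-technique clusters. Multi-label framing is an assumption.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = [
    "appeal to commonality",
    "discrediting the opponent",
    "loaded language",
    "appeal to authority",
]

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-large",
    num_labels=len(LABELS),
    # A tweet may use several techniques, so each label gets its own sigmoid.
    problem_type="multi_label_classification",
)

def predict(tweet: str, threshold: float = 0.5) -> list[str]:
    """Return the technique clusters whose sigmoid score passes the threshold."""
    inputs = tokenizer(tweet, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.sigmoid(logits).squeeze(0)
    return [label for label, p in zip(LABELS, probs) if p >= threshold]
```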
Publication
Pablo Moral, Guillermo Marco, Julio Gonzalo, Jorge Carrillo-de-Albornoz, Iván Gonzalo-Verdugo (2023). Overview of DIPROMATS 2023: automatic detection and characterization of propaganda techniques in messages from diplomats and authorities of world powers. Procesamiento del Lenguaje Natural, 71, September 2023, pp. 397-407.
Language
English
NLP topic
Abstract task
Dataset
Year
2023
Publication link
Ranking metric
ICM
Task results
| System | F1 | Accuracy | Macro F1 | Pearson correlation | ICM |
|---|---|---|---|---|---|
| RoBERTa large | 0.5204 | 0.5204 | 0.5204 | 0.5204 | 0.52 |
| XLM-RoBERTa large | 0.4867 | 0.4867 | 0.4867 | 0.4867 | 0.49 |
| RoBERTa base | 0.4811 | 0.4811 | 0.4811 | 0.4811 | 0.48 |
| BERT base cased | 0.4468 | 0.4468 | 0.4468 | 0.4468 | 0.45 |
| ixa-ehu/ixambert-base-cased | 0.4430 | 0.4430 | 0.4430 | 0.4430 | 0.44 |
| XLM-RoBERTa base | 0.4329 | 0.4329 | 0.4329 | 0.4329 | 0.43 |
| BERT base multilingual cased | 0.4266 | 0.4266 | 0.4266 | 0.4266 | 0.43 |
| DistilBERT base uncased | 0.4054 | 0.4054 | 0.4054 | 0.4054 | 0.41 |
| DistilBERT base multilingual cased | 0.3794 | 0.3794 | 0.3794 | 0.3794 | 0.38 |
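The sketch below shows how the standard metrics reported in the table (F1, accuracy, macro F1) could be computed with scikit-learn, assuming gold and predicted labels are encoded as binary indicator matrices over the four clusters. The arrays are toy placeholders, and ICM, the official ranking metric, is not re-implemented here.

```python
# Hedged sketch: scoring multi-label predictions with scikit-learn.
# gold/pred are toy data; rows = tweets, columns = the four technique clusters.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

gold = np.array([[1, 0, 0, 0], [0, 1, 1, 0], [0, 0, 0, 1]])
pred = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1]])

# For multi-label inputs, accuracy_score computes exact-match (subset) accuracy.
print("accuracy (exact match):", accuracy_score(gold, pred))
print("micro-F1:", f1_score(gold, pred, average="micro"))
print("macro-F1:", f1_score(gold, pred, average="macro"))
```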