DIPROMATS 2023: Coarse propaganda characterization

The task aims to find the best techniques to identify and categorize propagandistic tweets from governmental and diplomatic sources, using a dataset of 9,501 tweets in Spanish posted by authorities of China, Russia, the United States and the European Union. The task seeks to classify each tweet into four clusters of propaganda techniques: appeal to commonality, discrediting the opponent, loaded language, and appeal to authority.
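As a rough illustration of the task setup, the sketch below encodes the four clusters as binary indicator vectors. It is a minimal sketch assuming a multi-label formulation in which a tweet may be tagged with more than one cluster; the example annotations are hypothetical.

```python
from sklearn.preprocessing import MultiLabelBinarizer

# The four coarse clusters of propaganda techniques listed above.
CLUSTERS = [
    "appeal to commonality",
    "discrediting the opponent",
    "loaded language",
    "appeal to authority",
]

# Hypothetical annotations: each tweet may carry zero or more clusters.
gold = [
    ["discrediting the opponent", "loaded language"],  # propagandistic tweet
    [],                                                 # non-propagandistic tweet
    ["appeal to commonality"],
]

mlb = MultiLabelBinarizer(classes=CLUSTERS)
y = mlb.fit_transform(gold)  # binary indicator matrix of shape (n_tweets, 4)
print(y)
# [[0 1 1 0]
#  [0 0 0 0]
#  [1 0 0 0]]
```

On top of such an encoding, one can train either a single multi-label classifier or four independent binary classifiers, one per cluster.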

Publication: Pablo Moral, Guillermo Marco, Julio Gonzalo, Jorge Carrillo-de-Albornoz, Iván Gonzalo-Verdugo (2023). Overview of DIPROMATS 2023: automatic detection and characterization of propaganda techniques in messages from diplomats and authorities of world powers. Procesamiento del Lenguaje Natural, no. 71, September 2023, pp. 397-407.
Language: Spanish
Year: 2023
Ranking metric: ICM
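ICM (Information Contrast Measure; Amigó & Delgado, 2022) compares the predicted and gold category sets of each item through their information content, rewarding shared categories and penalizing spurious or missing ones. The snippet below is a minimal sketch of the flat, non-hierarchical formulation with the default weights (2, 2, 3) and gold-frequency probability estimates; the function name is an assumption here, and the leaderboard scores may use a normalized variant, so treat it as illustrative only.

```python
import math
from collections import Counter

def icm(preds, golds):
    """Average unnormalized flat ICM over items.

    preds, golds: parallel lists of label sets, one set of clusters per tweet.
    Category probabilities P(c) are estimated from their frequency in the gold standard.
    """
    n = len(golds)
    counts = Counter(c for labels in golds for c in labels)
    # Information content of one category: IC(c) = -log2 P(c).
    ic_cat = {c: -math.log2(counts[c] / n) for c in counts}

    def ic(labels):
        # Flat categories treated as independent: IC of a set is the sum of per-category ICs.
        # Categories never seen in the gold standard contribute 0 here (a simplification).
        return sum(ic_cat.get(c, 0.0) for c in labels)

    total = 0.0
    for p, g in zip(preds, golds):
        # ICM(A, B) = 2*IC(A) + 2*IC(B) - 3*IC(A union B)
        total += 2 * ic(p) + 2 * ic(g) - 3 * ic(p | g)
    return total / n
```

A perfect prediction scores the information content of the gold set (a positive value), while a fully disjoint prediction is penalized below zero, which is why ICM-based rankings can differ from F1-based ones.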

Task results

System  Precision  Recall  F1  Accuracy  ICM
Hermes-3-Llama-3.1-8B_2 0.5677 0.5677 0.5677 0.5677 0.57
XLM-RoBERTa-large 0.5425 0.5425 0.5425 0.5425 0.54
XLM-RoBERTa-large-2 0.5425 0.5425 0.5425 0.5425 0.54
XLM-RoBERTa-large-v3 0.5425 0.5425 0.5425 0.5425 0.54
Hermes-3-Llama-3.1-8B 0.5379 0.5379 0.5379 0.5379 0.54
Xlm roberta large 0.5343 0.5343 0.5343 0.5343 0.53
Gemma-2B-IT 0.5283 0.5283 0.5283 0.5283 0.53
PlanTL GOB ES roberta large bne 0.5173 0.5173 0.5173 0.5173 0.52
PlanTL GOB ES roberta base bne 0.4906 0.4906 0.4906 0.4906 0.49
Dccuchile bert base spanish wwm cased 0.4874 0.4874 0.4874 0.4874 0.49

If you have published a result better than those listed above, send a message to odesia-comunicacion@lsi.uned.es indicating the result and the DOI of the article, attaching a copy of it if it is not openly published.