Semantic Textual Similarity ES-ES 2017

Evaluates the degree to which two Spanish sentences are semantically equivalent. Similarity scores range from 0, for no overlap in meaning, to 5, for equivalence of meaning; intermediate values reflect interpretable levels of partial overlap in meaning.
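As a rough illustration of the task's input/output format, the sketch below scores a Spanish sentence pair on the 0-5 scale by rescaling embedding cosine similarity. It assumes the sentence-transformers library and uses one of the models from the results table below; the rescaling heuristic and the example sentences are our own, and the listed systems were fine-tuned for the task rather than scored this way.

```python
# Minimal sketch, not the official evaluation pipeline: embed both
# sentences and rescale cosine similarity into the 0-5 STS band.
from sentence_transformers import SentenceTransformer, util

# Model from the results table; sentence-transformers adds mean pooling
# automatically because this is a plain BERT checkpoint.
model = SentenceTransformer("dccuchile/bert-base-spanish-wwm-cased")

s1 = "Un hombre toca la guitarra."
s2 = "Una persona está tocando un instrumento."

emb1, emb2 = model.encode([s1, s2])
cos = util.cos_sim(emb1, emb2).item()  # cosine similarity in [-1, 1]
score = max(cos, 0.0) * 5.0            # naive rescaling to the 0-5 scale
print(f"Predicted similarity: {score:.2f}")
```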

Publication
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, and Lucia Specia. 2017. SemEval-2017 Task 1: Semantic Textual Similarity Multilingual and Crosslingual Focused Evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1–14, Vancouver, Canada. Association for Computational Linguistics.
Language
Spanish
NLP topic
Abstract task
Year
2017
Ranking metric
Pearson correlation
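
Systems are ranked by the Pearson correlation between their predicted scores and the gold 0-5 annotations. A minimal sketch of how that metric is computed, with made-up score values for illustration:

```python
# Minimal sketch of the ranking metric: Pearson correlation between
# system predictions and gold 0-5 annotations (values are made up).
from scipy.stats import pearsonr

gold = [4.8, 3.2, 0.5, 2.0, 5.0]       # hypothetical gold similarity scores
predicted = [4.5, 3.0, 1.1, 2.4, 4.9]  # hypothetical system outputs

r, _ = pearsonr(gold, predicted)
print(f"Pearson correlation: {r:.4f}")
```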

Task results

System                                  F1      Accuracy  MacroF1  Pearson correlation  ICM
ECNU                                    -       -         -        0.8559               -
BIT                                     -       -         -        0.8499               -
FCICU                                   -       -         -        0.8489               -
FCICU                                   -       -         -        0.8484               -
ECNU                                    -       -         -        0.8456               -
dccuchile/bert-base-spanish-wwm-cased   0.8330  0.8330    0.8330   0.8330               0.83
xlm-roberta-large                       0.8287  0.8287    0.8287   0.8287               0.83
PlanTL-GOB-ES/roberta-large-bne         0.8232  0.8232    0.8232   0.8232               0.82
ixa-ehu/ixambert-base-cased             0.8120  0.8120    0.8120   0.8120               0.81
PlanTL-GOB-ES/roberta-base-bne          0.8096  0.8096    0.8096   0.8096               0.81

If you have published a result better than those on the list, send a message to odesia-comunicacion@lsi.uned.es indicating the result and the DOI of the article, and attach a copy of the article if it is not openly published.