Semantic Textual Similarity EN-EN 2017 (EN)

Evaluates the degree to which two English sentences are semantically equivalent. Similarity scores range from 0 (no overlap in meaning) to 5 (equivalence of meaning), with intermediate values reflecting interpretable levels of partial overlap in meaning.

Publication
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, and Lucia Specia. 2017. SemEval-2017 Task 1: Semantic Textual Similarity Multilingual and Crosslingual Focused Evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1–14, Vancouver, Canada. Association for Computational Linguistics.
Language
English
NLP topic
Abstract task
Year
2017
Ranking metric
Pearson correlation
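As a minimal sketch of how the ranking metric could be computed, assuming system predictions and gold similarity scores are parallel lists of floats on the 0–5 scale (the function and variable names below are illustrative, not part of the official evaluation script):

```python
import math

def pearson(gold, pred):
    """Pearson correlation between gold similarity scores and system predictions."""
    n = len(gold)
    mean_g = sum(gold) / n
    mean_p = sum(pred) / n
    # Covariance numerator and the two standard-deviation factors.
    cov = sum((g - mean_g) * (p - mean_p) for g, p in zip(gold, pred))
    sd_g = math.sqrt(sum((g - mean_g) ** 2 for g in gold))
    sd_p = math.sqrt(sum((p - mean_p) ** 2 for p in pred))
    return cov / (sd_g * sd_p)

# Gold scores on the 0-5 STS scale and hypothetical system outputs.
gold = [0.0, 1.5, 3.0, 4.2, 5.0]
pred = [0.3, 1.2, 2.8, 4.5, 4.9]
print(pearson(gold, pred))
```

A value of 1 means the system's scores are a perfect linear match to the human judgments; values near 0 mean no linear relationship. In practice the same result is obtained with `scipy.stats.pearsonr`.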

If you have published a result better than those on the list, send a message to odesia-comunicacion@lsi.uned.es with the result and the DOI of the article, attaching a copy of the article if it is not openly published.