Evaluates the degree to which two Spanish sentences are semantically equivalent to each other. Similarity scores range from 0 for no overlap in meaning to 5 for equivalence of meaning. Values in between reflect interpretable levels of partial overlap in meaning.
Publication
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, and Lucia Specia. 2017. SemEval-2017 Task 1: Semantic Textual Similarity Multilingual and Crosslingual Focused Evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1–14, Vancouver, Canada. Association for Computational Linguistics.
Language
Spanish
Year
2017
Ranking metric
Pearson correlation
Task results
| System | Pearson correlation |
|---|---|
| ECNU | 0.8559 | 
| BIT | 0.8499 | 
| FCICU | 0.8489 | 
| FCICU | 0.8484 | 
| ECNU | 0.8456 | 
| dccuchile/bert-base-spanish-wwm-cased | 0.8330 | 
| xlm-roberta-large | 0.8287 | 
| PlanTL-GOB-ES/roberta-large-bne | 0.8232 | 
| ixa-ehu/ixambert-base-cased | 0.8120 | 
| PlanTL-GOB-ES/roberta-base-bne | 0.8096 | 
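Since systems are ranked by Pearson correlation between their predicted similarity scores and the human-annotated gold scores (on the 0–5 scale), the metric can be sketched in a few lines of Python. The gold and predicted values below are invented for illustration, not taken from the task data:

```python
from statistics import mean

def pearson(xs, ys):
    # Pearson correlation: covariance over the product of standard deviations.
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

gold = [5.0, 3.2, 0.0, 4.1, 1.5]   # hypothetical human similarity labels (0-5)
pred = [4.8, 2.9, 0.4, 3.7, 1.9]   # hypothetical system outputs
print(round(pearson(gold, pred), 3))  # prints 0.993
```

A correlation near 1.0 means the system orders and spaces sentence pairs much like the annotators did; the leaderboard above ranks systems by exactly this quantity on the evaluation set.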

