A reading comprehension task on a dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage. Systems must select the answer from all possible spans in the passage, thus needing to cope with a fairly large number of candidates.
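As an illustration of span selection, the sketch below runs an extractive question-answering model over a short passage with the Hugging Face `transformers` pipeline. The checkpoint `distilbert-base-uncased-distilled-squad` and the example passage are assumptions chosen only for the demo; they are not part of this task description or of the results listed further down.

```python
# Minimal sketch: extractive QA as span selection over the passage.
# Assumes the `transformers` library and a SQuAD-fine-tuned checkpoint
# (distilbert-base-uncased-distilled-squad, chosen only for illustration).
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-uncased-distilled-squad")

passage = (
    "The Normans were the people who in the 10th and 11th centuries "
    "gave their name to Normandy, a region in France."
)
question = "In what country is Normandy located?"

# The model scores candidate (start, end) token pairs in the passage and
# returns the highest-scoring span as the predicted answer.
result = qa(question=question, context=passage)
print(result["answer"], result["score"])
```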
Publication
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ Questions for Machine Comprehension of Text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics.
Language
English
URL
Task
NLP topic
Abstract task
Dataset
Year
2016
Publication link
Ranking metric
F1
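The ranking metric is the SQuAD-style F1: token-level overlap between the predicted span and the gold answer, averaged over questions. Below is a minimal sketch of that computation; answer normalization is simplified here (lowercasing and whitespace splitting only), whereas the official script also strips articles and punctuation.

```python
# Sketch of SQuAD-style F1: bag-of-tokens overlap between prediction and gold answer.
from collections import Counter

def f1_score(prediction: str, ground_truth: str) -> float:
    pred_tokens = prediction.lower().split()
    gold_tokens = ground_truth.lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(f1_score("in France", "France"))  # 0.666...: partial credit for overlapping tokens
```

In the official evaluation, the maximum F1 over all reference answers for a question is taken before averaging over the dataset.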
Task results
System | F1 |
---|---|
roberta-large | 0.8724 |
xlm-roberta-large | 0.8581 |
roberta-base | 0.8427 |
ixa-ehu/ixambert-base-cased | 0.8187 |
bert-base-multilingual-cased | 0.8059 |
xlm-roberta-base | 0.7998 |
bert-base-cased | 0.7968 |
distilbert-base-uncased | 0.7602 |
distilbert-base-multilingual-cased | 0.7467 |