DIANN 2023: Disability detection

The task consists of detecting disability mentions in abstracts of biomedical articles written in English. It follows the annotation guidelines established in the IberEval 2018 shared task "Disability annotation on documents from the biomedical domain (DIANN)".
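As an illustration only, the sketch below shows how disability mentions might be extracted from an abstract with a Hugging Face token-classification pipeline. The checkpoint name is a hypothetical placeholder, not an official task model, and the output format depends on the model's label scheme.

```python
# Minimal sketch: extracting disability mentions from an English abstract
# with a Hugging Face token-classification pipeline. The checkpoint name
# below is a placeholder, not an official DIANN/ODESIA model.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="your-org/roberta-large-diann-en",  # hypothetical fine-tuned checkpoint
    aggregation_strategy="simple",            # merge sub-tokens into full mentions
)

abstract = (
    "Patients with hearing loss and severe visual impairment were "
    "assessed for cognitive decline over a two-year period."
)

for mention in ner(abstract):
    # Each entry carries the surface form, confidence score and character offsets.
    print(mention["word"], mention["score"], mention["start"], mention["end"])
```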

Publication: Hermenegildo Fabregat, Juan Martínez-Romo, and Lourdes Araujo. 2018. Overview of the DIANN task: Disability annotation task. In Proceedings of the Third Workshop on Evaluation of Human Language Technologies for Iberian Languages (IberEval 2018), co-located with the 34th Conference of the Spanish Society for Natural Language Processing (SEPLN 2018), Sevilla, Spain, September 18, 2018, volume 2150 of CEUR Workshop Proceedings, pages 1–14. CEUR-WS.org.
Language: English
Year: 2023
Ranking metric: F1

Task results

System                               F1      Accuracy  Macro F1  Pearson correlation  ICM
RoBERTa large                        0.7982  0.7982    0.7982    0.7982               0.80
XLM-RoBERTa large                    0.7740  0.7740    0.7740    0.7740               0.77
ixa-ehu ixambert base cased          0.7695  0.7450    0.7450    0.7695               0.75
RoBERTa base                         0.7612  0.7612    0.7612    0.7612               0.76
XLM-RoBERTa base                     0.7438  0.7438    0.7438    0.7438               0.74
BERT base multilingual cased         0.7384  0.7384    0.7384    0.7384               0.74
BERT base cased                      0.7364  0.7364    0.7364    0.7364               0.74
DistilBERT base uncased              0.6966  0.6966    0.6966    0.6966               0.70
DistilBERT base multilingual cased   0.6950  0.6950    0.6950    0.6950               0.69
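Systems are ranked by F1 over the predicted disability mentions. As a rough sketch, assuming strict exact-match scoring of mention spans (the precise matching criteria come from the DIANN 2018 guidelines and may differ in detail), precision, recall, and F1 can be computed as follows:

```python
# Minimal sketch of span-level precision/recall/F1, assuming strict
# exact-match scoring of (document_id, start, end) spans. The official
# evaluation criteria may differ.

def span_f1(gold: set[tuple[str, int, int]],
            pred: set[tuple[str, int, int]]) -> tuple[float, float, float]:
    """Return (precision, recall, F1) for two sets of mention spans."""
    tp = len(gold & pred)                        # correctly predicted mentions
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Example: one gold mention missed, one spurious prediction.
gold = {("doc1", 14, 26), ("doc1", 31, 56)}
pred = {("doc1", 14, 26), ("doc1", 60, 72)}
print(span_f1(gold, pred))  # (0.5, 0.5, 0.5)
```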

If you have published a result better than those on the list, send a message to odesia-comunicacion@lsi.uned.es indicating the result and the DOI of the article, along with a copy of the article if it is not openly available.