Multilingual Complex Named Entity Recognition 2022

The task consists of detecting and labeling semantically ambiguous and complex entities in short, low-context settings. Complex NEs, such as the titles of creative works (movie/book/song/software names), are not simple nouns and are harder to recognize. They can take the form of any linguistic constituent, such as an imperative clause (“Dial M for Murder”), and do not look like traditional NEs (person names, locations, organizations).
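As a minimal illustration, such a span would be annotated in BIO format as follows; the CW (creative work) tag is taken from the MultiCoNER taxonomy, and the surrounding sentence and tokenization are invented for the example:

```python
# Illustrative only: BIO annotation of a complex entity span.
# "Dial M for Murder" is a full imperative clause tagged as a single
# creative-work (CW) entity; the context sentence is invented.
tokens = ["Dial", "M", "for", "Murder", "premiered", "in", "1954", "."]
tags   = ["B-CW", "I-CW", "I-CW", "I-CW", "O", "O", "O", "O"]

for token, tag in zip(tokens, tags):
    print(f"{token}\t{tag}")
```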

The task is performed on the MultiCoNER dataset (Malmasi et al., 2022). MultiCoNER provides data from three domains (Wikipedia sentences, questions, and search queries) across 11 languages, which define the 11 monolingual subsets of the shared task. Additionally, the dataset has multilingual and code-mixed subsets.
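For reference, below is a minimal Python sketch for reading the data; it assumes the shared task's CoNLL-style distribution format (comment lines starting with "#", one "token _ _ tag" line per token, and blank lines between sentences), which may differ from the copy you obtain:

```python
# Minimal reader for CoNLL-style NER files (format assumed as above).
def read_conll(path):
    sentences, tokens, tags = [], [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line.startswith("#"):      # sentence-id / domain comments
                continue
            if not line:                  # blank line ends a sentence
                if tokens:
                    sentences.append((tokens, tags))
                    tokens, tags = [], []
                continue
            fields = line.split()
            tokens.append(fields[0])      # first column: surface token
            tags.append(fields[-1])       # last column: BIO tag
    if tokens:                            # flush a trailing sentence
        sentences.append((tokens, tags))
    return sentences
```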

The following named entity classes are tagged: names of people, locations or physical facilities, corporations and businesses, all other groups, consumer products, and titles of creative works such as movies, songs, and books.
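The class abbreviations used in the data follow the taxonomy of Malmasi et al. (2022); the mapping below is a sketch for reference, with the short tags assumed from the paper:

```python
# MultiCoNER entity classes (tag abbreviations assumed per the paper).
ENTITY_TYPES = {
    "PER":  "names of people",
    "LOC":  "locations or physical facilities",
    "CORP": "corporations and businesses",
    "GRP":  "all other groups",
    "PROD": "consumer products",
    "CW":   "titles of creative works (movies, songs, books, software)",
}
```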

Publication
Shervin Malmasi, Anjie Fang, Besnik Fetahu, Sudipta Kar, and Oleg Rokhlenko. 2022. SemEval-2022 Task 11: Multilingual Complex Named Entity Recognition (MultiCoNER). In Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022), pages 1412–1437, Seattle, United States. Association for Computational Linguistics.
Language
English
Year
2022
Ranking metric
F1

Task results

System                               Precision   Recall   F1
roberta-large                        0.7012      0.7012   0.7012
xlm-roberta-large                    0.7007      0.7007   0.7007
roberta-base                         0.6577      0.6577   0.6577
distilbert-base-uncased              0.6563      0.6563   0.6563
bert-base-multilingual-cased         0.6252      0.6252   0.6252
xlm-roberta-base                     0.6080      0.6080   0.6080
ixa-ehu/ixambert-base-cased          0.6075      0.6075   0.6075
bert-base-cased                      0.5993      0.5993   0.5993
distilbert-base-multilingual-cased   0.5693      0.5693   0.5693
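For context, the ranking metric is entity-level F1 over predicted spans. The sketch below uses the seqeval library, the usual choice for CoNLL-style NER evaluation; it is assumed, not confirmed, that the leaderboard's scoring matches it:

```python
# Entity-level precision/recall/F1 with seqeval (assumed to match the
# leaderboard's scoring; a span must match both boundary and type).
from seqeval.metrics import precision_score, recall_score, f1_score

gold = [["B-CW", "I-CW", "I-CW", "I-CW", "O", "B-PER", "O"]]
pred = [["B-CW", "I-CW", "I-CW", "O",    "O", "B-PER", "O"]]  # one boundary error

print("P  =", precision_score(gold, pred))  # 0.5: 1 of 2 predicted spans correct
print("R  =", recall_score(gold, pred))     # 0.5: 1 of 2 gold spans found
print("F1 =", f1_score(gold, pred))         # 0.5
```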

If you have published a result better than those on the list, send a message to odesia-comunicacion@lsi.uned.es indicating the result and the DOI of the article, along with a copy of the article if it is not openly published.