Few shot clinical entity recognition in three languages: Masked language models outperform LLM prompting
CoRR (2024)
Abstract
Large Language Models are becoming the go-to solution for many natural
language processing tasks, including in specialized domains where their
few-shot capacities are expected to yield high performance in low-resource
settings. Herein, we aim to assess the performance of Large Language Models for
few-shot clinical entity recognition in multiple languages. We evaluate named
entity recognition in English, French and Spanish using 8 in-domain (clinical)
and 6 out-domain gold standard corpora. We assess the performance of 10
auto-regressive language models using prompting and 16 masked language models
used for text encoding in a biLSTM-CRF supervised tagger. We create a few-shot
set-up by limiting the amount of annotated data available to 100 sentences. Our
experiments show that although larger prompt-based models tend to achieve
competitive F-measure for named entity recognition outside the clinical domain,
this level of performance does not carry over to the clinical domain where
lighter supervised taggers relying on masked language models perform better,
even with the performance drop incurred from the few-shot set-up. In all
experiments, the CO2 impact of masked language models is lower than that of
auto-regressive models. Results are consistent across the three languages and
suggest that few-shot learning using Large Language Models is not
production-ready for named entity recognition in the clinical domain. Instead,
these models could be used to speed up the production of gold standard
annotated data.
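The few-shot set-up described in the abstract limits the annotated training data to 100 sentences. A minimal sketch of how such a split might be drawn from a larger gold-standard corpus is shown below; the corpus format (token/tag pairs) and the sampling function are assumptions for illustration, not the paper's actual protocol.

```python
import random

def make_few_shot_split(sentences, k=100, seed=42):
    """Subsample k annotated sentences to simulate a few-shot setting.

    sentences: list of (tokens, tags) pairs from a gold-standard corpus
               (hypothetical format; the paper's exact preprocessing may differ).
    Uses a fixed seed so the split is reproducible across runs.
    """
    if len(sentences) <= k:
        return list(sentences)
    rng = random.Random(seed)
    return rng.sample(sentences, k)

# Toy corpus: each item is (tokens, BIO tags) for one sentence.
corpus = [([f"tok{i}"], ["O"]) for i in range(500)]
train_100 = make_few_shot_split(corpus)
print(len(train_100))  # 100
```

A fixed random seed keeps the few-shot subset identical across model comparisons, which matters when contrasting prompting-based and supervised taggers on the same limited data.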