Document-Level In-Context Few-Shot Relation Extraction via Pre-Trained Language Models
CoRR (2023)
Abstract
Relation extraction aims at inferring structured human knowledge from textual
documents. State-of-the-art methods based on language models commonly have two
limitations: (1) they require named entities either to be given as input or to
be inferred, which introduces additional noise, and (2) they require human
annotations of documents. As a remedy, we present a novel framework for
document-level in-context few-shot relation extraction via pre-trained language
models. We achieve crucial benefits in that we eliminate the need for both
named entity recognition and human annotation of documents. Unlike existing
methods based on fine-tuning, our framework is flexible in that it can easily
be updated for a new set of relations without re-training. We evaluate our
framework using DocRED, the largest publicly available dataset for
document-level relation extraction, and demonstrate that it achieves
state-of-the-art performance. Finally, we show that our framework's
extractions are of considerably higher quality than the original labels in the
DocRED development set. To the best of our knowledge, we are the first to
reformulate document-level relation extraction as a tailored in-context
few-shot learning paradigm.
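To make the in-context few-shot paradigm described above concrete, the sketch below shows how such a setup might assemble a prompt: a handful of demonstration documents with gold relation triples, followed by an unlabeled target document that the pre-trained language model is asked to complete. The function name, the prompt template, and the triple format are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a document-level in-context few-shot prompt.
# Demonstrations (document + relation triples) come first; the unlabeled
# target document ends the prompt so the LM continues with triples in
# the same format. No named entity recognition is run beforehand.

def build_prompt(examples, target_document):
    """Assemble a few-shot prompt: demonstrations first, query last."""
    parts = []
    for doc, triples in examples:
        triple_lines = "\n".join(
            f"({head}; {relation}; {tail})" for head, relation, tail in triples
        )
        parts.append(f"Document: {doc}\nRelations:\n{triple_lines}")
    # Leave the relations of the target document blank for the LM to fill in.
    parts.append(f"Document: {target_document}\nRelations:")
    return "\n\n".join(parts)

# Toy demonstration set (illustrative content only).
examples = [
    (
        "Marie Curie was born in Warsaw. She worked in Paris.",
        [("Marie Curie", "place of birth", "Warsaw"),
         ("Marie Curie", "work location", "Paris")],
    ),
]
prompt = build_prompt(examples, "Alan Turing was born in London.")
print(prompt)
```

Swapping in a new set of relations only requires changing the demonstrations, which is what makes such a prompt-based setup updatable without re-training.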
Keywords
relation, language models, in-context, few-shot, pre-trained