Efficient Information Extraction in Few-Shot Relation Classification through Contrastive Representation Learning
arXiv (2024)
Abstract
Differentiating relationships between entity pairs with limited labeled
instances poses a significant challenge in few-shot relation classification.
Representations of textual data extract rich information spanning the domain,
entities, and relations. In this paper, we introduce a novel approach to
enhance information extraction combining multiple sentence representations and
contrastive learning. While representations in relation classification are
commonly extracted using entity marker tokens, we argue that substantial
information within the internal model representations remains untapped. To
address this, we propose aligning multiple sentence representations, such as
the [CLS] token, the [MASK] token used in prompting, and entity marker tokens.
Our method employs contrastive learning to extract complementary discriminative
information from these individual representations. This is particularly
relevant in low-resource settings where information is scarce. Leveraging
multiple sentence representations is especially effective in distilling
discriminative information for relation classification when additional
information, such as relation descriptions, is not available. We validate the
adaptability of our approach, which maintains robust performance in scenarios
that include relation descriptions and flexibly adapts to different resource
constraints.
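As a minimal illustration of the idea described above (not the authors' implementation), the sketch below aligns several sentence representations with an InfoNCE-style contrastive objective. The [CLS], [MASK], and entity-marker vectors are replaced by random stand-ins, and the pairwise loss structure and temperature value are assumptions:

```python
import numpy as np

def info_nce_loss(view_a, view_b, temperature=0.1):
    """InfoNCE loss: pull together two views of the same sentence
    (diagonal positives) and push apart views of different sentences
    (off-diagonal negatives). view_a, view_b: (batch, dim) arrays."""
    # L2-normalize so dot products become cosine similarities
    a = view_a / np.linalg.norm(view_a, axis=1, keepdims=True)
    b = view_b / np.linalg.norm(view_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature  # (batch, batch) similarity matrix
    # log-softmax over each row; positives sit on the diagonal
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))

rng = np.random.default_rng(0)
batch, dim = 4, 16
# Hypothetical stand-ins for the three sentence representations:
cls_repr = rng.normal(size=(batch, dim))                        # [CLS] token
mask_repr = cls_repr + 0.05 * rng.normal(size=(batch, dim))     # [MASK] (prompt) view
marker_repr = cls_repr + 0.05 * rng.normal(size=(batch, dim))   # entity-marker view

# Align each pair of views so they share discriminative information
loss = (info_nce_loss(cls_repr, mask_repr)
        + info_nce_loss(cls_repr, marker_repr)
        + info_nce_loss(mask_repr, marker_repr)) / 3
print(loss)
```

In this toy setup the three views of a sentence are correlated, so the loss is low; replacing one view with unrelated vectors drives it up, which is the gradient signal that would pull a model's internal representations toward agreement.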