Leveraging Large Language Models for Relevance Judgments in Legal Case Retrieval
arXiv (2024)
Abstract
Collecting relevance judgments for legal case retrieval is a challenging and
time-consuming task. Accurately judging the relevance between two legal cases
requires considerable effort to read the lengthy text and a high level of
domain expertise to extract legal facts and make juridical judgments. With the
advent of advanced large language models, some recent studies have suggested
that it is promising to use LLMs for relevance judgment. Nonetheless, the
method of employing a general large language model for reliable relevance
judgments in legal case retrieval is yet to be thoroughly explored. To fill
this research gap, we devise a novel few-shot workflow tailored to the
relevance judgment of legal cases. The proposed workflow breaks down the annotation
process into a series of stages, imitating the process employed by human
annotators and enabling a flexible integration of expert reasoning to enhance
the accuracy of relevance judgments. By comparing the relevance judgments of
LLMs and human experts, we empirically show that we can obtain reliable
relevance judgments with the proposed workflow. Furthermore, we demonstrate the
capacity to augment existing legal case retrieval models through the synthesis
of data generated by the large language model.
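A minimal sketch of what such a staged workflow could look like in code. The two-stage decomposition (fact extraction, then graded judgment), the prompt wording, and the `call_llm` stub are all illustrative assumptions, not the paper's exact pipeline; in practice `call_llm` would wrap a real LLM API with few-shot expert examples in the prompt.

```python
def call_llm(prompt: str) -> str:
    """Stub standing in for a real LLM API call (hypothetical)."""
    if "Extract the legal facts" in prompt:
        return "Defendant entered a residence at night and stole property."
    if "relevance label" in prompt:
        return "3"
    return ""


def extract_facts(case_text: str) -> str:
    # Stage 1: condense the lengthy case text into its legal facts,
    # mirroring how a human annotator first distills each case.
    return call_llm(f"Extract the legal facts from this case:\n{case_text}")


def judge_relevance(query_facts: str, candidate_facts: str) -> int:
    # Stage 2: compare the two fact summaries and emit a graded
    # relevance label (0-3), the annotator's final judgment step.
    label = call_llm(
        "Given the query facts and the candidate facts, output a "
        "relevance label from 0 to 3.\n"
        f"Query facts: {query_facts}\n"
        f"Candidate facts: {candidate_facts}"
    )
    return int(label)


# Usage: judge a (query case, candidate case) pair end to end.
query_case = "..."      # full text of the query case
candidate_case = "..."  # full text of the candidate case
score = judge_relevance(extract_facts(query_case), extract_facts(candidate_case))
```

Breaking the judgment into stages like this is what lets expert reasoning be injected at each step (e.g. few-shot examples of fact extraction written by legal experts), rather than asking the model for a single end-to-end label.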