Towards Explainability in Legal Outcome Prediction Models
arXiv (2024)
Abstract
Current legal outcome prediction models, a staple of legal NLP, do not
explain their reasoning. However, to employ these models in the real world,
human legal actors need to be able to understand their decisions. In the case
of common law, legal practitioners reason towards the outcome of a case by
referring to past case law, known as precedent. We contend that precedent is,
therefore, a natural way of facilitating explainability for legal NLP models.
In this paper, we contribute a novel method for identifying the precedent
employed by legal outcome prediction models. Furthermore, by developing a
taxonomy of legal precedent, we are able to compare human judges and our models
with respect to the different types of precedent they rely on. We find that
while the models learn to predict outcomes reasonably well, their use of
precedent is unlike that of human judges.