Explainable Bayesian Network Query Results via Natural Language Generation Systems

ICAIL '19: Proceedings of the Seventeenth International Conference on Artificial Intelligence and Law (2019)

Abstract
Bayesian networks (BNs) are an important modelling technique used to support certain types of decision making in law and forensics. Their value lies in their ability to infer the rational implications of probabilistic knowledge and beliefs, a task that human decision makers struggle with. However, their use is controversial. One of the main obstacles to the more widespread use of BNs is the difficulty of acquiring good explanations of the results obtained with them. While useful techniques exist to visualise, verbalise or abstract BNs and the inner workings of belief propagation algorithms, these techniques provide generic, one-size-fits-all explanations that have, thus far, failed to stem the criticism that BN results lack explainability. Building on the qualified support graph method introduced in earlier work, this paper outlines how a natural language generation system can be constructed to explain Bayesian inference. This constitutes a novel approach to BN explanation with the potential to produce more focussed and compelling explanations of Bayesian inference, as the narratives such a system produces can be tailored to address specific communicative goals and, by extension, the needs of the user.
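To illustrate the kind of probabilistic update the abstract refers to, the toy example below applies Bayes' rule on a minimal two-node network (Hypothesis → Evidence). The variable names and probability values are hypothetical and chosen only for illustration; they are not taken from the paper, and the paper's actual method (qualified support graphs plus natural language generation) is not reproduced here.

```python
# Illustrative sketch: a two-node Bayesian network (Hypothesis -> Evidence)
# with binary variables, showing the posterior update a BN computes.
# All numbers are hypothetical, not drawn from the paper.

p_h = 0.01              # prior P(H): hypothesis is true
p_e_given_h = 0.95      # P(E | H): evidence observed if hypothesis true
p_e_given_not_h = 0.10  # P(E | not H): evidence observed if hypothesis false

# Marginal likelihood of the evidence: P(E) = P(E|H)P(H) + P(E|~H)P(~H)
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Bayes' rule: P(H | E) = P(E | H) P(H) / P(E)
p_h_given_e = p_e_given_h * p_h / p_e

print(f"P(H | E) = {p_h_given_e:.4f}")
```

Despite the strong likelihood ratio, the posterior remains below 9% because of the low prior, a base-rate effect that people routinely misjudge; this is exactly the sort of inference step a human decision maker struggles with and that an explanation system would need to narrate.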