Investigating Textual Case-Based XAI.

Rosina O. Weber, Adam J. Johs, Jianfei Li, Kent Huang

ICCBR (2018)

Abstract
This paper demonstrates how case-based reasoning (CBR) can be used for an explainable artificial intelligence (XAI) approach to justify solutions produced by an opaque learning method (i.e., target method), particularly in the context of unstructured textual data. Our general hypothesis is twofold: (1) there exist patterns in the relationship between problems and solutions, and there should be data or a body of knowledge that describes how problems and solutions relate; and (2) the identification, manipulation, and learning of such patterns through case features can help create and reuse explanations for solutions produced by the target method. When the target method relies on neural network architectures (e.g., deep learning), the resulting latent space (i.e., word embeddings) becomes useful for finding patterns and semantic relatedness in textual data. In the proposed approach, case problems are input-output pairs from the target method, and case solutions are explanations. We exemplify our approach by explaining recommended citations from Citeomatic, a multi-layer neural-network architecture from the Allen Institute for Artificial Intelligence. Citation analysis is the body of knowledge that describes how query documents (i.e., inputs) relate to recommended citations (i.e., outputs). We build cases and a similarity assessment to learn features that represent patterns between problems and solutions that can lead to the reuse of corresponding explanations. The illustrative implementation we present becomes an explanation-augmented citation recommender that targets human-computer trust.
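
The abstract describes a retrieve-and-reuse loop over cases whose problems are (input, output) pairs from the target method and whose solutions are explanations. The following is a minimal sketch of that idea, assuming cosine similarity over embedding vectors; the `Case` structure, the random 50-dimensional embeddings, and the equal weighting of the two similarities are illustrative assumptions, not the paper's implementation, which relies on Citeomatic's learned latent space and citation-analysis knowledge.

```python
# Sketch: a case pairs a (query document, recommended citation) problem with an
# explanation as its solution; retrieval reuses the explanation of the most
# similar stored case. Embeddings here are random stand-ins for a learned
# latent space (assumption, not the paper's actual features).
from dataclasses import dataclass
import numpy as np

@dataclass
class Case:
    query_vec: np.ndarray      # embedding of the query document (problem, part 1)
    citation_vec: np.ndarray   # embedding of the recommended citation (problem, part 2)
    explanation: str           # why this citation fits the query (solution)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve_explanation(case_base: list[Case],
                         query_vec: np.ndarray,
                         citation_vec: np.ndarray) -> str:
    """Reuse the explanation of the stored case whose problem is most similar
    to the new (query, citation) pair; similarities are averaged (assumption)."""
    def score(c: Case) -> float:
        return 0.5 * cosine(c.query_vec, query_vec) + 0.5 * cosine(c.citation_vec, citation_vec)
    return max(case_base, key=score).explanation

# Toy usage with random embeddings in place of real ones.
rng = np.random.default_rng(0)
case_base = [
    Case(rng.normal(size=50), rng.normal(size=50),
         "The citation shares methodology with the query document."),
    Case(rng.normal(size=50), rng.normal(size=50),
         "The citation is a foundational work the query document builds on."),
]
print(retrieve_explanation(case_base, rng.normal(size=50), rng.normal(size=50)))
```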
Keywords
Case-based reasoning, Textual case-based reasoning, Explainable artificial intelligence, Semantic relatedness, Word embeddings, Citation recommendation, Human-computer trust