Interpretable Explanations for Probabilistic Inference in Markov Logic

IEEE BigData (2021)

Abstract
Markov Logic Networks (MLNs) represent relational knowledge using a combination of first-order logic and probabilistic models. In this paper, we develop an approach to explain the results of probabilistic inference in MLNs. Unlike approaches such as LIME and SHAP that explain black-box classifiers, explaining MLN inference is harder since the data is interconnected. We develop an explanation framework that computes importance weights for MLN formulas based on their influence on the marginal likelihood. However, it turns out that computing these importance weights exactly is a hard problem, and even approximate sampling methods are unreliable when the MLN is large, resulting in non-interpretable explanations. Therefore, we develop an approach where we reduce the large MLN into simpler coalitions of formulas that approximately preserve relational dependencies and generate explanations based on these coalitions. We then weight explanations from different coalitions and combine them into a single explanation. Our experiments illustrate that our approach generates more interpretable explanations in several text processing problems as compared to other state-of-the-art methods.
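For intuition about the first step described above, the sketch below (not the paper's algorithm) scores each formula of a toy MLN by how much zeroing its weight changes a query marginal; the marginal is computed by brute-force enumeration of possible worlds, which is only feasible at this tiny scale. All atoms, formulas, and weights are illustrative assumptions.

```python
from itertools import product
from math import exp

# Toy ground MLN over two people A and B.
ATOMS = ["Smokes(A)", "Smokes(B)", "Cancer(A)", "Cancer(B)", "Friends(A,B)"]

def implies(p, q):
    return (not p) or q

# Each formula: (weight, list of groundings), a grounding maps a world (dict) to bool.
FORMULAS = [
    # Smokes(x) => Cancer(x)
    (1.5, [lambda w: implies(w["Smokes(A)"], w["Cancer(A)"]),
           lambda w: implies(w["Smokes(B)"], w["Cancer(B)"])]),
    # Friends(A,B) & Smokes(A) => Smokes(B)
    (1.0, [lambda w: implies(w["Friends(A,B)"] and w["Smokes(A)"], w["Smokes(B)"])]),
]

EVIDENCE = {"Smokes(A)": True, "Friends(A,B)": True}
QUERY = "Cancer(B)"

def worlds():
    # Enumerate all truth assignments to the non-evidence atoms.
    free = [a for a in ATOMS if a not in EVIDENCE]
    for values in product([False, True], repeat=len(free)):
        world = dict(EVIDENCE)
        world.update(zip(free, values))
        yield world

def world_weight(world, weights):
    # exp of the weighted count of satisfied ground formulas.
    total = 0.0
    for w, (_, groundings) in zip(weights, FORMULAS):
        total += w * sum(g(world) for g in groundings)
    return exp(total)

def marginal(query, weights):
    num = den = 0.0
    for world in worlds():
        z = world_weight(world, weights)
        den += z
        if world[query]:
            num += z
    return num / den

base_weights = [w for w, _ in FORMULAS]
p_full = marginal(QUERY, base_weights)
print(f"P({QUERY} | evidence) = {p_full:.3f}")

# Importance of formula i: drop in the query marginal when its weight is zeroed.
for i in range(len(FORMULAS)):
    ablated = list(base_weights)
    ablated[i] = 0.0
    print(f"formula {i} importance: {p_full - marginal(QUERY, ablated):+.3f}")
```

Exact enumeration grows exponentially in the number of ground atoms, which is why the paper resorts to coalition-based approximations for large MLNs rather than direct computation of this kind.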
Keywords
Explainable AI, Markov Logic Networks, Statistical Relational Models