Explaining Link Predictions in Knowledge Graph Embedding Models with Influential Examples

arXiv (2022)

Abstract
We study the problem of explaining link predictions in Knowledge Graph Embedding (KGE) models. We propose an example-based approach that exploits the latent-space representations of nodes and edges in a knowledge graph to explain predictions. We evaluate the importance of the identified triples by observing the progressive degradation of model performance as influential triples are removed. Our experiments demonstrate that this approach to generating explanations outperforms baselines on KGE models for two publicly available datasets.
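The example-based idea sketched in the abstract can be illustrated with a toy ranking step: score each training triple by its latent-space similarity to the predicted triple and return the top-k as candidate influential examples. This is a minimal sketch under assumed details; the scoring function, similarity measure, and all names (`score_triple`, `influential_triples`) are illustrative, not the paper's actual implementation.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    nu = math.sqrt(dot(u, u))
    nv = math.sqrt(dot(v, v))
    return dot(u, v) / (nu * nv) if nu and nv else 0.0

def score_triple(emb, h, r, t):
    # DistMult-style scoring as a stand-in KGE model (assumption,
    # not necessarily the model family used in the paper).
    return sum(a * b * c for a, b, c in zip(emb[h], emb[r], emb[t]))

def influential_triples(emb, target, train_triples, k=2):
    """Rank training triples by average latent-space similarity to the
    target prediction -- an example-based proxy for influence."""
    th, tr, tt = target
    def sim(triple):
        h, r, t = triple
        return (cosine(emb[h], emb[th])
                + cosine(emb[r], emb[tr])
                + cosine(emb[t], emb[tt])) / 3.0
    return sorted(train_triples, key=sim, reverse=True)[:k]
```

The deletion-based evaluation described in the abstract would then remove the top-k triples from the training set, retrain (or fine-tune) the KGE model, and measure how much a ranking metric such as MRR degrades; a steeper drop indicates the removed triples were genuinely influential.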
Keywords
knowledge graph, link prediction, influential examples, knowledge graph embedding models