Stealthy Targeted Data Poisoning Attack On Knowledge Graphs

2021 IEEE 37th International Conference on Data Engineering (ICDE 2021)

Abstract
A host of different KG embedding techniques have emerged recently and have been empirically shown to be very effective at accurately predicting missing facts in a KG, thus improving its coverage and quality. Unfortunately, embedding techniques can fall prey to adversarial data poisoning attacks. In this form of attack, facts are added to or deleted from a KG (these edits are called perturbations) in order to manipulate the plausibility of target facts in the KG. While recent works confirm this intuition, the attacks considered there ignore the risk of exposure. Intuitively, an attack is of limited value if it is highly likely to be caught, i.e., exposed. To address this, we introduce a notion of exposure risk and propose a novel problem of attacking a KG by means of perturbations, where the goal is to maximize the manipulation of the target fact's plausibility while keeping the risk of exposure under a given budget. We design a deep reinforcement learning-based framework, called RATA, that learns to use low-risk perturbations without compromising performance, i.e., the manipulation of target fact plausibility. We evaluate RATA against recently proposed KG attack strategies on two benchmark datasets and on different kinds of target facts. Our experiments show that RATA achieves state-of-the-art performance while incurring only a fraction of the risk.
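The problem the abstract describes can be read as constrained subset selection: choose perturbations that maximize the change in a target fact's plausibility while the total exposure risk stays under a budget. The sketch below illustrates only that problem setup with a simple greedy gain-per-risk baseline; it is not the paper's RATA method, which learns a perturbation policy with deep reinforcement learning. All names (Perturbation, gain, risk, greedy_attack) and the numeric values are hypothetical assumptions for illustration.

# A minimal, hypothetical sketch of the attack objective: pick KG
# perturbations (fact additions/deletions) that maximize the assumed
# drop in a target fact's plausibility, subject to an exposure-risk
# budget. Greedy gain-per-risk selection, NOT the paper's RL approach.

from dataclasses import dataclass

@dataclass
class Perturbation:
    fact: tuple   # (head, relation, tail) triple to add or delete
    add: bool     # True = add the fact, False = delete it
    gain: float   # assumed change in the target fact's plausibility
    risk: float   # assumed exposure risk of making this perturbation

def greedy_attack(candidates, risk_budget):
    """Select perturbations by gain-per-risk until the budget is spent."""
    chosen, spent = [], 0.0
    for p in sorted(candidates,
                    key=lambda p: p.gain / max(p.risk, 1e-9),
                    reverse=True):
        if spent + p.risk <= risk_budget:
            chosen.append(p)
            spent += p.risk
    return chosen, spent

if __name__ == "__main__":
    # Illustrative candidate perturbations with made-up gains and risks.
    candidates = [
        Perturbation(("a", "r1", "b"), True,  gain=0.30, risk=0.50),
        Perturbation(("c", "r2", "d"), False, gain=0.20, risk=0.10),
        Perturbation(("e", "r1", "f"), True,  gain=0.25, risk=0.15),
    ]
    chosen, spent = greedy_attack(candidates, risk_budget=0.30)
    print(f"chose {len(chosen)} perturbations, risk spent = {spent:.2f}")

In this toy run the greedy baseline picks the two low-risk perturbations (total risk 0.25) and skips the high-risk one, mirroring the paper's premise that much of the plausibility manipulation can be achieved at a fraction of the exposure risk.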
Keywords
exposure risk,low-risk perturbations,KG attacks,benchmark datasets,stealthy targeted data poisoning attack,knowledge graphs,embedding techniques,adversarial data poisoning attack,deep reinforcement learning,RATA