LOKI: A Practical Data Poisoning Attack Framework Against Next Item Recommendations

IEEE Transactions on Knowledge and Data Engineering (2023)

Abstract
Due to the openness of online platforms, recommendation systems are vulnerable to data poisoning attacks, in which malicious samples are injected into the training set of the recommendation system to manipulate its recommendation results. Existing attack approaches are either based on heuristic rules or designed against specific recommendation models. The former suffer from unsatisfactory performance, while the latter require strong knowledge of the target system. In this paper, we propose a practical poisoning attack approach named LOKI against black-box recommendation systems. LOKI uses reinforcement learning to train an attack agent that generates user behavior samples for data poisoning. In real-world recommendation systems, the cost of retraining recommendation models is high, and the interaction frequency between users and the recommendation system is restricted. We therefore let the agent interact with a recommender simulator instead of the target recommendation system and leverage the transferability of the generated adversarial samples to poison the target system. We also use the influence function to efficiently estimate the influence of injected samples on recommendation results without retraining the models. Extensive experiments on multiple datasets against four representative recommendation models show that LOKI outperforms existing methods. We also discuss the characteristics of vulnerable users/items and evaluate whether anomaly detection methods can mitigate the impact of data poisoning attacks.
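The abstract's point that influence functions can estimate the effect of injected samples without retraining follows the standard first-order influence approximation. Below is a minimal sketch of that idea; the toy logistic-regression surrogate, the feature matrices, and all function names are illustrative assumptions, not LOKI's actual model or code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def per_sample_grad(theta, x, y):
    # Gradient of the logistic loss for a single (features, label) pair.
    return (sigmoid(x @ theta) - y) * x

def hessian(theta, X, damping=1e-3):
    # Hessian of the average logistic loss over the training features,
    # with a small damping term to keep it invertible.
    p = sigmoid(X @ theta)
    w = p * (1.0 - p)
    H = (X * w[:, None]).T @ X / len(X)
    return H + damping * np.eye(X.shape[1])

def influence_on_target(theta, X_train, x_inject, y_inject, x_target, y_target):
    # First-order influence estimate of how adding the injected interaction
    # would change the loss on the target interaction, without retraining:
    #   -grad_target^T  H^{-1}  grad_inject
    H = hessian(theta, X_train)
    g_target = per_sample_grad(theta, x_target, y_target)
    g_inject = per_sample_grad(theta, x_inject, y_inject)
    return -g_target @ np.linalg.solve(H, g_inject)

# Toy usage: random surrogate features standing in for user-item interactions.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 8))
theta = rng.normal(size=8) * 0.1       # pretend this is the trained surrogate
x_fake = rng.normal(size=8)            # candidate poisoning interaction
x_target = rng.normal(size=8)          # target user-item pair to promote
score = influence_on_target(theta, X_train, x_fake, 1.0, x_target, 1.0)
print(f"estimated change in target loss: {score:.4f}")
```

In this reading, candidate fake interactions whose estimated influence most reduces the loss on the target user-item pair would be preferred for injection, which is what lets the attacker rank candidates without retraining the recommender.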
Keywords
Adversarial learning, recommendation system, data poisoning