Robust Model-Based Reliability Approach to Tackle Shilling Attacks in Collaborative Filtering Recommender Systems.
IEEE Access (2019)
Abstract
As the use of recommender systems becomes generalized in society, the interest in manipulating the orientation of their recommendations is increasing. Shilling attack strategies introduce malicious profiles into collaborative filtering recommender systems in order to promote one's own products or services or to discredit those of the competition. Academic research against shilling attacks has focused on statistical approaches to detect unusual patterns in user ratings. Nowadays, there is a growing research area focused on the design of robust machine learning methods to neutralize the malicious profiles inserted into the system. This paper proposes an innovative robust method, based on matrix factorization, to neutralize shilling attacks. Our method obtains the reliability value associated with each prediction of a user for an item. By monitoring unusual reliability variations in the item predictions, we can avoid promoting shilling-biased predictions into erroneous recommendations. This paper openly provides more than 13,000 individual experiments involving a wide range of attack strategies, both push and nuke, in order to test the proposed approach. The results show that the proposed method is able to neutralize most of the existing attacks; its performance decreases only in situations that are not relevant in practice: when the attack size is not large enough to effectively affect the recommendations provided by the system.
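The idea sketched in the abstract can be illustrated with a toy example. The snippet below is a minimal sketch, not the paper's actual algorithm: it trains a plain SGD matrix factorization model and computes a hypothetical per-item "reliability" score (here, the inverse of the mean absolute reconstruction error on that item's observed ratings). The function names, the reliability formula, and all hyperparameters are illustrative assumptions; the paper's own reliability measure is not specified in this abstract.

```python
import random

def train_mf(ratings, n_users, n_items, k=2, lr=0.01, reg=0.05,
             epochs=200, seed=0):
    """Plain SGD matrix factorization (illustrative, not the paper's model).

    ratings: list of (user, item, rating) triples.
    Returns user-factor matrix P and item-factor matrix Q.
    """
    rng = random.Random(seed)
    P = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_users)]
    Q = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = sum(P[u][f] * Q[i][f] for f in range(k))
            err = r - pred
            for f in range(k):
                pu, qi = P[u][f], Q[i][f]
                # Standard regularized SGD update for both factor vectors.
                P[u][f] += lr * (err * qi - reg * pu)
                Q[i][f] += lr * (err * pu - reg * qi)
    return P, Q

def predict(P, Q, u, i):
    """Dot product of user and item latent factors."""
    return sum(pf * qf for pf, qf in zip(P[u], Q[i]))

def item_reliability(ratings, P, Q, item):
    """Toy reliability score in (0, 1]: high when the model reconstructs
    the item's observed ratings well. A stand-in for the paper's measure;
    a sudden drop after new profiles arrive could flag a shilling attack.
    """
    errs = [abs(r - predict(P, Q, u, i)) for u, i, r in ratings if i == item]
    if not errs:
        return 0.0
    return 1.0 / (1.0 + sum(errs) / len(errs))

if __name__ == "__main__":
    # Tiny synthetic rating set: 4 users, 3 items.
    ratings = [(0, 0, 5), (0, 1, 3), (1, 0, 4), (1, 2, 2),
               (2, 1, 4), (2, 2, 5), (3, 0, 2), (3, 1, 1)]
    P, Q = train_mf(ratings, n_users=4, n_items=3)
    print("reliability of item 0:", item_reliability(ratings, P, Q, 0))
```

In the paper's setting, this score would be tracked over time for each item; an anomalous reliability variation for an item would prevent its shilling-inflated predictions from being promoted into recommendations.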
Keywords
Recommender systems, shilling attacks, collaborative filtering, reliability, malicious profiles, matrix factorization