Similarity-Based Explanations meet Matrix Factorization via Structure-Preserving Embeddings

Intelligent User Interfaces (2022)

Abstract
Embeddings are core components of modern model-based Collaborative Filtering (CF) methods, such as Matrix Factorization (MF) and its Deep Learning variations. In essence, embeddings are mappings of the original sparse representation of categorical features (e.g., users and items) to dense low-dimensional representations. A well-known limitation of such methods is that the learned embeddings are opaque and hard to explain to users. On the other hand, a key feature of simpler KNN-based CF models (a.k.a. user/item-based CF) is that they naturally yield similarity-based explanations, i.e., similar users/items serve as evidence to support model recommendations. Unlike related work that tries to attribute explicit meaning (via metadata) to the learned embeddings, in this paper we propose to equip the learned embeddings of MF with meaningful similarity-based explanations. First, we show that the learned user/item embeddings of MF do not preserve the distances between users (or items) in the original rating matrix. Next, we propose a novel approach that initializes Stochastic Gradient Descent (SGD) with user/item embeddings that preserve the structural properties of the original input data. We conduct a broad set of experiments and show that our method enables explanations very similar to those provided by KNN-based approaches, without harming prediction performance. Moreover, we show that fine-tuning the structure-preserving embeddings may unlock better local minima in the optimization space, allowing simple vanilla MF to reach performance competitive with the best-known models for the rating prediction task.
Keywords
explanations, transparency, matrix factorization, model initialization
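
The abstract only sketches the mechanism, so here is a minimal illustration of the idea in Python. It uses a truncated SVD of the rating matrix as one plausible structure-preserving initialization (the paper's actual embedding construction may differ) and then fine-tunes the factors with vanilla SGD on the observed ratings; all function names and hyperparameters below are hypothetical.

import numpy as np

def structure_preserving_init(R, k):
    # One plausible structure-preserving initialization (an assumption,
    # not necessarily the paper's method): a rank-k truncated SVD of R,
    # whose row/column factors roughly retain the geometry of the
    # original rating matrix.
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    root_s = np.sqrt(s[:k])
    P = U[:, :k] * root_s        # user embeddings, shape (n_users, k)
    Q = Vt[:k, :].T * root_s     # item embeddings, shape (n_items, k)
    return P, Q

def sgd_mf(R, P, Q, lr=0.01, reg=0.05, epochs=20):
    # Fine-tune the initialized factors with plain SGD over the
    # observed (nonzero) entries, as in vanilla MF for rating prediction.
    users, items = R.nonzero()
    for _ in range(epochs):
        for u, i in zip(users, items):
            err = R[u, i] - P[u] @ Q[i]
            pu = P[u].copy()
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * pu - reg * Q[i])
    return P, Q

# Toy usage: zeros are treated as missing ratings.
R = np.array([[5., 3., 0.],
              [4., 0., 1.],
              [1., 1., 5.]])
P, Q = structure_preserving_init(R, k=2)
P, Q = sgd_mf(R, P, Q)
print(P @ Q.T)  # reconstructed rating matrix

The intuition, following the abstract, is that SGD only perturbs the initial factors locally, so users (or items) whose embeddings start close tend to stay close after training, which is what makes nearest-neighbor, similarity-based explanations meaningful for the learned MF model.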