Evaluating Pre-training Strategies for Collaborative Filtering

UMAP (2023)

Abstract
Pre-training is essential for effective representation learning models, especially in natural language processing and computer vision tasks. The core idea is to learn representations, usually through unsupervised or self-supervised approaches on large, generic source datasets, and to use those pre-trained representations (aka embeddings) as initial parameter values during training on the target dataset. Seminal works in this area show that pre-training can act as a regularization mechanism, placing the model parameters in regions of the optimization landscape closer to better local minima than random parameter initialization. However, no systematic studies evaluate the effectiveness of pre-training strategies for model-based collaborative filtering. This paper conducts a broad set of experiments to evaluate different pre-training strategies for collaborative filtering using Matrix Factorization (MF) as the base model. We show that such models, equipped with pre-training in a transfer learning setting, can vastly improve prediction quality compared to the standard random-initialization baseline, reaching state-of-the-art results on standard recommender systems benchmarks. We also present alternatives for the out-of-vocabulary item problem (i.e., items present in the target but not in the source dataset) and show that pre-training in the context of MF acts as a regularizer, explaining the improvement in model generalization.
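The abstract does not include implementation details, but the general warm-start idea it describes can be sketched as follows: item factors of an MF model are initialized from embeddings pre-trained on a source dataset, falling back to random initialization for out-of-vocabulary items, and training on the target dataset then proceeds exactly as with random initialization. This is a minimal illustrative sketch only; the function and parameter names (init_item_factors, pretrained_items, the rating-triple format) are assumptions and not the authors' code.

```python
import numpy as np

def init_item_factors(n_items, dim, pretrained=None, rng=None):
    """Item factors: copy pre-trained vectors where available, random otherwise.

    `pretrained` is a hypothetical {item_id: vector} mapping learned on the
    source dataset; items absent from it (out-of-vocabulary with respect to
    the source) simply keep their random initialization.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    Q = 0.1 * rng.standard_normal((n_items, dim))
    if pretrained:
        for item_id, vec in pretrained.items():
            if 0 <= item_id < n_items:
                Q[item_id] = vec
    return Q

def train_mf(ratings, n_users, n_items, dim=32, lr=0.01, reg=0.02,
             epochs=20, pretrained_items=None):
    """Vanilla SGD matrix factorization on (user, item, rating) triples."""
    rng = np.random.default_rng(0)
    P = 0.1 * rng.standard_normal((n_users, dim))               # user factors: random init
    Q = init_item_factors(n_items, dim, pretrained_items, rng)  # item factors: warm-started
    for _ in range(epochs):
        for u, i, r in ratings:
            pu, qi = P[u].copy(), Q[i].copy()
            err = r - pu @ qi                                   # prediction error
            P[u] += lr * (err * qi - reg * pu)                  # regularized SGD updates
            Q[i] += lr * (err * pu - reg * qi)
    return P, Q
```

Under this reading, only the starting point of the item embeddings changes between the pre-trained and random-initialization settings, which is what lets the paper attribute quality differences to the initialization itself.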
Keywords
model initialization, transfer learning, collaborative filtering