High-level preferences as positive examples in contrastive learning for multi-interest sequential recommendation

World Wide Web (2024)

Abstract
Sequential recommendation based on the multi-interest framework aims to model a user's multiple interests from different aspects in order to predict future interactions. However, existing work rarely considers how the interests generated by the model differ from one another; in the extreme case, all interest capsules carry the same meaning, and multi-interest modeling fails. To address this issue, we propose High-level Preferences as positive examples in Contrastive Learning for multi-interest Sequential Recommendation (HPCL4SR), which uses contrastive learning over user–item interaction information to distinguish the interests. To obtain high-quality contrastive examples, we introduce category information to construct a global graph and learn the associations between categories, yielding the user's high-level preference interests. A multi-layer perceptron then adaptively fuses the low-level preference features derived from the user's items with the high-level preference features derived from categories. Finally, multi-interest contrastive samples are built from the item sequence and its corresponding categories and fed into the contrastive objective, which optimizes the model parameters and produces multi-interest representations that better match the user sequence. In addition, when modeling the user's item sequence, item categories supervise the learning process to increase the separation between item representations. Extensive experiments on three real-world datasets demonstrate that our method outperforms existing multi-interest recommendation models.
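The core idea — pulling each interest capsule toward its fused high-level (category) preference while pushing it away from the positives of the other capsules — can be sketched with an InfoNCE-style loss. This is a minimal illustration, not the paper's implementation: the fixed-weight `fuse` stands in for the paper's learned MLP fusion, and `temperature` is an assumed hyperparameter.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cosine(u, v):
    # Cosine similarity; guard against zero vectors.
    nu = math.sqrt(dot(u, u)) or 1.0
    nv = math.sqrt(dot(v, v)) or 1.0
    return dot(u, v) / (nu * nv)

def fuse(low, high, w=0.5):
    # Stand-in for the paper's MLP fusion of low-level (item) and
    # high-level (category) preference features: here a fixed
    # convex combination rather than a learned gate.
    return [w * a + (1 - w) * b for a, b in zip(low, high)]

def multi_interest_contrastive_loss(interests, positives, temperature=0.1):
    """InfoNCE-style loss: interest capsule i treats positives[i]
    (its fused high-level preference) as the positive example and the
    other capsules' positives as negatives."""
    loss = 0.0
    for i, z in enumerate(interests):
        logits = [cosine(z, p) / temperature for p in positives]
        m = max(logits)  # log-sum-exp with max-shift for stability
        log_sum = m + math.log(sum(math.exp(x - m) for x in logits))
        loss += -(logits[i] - log_sum)
    return loss / len(interests)
```

When the capsules align with their own high-level preferences the loss is small; mismatched pairings drive it up, which is exactly the pressure that keeps the capsules from collapsing into one shared meaning.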
Keywords
Multi-interest learning, Sequential recommendation, Category information, Contrastive learning