SPACE: Self-supervised Dual Preference Enhancing Network for Multimodal Recommendation

IEEE Transactions on Multimedia (2024)

Abstract
Multimodal recommendation is an emerging task that aims to improve the effectiveness of recommendation systems by exploiting multimodal data (images, texts, etc.). Most previous methods struggle to mine item semantic relationships while accurately modeling user modality preferences, which limits recommendation accuracy. To address this issue, this paper proposes a novel and effective Self-suPervised duAl preference enhanCing nEtwork for multimodal recommendation, named SPACE, which further mines user preferences over historical interactions and multimodal item features to obtain more precise user and item representations. Specifically, we design an interaction preference enhancing module to learn both the interactive and latent semantic relationships between users and items. Then, a modality preference enhancing module is established by introducing self-supervised learning (SSL), which strengthens the role of the dominant modality-specific representations of items. Finally, the enhanced interaction and modality representations are fused, and recommendation performance is further improved by dual joint prediction. Extensive experiments on three real-world datasets demonstrate that the proposed SPACE model outperforms state-of-the-art multimodal recommendation methods.
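To make the abstract's architecture concrete, below is a minimal, hypothetical sketch of the two ideas it names: a self-supervised (contrastive) objective that aligns modality-specific item views, and a dual joint prediction that combines an interaction-based score with a modality-based score. All module names, feature dimensions, and loss choices are illustrative assumptions; they are not the authors' implementation.

```python
# Hypothetical sketch of the dual-preference idea; dimensions and losses are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualPreferenceSketch(nn.Module):
    def __init__(self, n_users, n_items, dim=64, visual_dim=4096, text_dim=384, tau=0.2):
        super().__init__()
        self.user_id_emb = nn.Embedding(n_users, dim)   # interaction-preference branch
        self.item_id_emb = nn.Embedding(n_items, dim)
        self.visual_proj = nn.Linear(visual_dim, dim)   # assumed visual feature size
        self.text_proj = nn.Linear(text_dim, dim)       # assumed text feature size
        self.tau = tau

    def modality_ssl_loss(self, visual_feat, text_feat):
        # Self-supervised contrastive alignment between the two modality views of the
        # same item (an InfoNCE-style stand-in for "modality preference enhancing").
        v = F.normalize(self.visual_proj(visual_feat), dim=-1)
        t = F.normalize(self.text_proj(text_feat), dim=-1)
        logits = v @ t.T / self.tau
        labels = torch.arange(v.size(0), device=v.device)
        return F.cross_entropy(logits, labels)

    def dual_scores(self, users, items, visual_feat, text_feat):
        # Dual joint prediction: an interaction-based score plus a modality-based
        # score; the final ranking score is their sum.
        u = self.user_id_emb(users)
        i_id = self.item_id_emb(items)
        i_mm = self.visual_proj(visual_feat) + self.text_proj(text_feat)
        return (u * i_id).sum(-1) + (u * i_mm).sum(-1)

def bpr_loss(pos_scores, neg_scores):
    # Standard Bayesian personalized ranking loss over (positive, negative) pairs,
    # which could be applied to the dual scores alongside the SSL loss.
    return -F.logsigmoid(pos_scores - neg_scores).mean()
```

In such a setup, the recommendation loss (e.g., BPR on the dual scores) and the SSL loss would be optimized jointly with a weighting coefficient; the actual objective used by SPACE may differ.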
Keywords
Multimodal recommendation, self-supervised learning, preference enhancing, dual joint prediction