Transfer Learning in Cross-Domain Sequential Recommendation

Information Sciences (2024)

Abstract
Sequential recommendation captures users' dynamic preferences by modeling the sequential information of their behaviors. However, most existing works focus only on users' behavior sequences in a single domain, and when data in the target domain are insufficient, the recommendation performance may be unsatisfactory. We observe that a user's interests are usually diverse, so the items he/she interacts with during a period of time may come from multiple domains. Moreover, there are also item transition patterns across sequences from different domains: a user's interaction in one domain may affect his/her next interaction in other domains. In this paper, we aim to improve sequential recommendation in the target domain by introducing users' behavior sequences from multiple source domains, and propose a novel solution named transfer via joint attentive preference learning (TJAPL). Specifically, we tackle the studied problem from the perspective of transfer learning and attentive preference learning (APL). For target-domain APL, we adopt the self-attention mechanism to capture users' dynamic preferences in the target domain. Furthermore, to address the scarcity challenge posed by limited target-domain data, we introduce users' behavior sequences from the source domains and devise cross-domain user APL to transfer and share users' overall preferences from multiple source domains to the target domain. We also design cross-domain local APL, which specializes in capturing item transition patterns across different domains for knowledge transfer. These modules are all based on the attention mechanism and can thus accelerate training through parallel computation. Note that TJAPL can be applied to scenarios with multiple source domains, and transferring knowledge from multiple domains is potentially helpful in practical applications. Extensive empirical studies indicate that TJAPL significantly outperforms ten recent and competitive baselines.
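To make the self-attention component of the abstract concrete, the following is a minimal sketch of an attentive preference encoder over a user's item sequence, written in PyTorch. It is an illustration under assumptions, not the authors' TJAPL implementation: the class name AttentivePreferenceEncoder, the single attention block, and all layer sizes are hypothetical choices for exposition.

```python
import torch
import torch.nn as nn

class AttentivePreferenceEncoder(nn.Module):
    """Minimal self-attention encoder over a user's item sequence.

    Illustrative sketch only; the block structure and hyperparameters
    are assumptions, not the TJAPL architecture from the paper.
    """
    def __init__(self, num_items, d_model=64, n_heads=2, max_len=50):
        super().__init__()
        self.item_emb = nn.Embedding(num_items + 1, d_model, padding_idx=0)  # 0 = padding id
        self.pos_emb = nn.Embedding(max_len, d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, seq):
        # seq: (batch, max_len) item ids, 0 marks padded positions
        positions = torch.arange(seq.size(1), device=seq.device)
        x = self.item_emb(seq) + self.pos_emb(positions)
        pad_mask = seq.eq(0)                      # mask padded positions in attention
        out, _ = self.attn(x, x, x, key_padding_mask=pad_mask)
        return self.norm(out + x)                 # (batch, max_len, d_model)

def score_next_items(encoder, seq):
    """Score candidate next items with the last position's representation."""
    h = encoder(seq)[:, -1, :]                    # user's current dynamic preference
    return h @ encoder.item_emb.weight.t()        # similarity to every item embedding
```

In a cross-domain setting such as the one described above, one encoder per domain could be trained and their user-level representations shared or combined for transfer; that combination step is exactly what the paper's cross-domain user APL and local APL modules are designed to learn, and is not shown here.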
Keywords
Cross-Domain Recommendation, Sequential Recommendation, Attentive Learning, Transfer Learning