ControlRec: Bridging the Semantic Gap between Language Model and Personalized Recommendation
CoRR (2023)
Abstract
The successful integration of large language models (LLMs) into
recommendation systems has proven to be a major breakthrough in recent studies,
paving the way for more generic and transferable recommendations. However, LLMs
struggle to effectively utilize user and item IDs, which are crucial
identifiers for successful recommendations. This is mainly due to their
distinct representation in a semantic space that is different from the natural
language (NL) typically used to train LLMs. To tackle this issue, we introduce
ControlRec, an innovative Contrastive prompt learning framework for
Recommendation systems. ControlRec treats user IDs and NL as heterogeneous
features and encodes them individually. To promote greater alignment and
integration between them in the semantic space, we have devised two auxiliary
contrastive objectives: (1) Heterogeneous Feature Matching (HFM), which aligns
an item's description with the corresponding item ID, or with the user's next
preferred ID inferred from their interaction sequence, and (2) Instruction
Contrastive Learning (ICL), which merges these two data sources by contrasting
the probability distributions of output sequences generated under diverse
tasks. Experimental results on four public real-world datasets demonstrate the
effectiveness of the proposed method in improving model performance.
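The abstract does not give ControlRec's exact loss formulation, but the HFM objective it describes is a matching task between ID embeddings and natural-language embeddings. As a hedged illustration only, the sketch below shows a generic InfoNCE-style contrastive matching loss of the kind commonly used for such alignment; the function name, embedding shapes, and temperature value are illustrative assumptions, not the paper's implementation.

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_matching_loss(id_embs, text_embs, temperature=0.1):
    """Generic InfoNCE-style matching loss (illustrative, not ControlRec's
    exact objective): the i-th ID embedding should match the i-th text
    embedding; every other text in the batch serves as a negative."""
    n = len(id_embs)
    total = 0.0
    for i in range(n):
        logits = [cosine(id_embs[i], t) / temperature for t in text_embs]
        # Numerically stable log-softmax of the positive pair's logit.
        m = max(logits)
        log_z = m + math.log(sum(math.exp(l - m) for l in logits))
        total += -(logits[i] - log_z)
    return total / n

# Toy usage: correctly paired ID/text embeddings should incur a lower
# loss than deliberately mismatched ones.
ids = [[1.0, 0.0], [0.0, 1.0]]
texts_aligned = [[0.9, 0.1], [0.1, 0.9]]
texts_swapped = [[0.1, 0.9], [0.9, 0.1]]
assert contrastive_matching_loss(ids, texts_aligned) < \
       contrastive_matching_loss(ids, texts_swapped)
```

In practice such a loss is computed over mini-batches with learned encoders for each modality; the pure-Python version above only demonstrates the objective's shape.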