Large Language Models Enhanced Collaborative Filtering
arXiv (2024)
Abstract
Recent advancements in Large Language Models (LLMs) have attracted
considerable interest among researchers to leverage these models to enhance
Recommender Systems (RSs). Existing work predominantly utilizes LLMs to
generate knowledge-rich texts or utilizes LLM-derived embeddings as features to
improve RSs. Although the extensive world knowledge embedded in LLMs
generally benefits RSs, such applications can take only a limited number of
users and items as inputs, without adequately exploiting collaborative filtering
information. Considering its crucial role in RSs, one key challenge in
enhancing RSs with LLMs lies in providing better collaborative filtering
information through LLMs. In this paper, drawing inspiration from the
in-context learning and chain of thought reasoning in LLMs, we propose the
Large Language Models enhanced Collaborative Filtering (LLM-CF) framework,
which distils the world knowledge and reasoning capabilities of LLMs into
collaborative filtering. We also explore a concise and efficient
instruction-tuning method, which improves the recommendation capabilities of
LLMs while preserving their general functionalities (e.g., no performance
degradation on general LLM benchmarks). Comprehensive experiments on three real-world datasets
demonstrate that LLM-CF significantly enhances several backbone recommendation
models and consistently outperforms competitive baselines, showcasing its
effectiveness in distilling the world knowledge and reasoning capabilities of
LLMs into collaborative filtering.