Understanding Language Modeling Paradigm Adaptations in Recommender Systems: Lessons Learned and Open Challenges
arXiv (2024)
Abstract
Large Language Models (LLMs) have achieved tremendous success
in the field of Natural Language Processing, owing to diverse training paradigms
that empower them to effectively capture intricate linguistic patterns and
semantic representations. In particular, the recent "pre-train, prompt and
predict" training paradigm has attracted significant attention as an approach
predict" training paradigm has attracted significant attention as an approach
advancement, these training paradigms have recently been adapted to the
recommendation domain and are seen as a promising direction in both academia
and industry. This half-day tutorial aims to provide a thorough understanding
of extracting and transferring knowledge from pre-trained models learned
through different training paradigms to improve recommender systems from
various perspectives, such as generality, sparsity, effectiveness, and
trustworthiness. In this tutorial, we first introduce the basic concepts and a
generic architecture of the language modeling paradigm for recommendation
purposes. Then, we focus on recent advancements in adapting LLM-related
training strategies and optimization objectives for different recommendation
tasks. After that, we will systematically introduce ethical issues in LLM-based
recommender systems and discuss possible approaches to assessing and mitigating
them. We will also summarize the relevant datasets, evaluation metrics, and an
empirical study on the recommendation performance of training paradigms.
Finally, we will conclude the tutorial with a discussion of open challenges and
future directions.