Take a Fresh Look at Recommender Systems from an Evaluation Standpoint

arXiv (2023)

Cited by 9 | Views 55
Abstract
Recommendation has become a prominent research area in Information Retrieval (IR), and evaluation is a long-standing research topic in this community. Motivated by several counterintuitive observations reported in recent studies, this perspectives paper takes a fresh look at recommender systems from an evaluation standpoint. Rather than examining metrics like recall, hit rate, or NDCG, or perspectives like novelty and diversity, we focus on how these metrics are calculated when evaluating a recommender algorithm. Specifically, the commonly used train/test data splits and their consequences are re-examined. We begin with common data splitting methods, such as random split and leave-one-out, and discuss why the popularity baseline is poorly defined under such splits. We then explore two implications of neglecting the global timeline during evaluation: data leakage and the oversimplification of user preference modeling. Afterwards, we present new perspectives on recommender systems, including techniques for evaluating algorithm performance that more accurately reflect real-world scenarios, and possible approaches to incorporating decision contexts into user preference modeling.
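To make the contrast concrete, below is a minimal Python sketch of the two splitting schemes the abstract discusses, assuming a pandas DataFrame of interactions with hypothetical user_id and timestamp columns; it illustrates the idea only and is not code from the paper.

import pandas as pd

def leave_one_out_split(df: pd.DataFrame):
    # Hold out each user's most recent interaction as the test instance.
    # Because the holdout is per user, one user's test interaction may
    # predate another user's training interactions: the global timeline
    # is ignored.
    df = df.sort_values("timestamp")
    test = df.groupby("user_id").tail(1)   # last interaction per user
    train = df.drop(test.index)            # everything else
    return train, test

def global_timeline_split(df: pd.DataFrame, cutoff):
    # Split at a single cutoff timestamp shared by all users, so no test
    # interaction is earlier than any training interaction.
    train = df[df["timestamp"] < cutoff]
    test = df[df["timestamp"] >= cutoff]
    return train, test

The leakage the paper describes arises under the first scheme: a model trained on the remaining interactions can observe events that occur after some users' test interactions.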
Keywords
Recommendation, global timeline, practical evaluation, user preference modeling