A comparative analysis of offline and online evaluations and discussion of research paper recommender system evaluation

RepSys '13: Proceedings of the International Workshop on Reproducibility and Replication in Recommender Systems Evaluation (2013)

Cited by 184
Abstract
Offline evaluations are the most common evaluation method for research paper recommender systems. However, no thorough discussion of the appropriateness of offline evaluations has taken place, despite some voiced criticism. We conducted a study in which we evaluated various recommendation approaches with both offline and online evaluations. We found that the results of offline and online evaluations often contradict each other. We discuss this finding in detail and conclude that offline evaluations may be inappropriate for evaluating research paper recommender systems in many settings.
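To make the contrast concrete, the sketch below compares a typical offline metric (precision@k against a held-out set of relevant papers) with a typical online metric (click-through rate from a live deployment) for two hypothetical approaches. This is not the paper's methodology or data; the function names, approaches, and all numbers are invented purely to illustrate how the two evaluation modes can rank the same approaches differently.

```python
# Illustrative sketch (not from the paper): offline precision@k vs. online CTR
# for two hypothetical recommendation approaches, A and B.

def precision_at_k(recommended, relevant, k=10):
    """Fraction of the top-k recommended papers found in the user's
    held-out relevant set (offline evaluation)."""
    top_k = recommended[:k]
    hits = sum(1 for paper in top_k if paper in relevant)
    return hits / k

def click_through_rate(clicks, impressions):
    """Clicks divided by impressions from a live experiment (online evaluation)."""
    return clicks / impressions if impressions else 0.0

if __name__ == "__main__":
    # Hypothetical offline data: ranked recommendations and held-out relevant papers.
    relevant = {"p1", "p4", "p7"}
    approach_a = ["p1", "p4", "p2", "p3", "p5", "p6", "p7", "p8", "p9", "p10"]
    approach_b = ["p2", "p3", "p5", "p1", "p6", "p8", "p9", "p10", "p11", "p12"]

    # Hypothetical online data: (clicks, impressions) from a live test.
    online = {"A": (40, 2000), "B": (90, 2000)}

    print("offline A:", precision_at_k(approach_a, relevant))  # 0.3
    print("offline B:", precision_at_k(approach_b, relevant))  # 0.1
    print("online  A:", click_through_rate(*online["A"]))      # 0.02
    print("online  B:", click_through_rate(*online["B"]))      # 0.045
    # Here the offline metric favors A while the online metric favors B --
    # the kind of contradiction the abstract reports, shown with fabricated numbers.
```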