Report on the SIGIR 2013 workshop on benchmarking adaptive retrieval and recommender systems.

ACM SIGIR Forum (2013)

Abstract
In recent years, immense progress has been made in the development of recommendation, retrieval, and personalisation techniques. The evaluation of these systems, however, still relies on traditional information retrieval and statistical metrics, e.g., precision, recall, and/or RMSE, often without taking the use case and situation of the actual system into consideration. The rapid evolution of recommender and adaptive IR systems, in both their goals and their application domains, fosters the need for new evaluation methodologies and environments. With the Workshop on Benchmarking Adaptive Retrieval and Recommender Systems, we aimed to provide a platform for discussions on novel evaluation and benchmarking approaches.