Implicit Preference Labels for Learning Highly Selective Personalized Rankers

ICTIR 2015

Abstract
Interaction data such as clicks and dwells provide valuable signals for learning and evaluating personalized models. However, while models of personalization typically distinguish between clicked and non-clicked results, no preference distinctions are made within the non-clicked results, and all are treated as equally non-relevant. In this paper, we demonstrate that failing to enforce a prior on preferences among non-clicked results leads to learning models that often personalize with no measurable gain, at the risk that the personalized ranking is worse than the non-personalized ranking. To address this, we develop an implicit preference-based framework for learning highly selective rankers that yield large reductions in risk measures such as the percentage of queries personalized. We show theoretically how our framework can be derived from a small number of basic axioms that give rise to well-founded target rankings, which combine a weight on prior preferences with the implicit preferences inferred from behavioral data. Additionally, we conduct an empirical analysis demonstrating that models learned with this approach achieve gains on click-based performance measures comparable to standard methods while personalizing far fewer queries. On three real-world commercial search engine logs, the method leads to substantial reductions in the number of queries re-ranked (2x to 7x fewer queries re-ranked) while maintaining 85-95% of the total gain achieved by the standard approach.
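As a rough illustration of the idea sketched in the abstract (combining a weighted prior preference among non-clicked results with click-inferred preferences), the snippet below builds weighted preference pairs from a single result list. It is a minimal sketch under assumptions not stated in the abstract: the `Result` class, the `preference_pairs` function, the specific pairing scheme, and the `prior_weight` parameter are all hypothetical, not the paper's actual label construction.

```python
# Hypothetical sketch: implicit preference pairs with a prior among non-clicked results.
# Assumed scheme (not from the paper): clicked docs are preferred over non-clicked docs
# with full weight, and among non-clicked docs the original order is preferred with a
# smaller prior weight, so a learned ranker re-ranks only when click evidence is strong.

from dataclasses import dataclass

@dataclass
class Result:
    doc_id: str
    position: int      # original (non-personalized) rank, 1-based
    clicked: bool

def preference_pairs(results, prior_weight=0.5):
    """Return (preferred_doc, other_doc, weight) triples.

    Click-based pairs carry weight 1.0 (implicit preference from behavior);
    prior pairs among non-clicked docs carry `prior_weight`, encoding a bias
    against re-ranking in the absence of behavioral evidence.
    """
    clicked = [r for r in results if r.clicked]
    skipped = [r for r in results if not r.clicked]

    pairs = []
    for c in clicked:
        for s in skipped:
            pairs.append((c.doc_id, s.doc_id, 1.0))
    for i, a in enumerate(skipped):
        for b in skipped[i + 1:]:
            hi, lo = (a, b) if a.position < b.position else (b, a)
            pairs.append((hi.doc_id, lo.doc_id, prior_weight))
    return pairs

if __name__ == "__main__":
    serp = [Result("d1", 1, False), Result("d2", 2, True), Result("d3", 3, False)]
    for pref in preference_pairs(serp, prior_weight=0.5):
        print(pref)
```

With a larger `prior_weight`, the prior pairs dominate and a pairwise learner has little incentive to deviate from the original ranking, which is one plausible way a "highly selective" personalizer could emerge; the paper's actual axiomatic construction should be consulted for the real formulation.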