Learning-To-Rank With Partitioned Preference: Fast Estimation For The Plackett-Luce Model

24th International Conference on Artificial Intelligence and Statistics (AISTATS), 2021

Abstract
We investigate Plackett-Luce (PL) model based listwise learning-to-rank (LTR) on data with partitioned preference, where a set of items is sliced into ordered and disjoint partitions, but the ranking of items within a partition is unknown. Given N items with M partitions, calculating the likelihood of data with partitioned preference under the PL model has time complexity O(N + S!), where S is the maximum size of the top M - 1 partitions. This computational challenge restricts most existing PL-based listwise LTR methods to a special case of partitioned preference, top-K ranking, where the exact order of the top K items is known. In this paper, we exploit a random utility model formulation of the PL model and propose an efficient numerical integration approach for calculating the likelihood and its gradients with time complexity O(N + S^3). We demonstrate that the proposed method outperforms well-known LTR baselines and remains scalable through both simulation experiments and applications to real-world eXtreme Multi-Label classification tasks.
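As an illustration of the random utility view referred to in the abstract, the sketch below computes, under the PL model, the probability that every item in one partition is ranked above every item in the partitions below it, by reducing it to a one-dimensional integral over the Gumbel random-utility representation and evaluating that integral numerically. It also cross-checks the result with Monte Carlo sampling of Gumbel utilities. The function names, example weights, and the use of SciPy's adaptive quadrature are illustrative assumptions; this is a naive sketch of the general idea, not the authors' O(N + S^3) scheme.

```python
import numpy as np
from scipy.integrate import quad


def prob_set_above(w_A, w_B):
    """P(all items with PL weights w_A rank above all items with weights w_B).

    Under the random-utility view of the PL model, item i has utility
    U_i = log w_i + G_i with G_i iid standard Gumbel. The max of the
    utilities in B is Gumbel with location log(sum(w_B)), which yields the
    one-dimensional integral (after the substitution t = exp(-u)):

        P = int_0^inf  W_B * exp(-W_B * t) * prod_i (1 - exp(-w_A[i] * t)) dt,

    approximated here with adaptive quadrature (an illustrative choice).
    """
    w_A = np.asarray(w_A, dtype=float)
    W_B = float(np.sum(w_B))

    def integrand(t):
        return W_B * np.exp(-W_B * t) * np.prod(1.0 - np.exp(-w_A * t))

    val, _ = quad(integrand, 0.0, np.inf)
    return val


def prob_set_above_mc(w_A, w_B, n_samples=200_000, seed=0):
    """Monte Carlo check of the same probability via sampled Gumbel utilities."""
    rng = np.random.default_rng(seed)
    w_A = np.asarray(w_A, dtype=float)
    w_B = np.asarray(w_B, dtype=float)
    u_A = np.log(w_A) + rng.gumbel(size=(n_samples, len(w_A)))
    u_B = np.log(w_B) + rng.gumbel(size=(n_samples, len(w_B)))
    return float(np.mean(u_A.min(axis=1) > u_B.max(axis=1)))


if __name__ == "__main__":
    # Hypothetical PL weights: partition A should precede partition B.
    w_A, w_B = [2.0, 1.0], [0.5, 1.5, 1.0]
    print(prob_set_above(w_A, w_B))      # numerical integration
    print(prob_set_above_mc(w_A, w_B))   # should roughly agree
```

For a full partitioned preference over M partitions, such per-partition terms would be combined across the partition boundaries; the integrand above costs O(|A|) per quadrature node, which is what makes a numerical-integration treatment attractive compared with the O(S!) enumeration of within-partition orderings.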