Parallel Rule-based Selective Sampling and On-Demand Learning to Rank

Concurrency and Computation: Practice and Experience (2019)

Abstract
Learning to rank (L2R) works by constructing a ranking model from training data so that, given a new query, the model is able to generate an effective ranking of the objects for that query. Almost all work in L2R focuses on ranking accuracy, leaving performance and scalability overlooked. However, performance is a critical factor, especially when dealing with on-demand queries. In this scenario, learning to rank using association rules has been shown to be extremely effective, but only at a high computational cost. In this work, we show how to exploit parallelism in rule-based systems to: (i) drastically reduce L2R training datasets using selective sampling, and (ii) generate query-customized ranking models on the fly. We present parallel algorithms and GPU implementations for these two tasks, showing that dataset reduction takes only a few seconds, with speedups of up to 148x over a serial baseline, and that queries can be processed in only a few milliseconds, with speedups of 1000x over a serial baseline and 29x over a parallel baseline in the best case. We also extend the implementations to work with multiple GPUs, further increasing the speedup over the baselines and demonstrating the scalability of our proposed algorithms.
Keywords
association rules,learning to rank,parallelism,selective sampling
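The abstract describes generating query-customized ranking models on the fly from association rules. As a rough illustration of the idea (not the paper's actual algorithm), the sketch below lazily mines simple rules restricted to the feature values that occur in a candidate document and scores the document by the average relevance its rules predict; the dataset, feature names, and the single-feature rule form are all illustrative assumptions.

```python
# Hypothetical sketch of on-demand rule-based ranking; the data,
# feature names, and rule form are illustrative, not the paper's method.
from collections import defaultdict

# Toy training set: each row is (discretized feature values, relevance label).
TRAIN = [
    ({"f1": "hi", "f2": "lo"}, 2),
    ({"f1": "hi", "f2": "hi"}, 1),
    ({"f1": "lo", "f2": "lo"}, 0),
    ({"f1": "hi", "f2": "lo"}, 2),
]

def mine_rules(doc, min_support=2):
    """Lazily mine single-feature rules, restricted on demand to the
    feature values that actually occur in the candidate document."""
    stats = defaultdict(list)  # (feature, value) -> covered relevance labels
    for feats, label in TRAIN:
        for fv in feats.items():
            if doc.get(fv[0]) == fv[1]:
                stats[fv].append(label)
    # Each rule predicts the mean relevance of the training rows it covers.
    return {fv: sum(ls) / len(ls) for fv, ls in stats.items()
            if len(ls) >= min_support}

def score(doc):
    """Score a candidate document by averaging its matching rules."""
    rules = mine_rules(doc)
    return sum(rules.values()) / len(rules) if rules else 0.0

# Rank two candidate documents for a query.
docs = [{"f1": "lo", "f2": "hi"}, {"f1": "hi", "f2": "lo"}]
ranked = sorted(docs, key=score, reverse=True)
```

Because rules are mined per query, the work is independent across candidate documents, which is what makes the GPU parallelization described in the abstract natural.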