Improving on-demand learning to rank through parallelism

WISE (2012)

Citations: 7

Abstract
Traditional Learning to Rank (L2R) is usually conducted in batch mode: a single ranking function is learned and then used to order results for all future queries. This approach is inflexible, since future queries may differ considerably from those in the training set, and the learned function may then perform poorly. Ideally, a distinct ranking function would be learned on demand for each query. However, on-demand L2R can significantly degrade query processing time, because the ranking function must be learned on the fly before it can be applied. In this paper we present a parallel implementation of an on-demand L2R technique that drastically reduces the response time of a previous serial implementation. Our implementation uses thousands of GPU threads to learn a ranking function for each query, and takes advantage of a reduced training set obtained through active learning. Experiments with the LETOR benchmark show that our approach achieves a mean speedup of 127x in query processing time over the sequential version, while producing very competitive ranking effectiveness.
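To make the on-demand idea concrete, here is a minimal illustrative sketch (not the paper's GPU implementation): for each incoming query, a small pointwise linear ranker is fitted on the fly to a reduced training set of labeled feature vectors, and the query's candidate documents are then scored and sorted. All function names, features, and data below are hypothetical.

```python
def fit_linear_ranker(train, lr=0.1, epochs=200):
    """Fit weights w by least-squares gradient descent.
    train: list of (feature_vector, relevance_label) pairs."""
    dim = len(train[0][0])
    w = [0.0] * dim
    n = len(train)
    for _ in range(epochs):
        grad = [0.0] * dim
        for x, y in train:
            err = sum(wi * xi for wi, xi in zip(w, x)) - y
            for i, xi in enumerate(x):
                grad[i] += 2 * err * xi / n
        w = [wi - lr * gi for wi, gi in zip(w, grad)]
    return w

def rank_on_demand(candidates, reduced_train):
    """Learn a ranking function for this query only, then sort the
    query's candidate documents (doc_id, feature_vector) by score."""
    w = fit_linear_ranker(reduced_train)
    score = lambda x: sum(wi * xi for wi, xi in zip(w, x))
    return sorted(candidates, key=lambda d: score(d[1]), reverse=True)

# Hypothetical per-query data: 2 features (e.g. a text-similarity
# score and a link-based score), labels are relevance grades.
reduced_train = [([1.0, 0.2], 2.0), ([0.5, 0.1], 1.0), ([0.1, 0.9], 0.0)]
candidates = [("d1", [0.2, 0.8]), ("d2", [0.9, 0.3]), ("d3", [0.6, 0.2])]
ranking = [doc_id for doc_id, _ in rank_on_demand(candidates, reduced_train)]
```

In the paper's setting, the inner training loop is the part that runs across thousands of GPU threads, and active learning is what keeps `reduced_train` small enough for the fit to finish within the query's latency budget; this sketch shows only the serial logic of learning one function per query.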
Keywords
query processing time, ranking function, single ranking function, previous serial implementation, parallel implementation, competitive ranking effectiveness, on-demand L2R, improving on-demand, distinct learning function, L2R technique, future query, learning to rank