COLTR: Semi-Supervised Learning to Rank With Co-Training and Over-Parameterization for Web Search

IEEE Transactions on Knowledge and Data Engineering (2023)

Abstract
While learning to rank (LTR) is widely used in web search to prioritize the most relevant webpages among the retrieved contents for input queries, traditional LTR models fail to deliver decent performance for two main reasons: 1) the lack of well-annotated query-webpage pairs with ranking scores covering search queries of various popularity, and 2) models trained on such limited samples, which generalize poorly. To improve the performance of LTR models, tremendous efforts have been made on both aspects, such as enlarging training sets with pseudo-labeled ranking scores via self-training, or refining the features used for LTR through feature extraction and dimension reduction. Though these methods improve LTR performance marginally, we believe they can be improved further in the newly-fashioned “interpolating regime”. Specifically, instead of lowering the number of features used by LTR models, our work transforms the original data with random Fourier features, so as to over-parameterize the downstream LTR models (e.g., GBRank or LightGBM) with features of ultra-high dimensionality and achieve superb generalization performance. Furthermore, rather than self-training with pseudo-labels produced by the same LTR model in a “self-tuned” fashion, the proposed method exploits the diversity of predictions between the listwise and pointwise LTR models, co-training the two models with a cyclic labeling-prediction pipeline in a “ping-pong” manner. We deploy the proposed Co-trained and Over-parameterized LTR system, COLTR, at Baidu Search and evaluate it against a large number of baseline methods. The results show that COLTR achieves $\Delta NDCG_{4}$ = 3.64% $\sim$ 4.92% over the baselines under various ratios of labeled samples. We also conduct a 7-day A/B test on the realistic web traffic of Baidu Search, where we still observe a significant performance improvement of around $\Delta NDCG_{4}$ = 0.17% $\sim$ 0.92% in real-world applications. COLTR performs consistently in both offline and online experiments.
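Below is a minimal sketch of the over-parameterization step described in the abstract, assuming an RBF-kernel random Fourier feature map and LightGBM's LGBMRanker as the downstream listwise model; the output dimensionality (4096), gamma, and the toy data are illustrative placeholders, not the paper's actual configuration.

```python
import numpy as np
import lightgbm as lgb

def random_fourier_features(X, out_dim=4096, gamma=1.0, seed=0):
    """Project X (n_samples, n_features) into an ultra-high-dimensional
    random Fourier feature space approximating an RBF kernel."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(X.shape[1], out_dim))
    b = rng.uniform(0.0, 2.0 * np.pi, size=out_dim)
    return np.sqrt(2.0 / out_dim) * np.cos(X @ W + b)

# Toy query-webpage features with graded relevance labels (0-4);
# 10 queries with 100 candidate webpages each.
rng = np.random.default_rng(1)
X = rng.random((1000, 20))
y = rng.integers(0, 5, size=1000)
groups = [100] * 10

# Over-parameterize: 20 raw features become 4096 random Fourier features,
# then a standard listwise LightGBM ranker is trained on top.
Z = random_fourier_features(X)
ranker = lgb.LGBMRanker(objective="lambdarank", n_estimators=200)
ranker.fit(Z, y, group=groups)
```

Mapping 20 raw features into 4096 random features is what pushes the downstream model toward the interpolating regime the abstract refers to; the paper's exact feature map and hyper-parameters may differ.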
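A similarly minimal sketch of the co-training loop, again on toy data: a pointwise learner and a listwise learner alternately pseudo-label the unlabeled pool for each other. The score-to-grade snapping rule, the fixed number of rounds, and the absence of any confidence filtering are simplifying assumptions, not the paper's actual pipeline.

```python
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)

# Toy data: a small labeled set plus a larger unlabeled pool, 100 docs/query.
X_lab, y_lab = rng.random((1000, 20)), rng.integers(0, 5, size=1000)
X_pool = rng.random((3000, 20))
docs_per_query = 100

pointwise = lgb.LGBMRegressor(n_estimators=100)                      # pointwise view
listwise = lgb.LGBMRanker(objective="lambdarank", n_estimators=100)  # listwise view

X_pt, y_pt = X_lab, y_lab  # training set for the pointwise model
X_ls, y_ls = X_lab, y_lab  # training set for the listwise model

for _ in range(3):  # cyclic labeling-prediction "ping-pong" rounds
    pointwise.fit(X_pt, y_pt)
    listwise.fit(X_ls, y_ls,
                 group=[docs_per_query] * (len(y_ls) // docs_per_query))

    # Each model pseudo-labels the unlabeled pool; snapping raw scores to the
    # 0-4 grade scale is a crude stand-in for the paper's labeling rule.
    pseudo_ls = np.clip(np.rint(listwise.predict(X_pool)), 0, 4).astype(int)
    pseudo_pt = np.clip(np.rint(pointwise.predict(X_pool)), 0, 4).astype(int)

    # Cross-feed: each view is retrained on the *other* view's pseudo-labels,
    # injecting the prediction diversity that plain self-training lacks.
    X_pt, y_pt = np.vstack([X_lab, X_pool]), np.concatenate([y_lab, pseudo_ls])
    X_ls, y_ls = np.vstack([X_lab, X_pool]), np.concatenate([y_lab, pseudo_pt])
```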
Keywords
Learning to rank, semi-supervised learning, over-parameterization