Scalable Dyadic Kernel Machines

msra

Abstract
In the dyadic data prediction (DDP) problem, we observe labeled pairs (dyads) drawn from a finite Cartesian product M × U and form predictions for the labels of unseen dyads. This results in a sparse, non-linear prediction problem, for which kernel machines, like the Support Vector Machine, are well suited. However, the release of the 100 million dyad Netflix dataset has brought the issue of DDP scalability to the forefront. Most kernel-machine solvers scale superlinearly in the number of data examples, making large-scale DDP infeasible. In this work, we explore techniques for enhancing the scalability of kernel machines for DDP. En route, we develop two natural reformulations of the kernel machine framework, designed to reflect and exploit the underlying structure of the DDP problem.
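A minimal sketch of the problem setup, not the paper's own formulation: one common way to give a kernel machine access to the Cartesian-product structure of dyads is a product kernel, K((m,u),(m',u')) = K_M(m,m') · K_U(u,u'), whose Gram matrix over all of M × U is a Kronecker product. The toy features, kernel choice, and ridge-regression solver below are all illustrative assumptions (an SVM solver would consume the same Gram matrix).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data (not from the paper): latent feature vectors for
# 4 items in M and 5 items in U; a dyad is one (m, u) pair, 20 in total.
M = rng.normal(size=(4, 3))
U = rng.normal(size=(5, 3))

def rbf(X, Y, gamma=0.5):
    """RBF kernel matrix between the rows of X and the rows of Y."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

# Product kernel over dyads: its full Gram matrix over M x U is the
# Kronecker product of the two per-set Gram matrices.
K_M = rbf(M, M)
K_U = rbf(U, U)
K_dyad = np.kron(K_M, K_U)          # 20 x 20 Gram matrix over all dyads

# Observe labels for a sparse subset of dyads and predict the rest via
# kernel ridge regression (a stand-in for an SVM solver, chosen only to
# show the dyadic Gram matrix being used for prediction).
obs = np.array([0, 3, 7, 12, 18])   # indices of observed dyads
y = rng.normal(size=obs.size)       # their synthetic labels
lam = 0.1
alpha = np.linalg.solve(K_dyad[np.ix_(obs, obs)] + lam * np.eye(obs.size), y)
preds = K_dyad[:, obs] @ alpha      # predictions for all 20 dyads
```

Because the dyadic Gram matrix factors as a Kronecker product, solvers can in principle exploit that structure rather than materialize the full n × n matrix, which is the scalability angle the abstract alludes to.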