Large-scale Nyström kernel matrix approximation using randomized SVD.

IEEE Trans. Neural Netw. Learn. Syst. (2015)

Citations 129 | Views 90
Abstract
The Nyström method is an efficient technique for the eigenvalue decomposition of large kernel matrices. However, to ensure an accurate approximation, a sufficient number of columns have to be sampled. On very large data sets, the singular value decomposition (SVD) step on the resultant data submatrix can quickly dominate the computations and become prohibitive. In this paper, we propose an accurate and scalable Nyström scheme that first samples a large column subset from the input matrix, but then only performs an approximate SVD on the inner submatrix using recent randomized low-rank matrix approximation algorithms. Theoretical analysis shows that the proposed algorithm is as accurate as the standard Nyström method that directly performs a large SVD on the inner submatrix, while its time complexity is as low as that of performing a small SVD. Encouraging results are obtained on a number of large-scale data sets for low-rank approximation. Moreover, as the most computationally expensive steps can be easily distributed and there is minimal data transfer among the processors, significant further speedup can be obtained on multiprocessor and multi-GPU systems.
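To make the scheme concrete, below is a minimal NumPy sketch, not the authors' implementation: it pairs uniform column sampling with a randomized SVD (in the Halko–Martinsson–Tropp style) on the m x m inner submatrix. The function names, the oversampling parameter p, and the uniform sampling strategy are illustrative assumptions; the abstract does not specify these details.

```python
import numpy as np

def randomized_svd(W, k, p=10, rng=None):
    """Approximate rank-k SVD of W via random projection (Halko et al. style sketch)."""
    if rng is None:
        rng = np.random.default_rng(0)
    Omega = rng.standard_normal((W.shape[1], k + p))   # Gaussian test matrix, p = oversampling
    Q, _ = np.linalg.qr(W @ Omega)                     # orthonormal basis for the sampled range
    B = Q.T @ W                                        # small (k+p) x m projection of W
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)  # exact SVD, but only on the small matrix
    return (Q @ Ub)[:, :k], s[:k], Vt[:k, :]

def nystrom_randsvd(K, m, k, rng=None):
    """Rank-k Nystrom factor L with K ~= L @ L.T, replacing the exact SVD of the
    m x m inner submatrix with a randomized one (illustrative, hypothetical helper)."""
    if rng is None:
        rng = np.random.default_rng(0)
    idx = rng.choice(K.shape[0], size=m, replace=False)  # uniform column sampling (one option)
    C = K[:, idx]                                        # n x m block of sampled columns
    W = C[idx, :]                                        # m x m inner submatrix (symmetric PSD)
    U, s, _ = randomized_svd(W, k, rng=rng)              # for PSD W, W ~= U diag(s) U.T
    s = np.maximum(s, 1e-12)                             # guard against near-zero singular values
    # Nystrom extension: K ~= C W_k^+ C.T = (C U s^{-1/2})(C U s^{-1/2}).T
    return C @ (U / np.sqrt(s))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((2000, 10))
    G = X @ X.T
    sq = np.diag(G)[:, None] + np.diag(G)[None, :] - 2.0 * G
    K = np.exp(-sq / X.shape[1])                         # RBF kernel matrix, 2000 x 2000
    L = nystrom_randsvd(K, m=400, k=50, rng=rng)
    print("relative error:", np.linalg.norm(K - L @ L.T) / np.linalg.norm(K))
```

Under these assumptions, the sketch reflects the abstract's complexity claim: the randomized SVD on the inner submatrix costs roughly O(m^2 (k + p)) instead of the O(m^3) of an exact SVD, and the column-sampling and matrix-multiply steps parallelize naturally.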
Keywords
large-scale learning, low-rank matrix approximation, Nyström method, randomized SVD, kernel matrix approximation, large kernel matrices, inner submatrix, eigenvalue decomposition, singular value decomposition, eigenvalues and eigenfunctions, matrix algebra, distributed computing, multiprocessor, multi-GPU systems, graphics processing units, operating system kernels