OSNAP: Faster Numerical Linear Algebra Algorithms via Sparser Subspace Embeddings

IEEE Symposium on Foundations of Computer Science (FOCS), 2013

Abstract
An oblivious subspace embedding (OSE), given parameters ε, d, is a distribution D over matrices Π ∈ ℝ^{m×n} such that for any linear subspace W ⊆ ℝ^n with dim(W) = d, Pr_{Π∼D}[∀x ∈ W: ‖Πx‖_2 ∈ (1 ± ε)‖x‖_2] ≥ 2/3. We show that a certain class of distributions, Oblivious Sparse Norm-Approximating Projections (OSNAPs), provides OSEs with m = O(d^{1+γ}/ε²), where every matrix Π in the support of the OSE has only s = O_γ(1/ε) non-zero entries per column, for any desired constant γ > 0. Plugging OSNAPs into known algorithms for approximate least squares regression, ℓ_p regression, low rank approximation, and approximating leverage scores yields faster algorithms for all these problems. Our main result is essentially a Bai-Yin type theorem in random matrix theory and is likely to be of independent interest: we show that for any fixed U ∈ ℝ^{n×d} with orthonormal columns and a random sparse Π, all singular values of ΠU lie in [1 − ε, 1 + ε] with good probability. This can be seen as a generalization of the sparse Johnson-Lindenstrauss lemma, which concerned the case d = 1. Our methods also recover a slightly sharper version of a main result of [Clarkson-Woodruff, STOC 2013], with a much simpler proof: we show that OSNAPs give an OSE with m = O(d²/ε²), s = 1.
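To make the s = 1 regime concrete, below is a minimal NumPy sketch (not the authors' code) of a CountSketch-style embedding: each column of Π has a single ±1 entry in a uniformly random row, and we verify empirically that the singular values of ΠU fall in [1 − ε, 1 + ε]. The parameter choices n, d, ε are illustrative assumptions, not values from the paper.

import numpy as np

rng = np.random.default_rng(0)

n, d, eps = 10_000, 10, 0.25
m = int(d**2 / eps**2)  # m = O(d^2 / eps^2) rows in the s = 1 regime

# A random d-dimensional subspace of R^n, represented by an orthonormal basis U.
U, _ = np.linalg.qr(rng.standard_normal((n, d)))

# Apply Pi to U without materializing Pi: each of the n input coordinates
# (columns of Pi) gets exactly one nonzero, a random sign in a random row,
# so row U[i] is added with sign signs[i] into output row rows[i].
rows = rng.integers(0, m, size=n)
signs = rng.choice([-1.0, 1.0], size=n)
PiU = np.zeros((m, d))
np.add.at(PiU, rows, signs[:, None] * U)  # unbuffered add handles row collisions

# Pi embeds the subspace col(U) iff all singular values of Pi @ U
# lie in [1 - eps, 1 + eps]; the theorem guarantees this with probability >= 2/3.
sv = np.linalg.svd(PiU, compute_uv=False)
print(f"singular values of Pi U lie in [{sv.min():.3f}, {sv.max():.3f}]")

Because Π has one nonzero per column, Πx can be computed in time proportional to the number of nonzeros of x, which is the source of the speedups for the regression and low-rank approximation applications above.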
Keywords
sparse Johnson-Lindenstrauss lemma, ℓp regression, oblivious subspace embedding, linear subspace, random matrix theory, sparser subspace embeddings, faster numerical linear algebra, linear algebra, regression analysis, computational complexity, probability, approximation theory