Optimal Average-Case Reductions to Sparse PCA: From Weak Assumptions to Strong Hardness.

COLT (2019)

Cited 46 | Views 41
Abstract
In the past decade, sparse principal component analysis has emerged as an archetypal problem for illustrating statistical-computational tradeoffs. This trend has largely been driven by a line of research aiming to characterize the average-case complexity of sparse PCA through reductions from the planted clique (PC) conjecture, which posits that there is no polynomial-time algorithm to detect a planted clique of size $K = o(N^{1/2})$ in $\mathcal{G}(N, \frac{1}{2})$. All previous reductions to sparse PCA either fail to show tight computational lower bounds matching existing algorithms or show lower bounds for formulations of sparse PCA other than its canonical generative model, the spiked covariance model. Moreover, these lower bounds all quickly degrade with the exponent in the PC conjecture. Specifically, when only given the PC conjecture up to $K = o(N^\alpha)$ where $\alpha < 1/2$, there is no sparsity level $k$ at which these lower bounds remain tight. If $\alpha \le 1/3$, these reductions fail to show even the existence of a statistical-computational tradeoff at any sparsity $k$. We give a reduction from PC that yields the first full characterization of the computational barrier in the spiked covariance model, providing tight lower bounds at all sparsities $k$. We also show the surprising result that weaker forms of the PC conjecture up to clique size $K = o(N^\alpha)$ for any given $\alpha \in (0, 1/2]$ imply tight computational lower bounds for sparse PCA at sparsities $k = o(n^{\alpha/3})$. This shows that even a mild improvement in the signal strength needed by the best known polynomial-time sparse PCA algorithms would imply that the hardness threshold for PC is subpolynomial. This is the first instance of a suboptimal hardness assumption implying optimal lower bounds for another problem in unsupervised learning.
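The spiked covariance model mentioned in the abstract can be sketched concretely: samples are drawn from $\mathcal{N}(0, I_d + \theta vv^\top)$ for a $k$-sparse unit spike $v$. The sampler below is a minimal illustrative sketch, not code from the paper; the function name, parameter names, and the particular choice of a uniform-support, equal-magnitude spike are assumptions made for illustration.

```python
import numpy as np

def sample_spiked_covariance(n, d, k, theta, seed=None):
    """Illustrative sampler for the spiked covariance model:
    n i.i.d. samples from N(0, I_d + theta * v v^T), where v is a
    k-sparse unit vector (here: random support, equal-magnitude entries).
    Names and conventions are illustrative, not taken from the paper."""
    rng = np.random.default_rng(seed)
    support = rng.choice(d, size=k, replace=False)
    v = np.zeros(d)
    v[support] = 1.0 / np.sqrt(k)          # k-sparse unit-norm spike
    cov = np.eye(d) + theta * np.outer(v, v)
    X = rng.multivariate_normal(np.zeros(d), cov, size=n)
    return X, v

# Example: 200 samples in dimension 50 with a 5-sparse spike.
X, v = sample_spiked_covariance(n=200, d=50, k=5, theta=2.0, seed=0)
```

The detection problem the lower bounds concern is distinguishing such samples ($\theta > 0$) from isotropic noise ($\theta = 0$); the tradeoff is between the sparsity $k$ and the signal strength $\theta$ required by polynomial-time algorithms.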