Sparse Linear Regression and Lattice Problems
CoRR (2024)
Abstract
Sparse linear regression (SLR) is a well-studied problem in statistics where
one is given a design matrix X ∈ ℝ^{m×n} and a response vector
y = Xθ^* + w for a k-sparse vector θ^* (that is,
‖θ^*‖_0 ≤ k) and small, arbitrary noise w, and the goal is to find
a k-sparse θ ∈ ℝ^n that minimizes the mean
squared prediction error (1/m)‖Xθ − Xθ^*‖_2^2.
While ℓ_1-relaxation methods such as basis pursuit, Lasso, and the Dantzig
selector solve SLR when the design matrix is well-conditioned, no general
algorithm is known, nor is there any formal evidence of hardness in an
average-case setting with respect to all efficient algorithms.
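The setup above is concrete enough to sketch in code. The following is a minimal, illustrative sketch (not from the paper) of an SLR instance with an (essentially) isotropic Gaussian design, solved with Lasso as one of the ℓ_1-relaxation methods named above; all dimensions, the noise level, and the regularization strength are arbitrary assumptions.

```python
# Minimal SLR sketch: well-conditioned isotropic Gaussian design, k-sparse signal,
# small noise, Lasso as a stand-in for the l1-relaxation methods named above.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
m, n, k = 200, 500, 5                         # illustrative sizes, not from the paper
X = rng.standard_normal((m, n))               # (essentially) isotropic Gaussian design
theta_star = np.zeros(n)
theta_star[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)  # k-sparse signal
w = 0.01 * rng.standard_normal(m)             # small noise
y = X @ theta_star + w

# The regularization strength alpha is an arbitrary illustrative choice.
theta_hat = Lasso(alpha=0.01, max_iter=10_000).fit(X, y).coef_

# Mean squared prediction error (1/m)||X theta_hat - X theta_star||_2^2
mspe = np.mean((X @ (theta_hat - theta_star)) ** 2)
print(f"prediction error: {mspe:.3e}, nonzeros in theta_hat: {np.count_nonzero(theta_hat)}")
```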
We give evidence of average-case hardness of SLR w.r.t. all efficient
algorithms assuming the worst-case hardness of lattice problems. Specifically,
we give an instance-by-instance reduction from a variant of the bounded
distance decoding (BDD) problem on lattices to SLR, where the condition number
of the lattice basis that defines the BDD instance is directly related to the
restricted eigenvalue condition of the design matrix, which characterizes some
of the classical statistical-computational gaps for sparse linear regression.
Also, by appealing to worst-case to average-case reductions from the world of
lattices, this shows hardness for a distribution of SLR instances; while the
design matrices are ill-conditioned, the resulting SLR instances are in the
identifiable regime.
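For intuition about the lattice side, here is a toy illustration (not the paper's reduction) of a bounded distance decoding instance: a target unusually close to the lattice generated by a basis B. The dimension, error scale, and use of simple Babai-style rounding are illustrative assumptions; rounding succeeds here only because the random basis is well-conditioned, whereas the reduction concerns ill-conditioned bases whose condition number maps to the restricted eigenvalue condition of the design matrix.

```python
# Toy BDD instance: t = B z + e with a very small error e, so t lies close to the lattice.
import numpy as np

rng = np.random.default_rng(1)
d = 8                                   # illustrative lattice dimension
B = rng.standard_normal((d, d))         # random (here well-conditioned) basis
z = rng.integers(-5, 6, size=d)         # hidden integer coefficient vector
e = 1e-4 * rng.standard_normal(d)       # bounded error, tiny relative to the lattice
t = B @ z + e                           # BDD target

# Babai-style round-off recovers z when B is well-conditioned; no such shortcut is
# believed to exist for the hard (ill-conditioned) BDD instances used in the reduction.
z_hat = np.rint(np.linalg.solve(B, t)).astype(int)
print("recovered:", np.array_equal(z_hat, z), " cond(B) =", np.linalg.cond(B))
```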
Furthermore, for well-conditioned (essentially) isotropic Gaussian design
matrices, where Lasso is known to behave well in the identifiable regime, we
show hardness of outputting any good solution in the unidentifiable regime
where there are many solutions, assuming the worst-case hardness of standard
and well-studied lattice problems.
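As a toy illustration of the non-uniqueness behind unidentifiability (under the simplifying assumption k > m, which is not the paper's exact parameter regime), any size-k support of an isotropic Gaussian design can fit the responses exactly, so many different k-sparse vectors achieve essentially zero prediction error.

```python
# Toy unidentifiable regime: with k > m, every size-k column subset of X spans R^m,
# so disjoint supports yield distinct k-sparse solutions with (near-)zero prediction error.
import numpy as np

rng = np.random.default_rng(2)
m, n, k = 4, 50, 5                      # illustrative sizes with k > m
X = rng.standard_normal((m, n))
theta_star = np.zeros(n)
theta_star[:k] = 1.0
y = X @ theta_star                      # noiseless responses for clarity

for support in ([0, 1, 2, 3, 4], [10, 20, 30, 40, 49]):   # two disjoint supports
    coef, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
    theta = np.zeros(n)
    theta[support] = coef
    err = np.mean((X @ theta - X @ theta_star) ** 2)
    print(f"support {support}: prediction error {err:.2e}")   # both are ~0
```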