Active Regression via Linear-Sample Sparsification

Conference on Learning Theory (2017)

Abstract
We present an approach that improves the sample complexity for a variety of curve fitting problems, including active learning for linear regression, polynomial regression, and continuous sparse Fourier transforms. In the active linear regression problem, one would like to estimate the least squares solution β^* minimizing ‖Xβ − y‖_2 given the entire unlabeled dataset X ∈ ℝ^{n × d} but only observing a small number of labels y_i. We show that O(d) labels suffice to find a constant-factor approximation β̃: 𝔼[‖Xβ̃ − y‖_2^2] ≤ 2 𝔼[‖Xβ^* − y‖_2^2]. This improves on the best previous result of O(d log d) from leverage score sampling. We also present results for the inductive setting, showing when β̃ will generalize to fresh samples; these apply to continuous settings such as polynomial regression. Finally, we show how the techniques yield improved results for the non-linear sparse Fourier transform setting.
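As context for the result, the O(d log d) baseline the abstract mentions is leverage score sampling: observe labels for rows drawn proportionally to their leverage scores, reweight, and solve the subsampled least-squares problem. The following is a minimal NumPy sketch of that baseline, not the paper's O(d) method; all data sizes, the sample budget m, and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic unlabeled design matrix X and (hidden) labels y.
n, d = 2000, 5
X = rng.standard_normal((n, d))
beta_true = rng.standard_normal(d)
y = X @ beta_true + 0.1 * rng.standard_normal(n)

# Leverage scores: squared row norms of an orthonormal basis for col(X).
Q, _ = np.linalg.qr(X)
lev = np.sum(Q**2, axis=1)      # nonnegative, sums to d
p = lev / lev.sum()             # sampling distribution over rows

# Observe only m labels, sampled proportionally to leverage (m ~ d log d).
m = 200
idx = rng.choice(n, size=m, replace=True, p=p)
w = 1.0 / np.sqrt(m * p[idx])   # reweight so the subsampled loss is unbiased

# Solve the reweighted subsampled least-squares problem.
beta_hat, *_ = np.linalg.lstsq(w[:, None] * X[idx], w * y[idx], rcond=None)

# Compare its residual to the full least-squares optimum beta_star.
beta_star, *_ = np.linalg.lstsq(X, y, rcond=None)
err_hat = np.linalg.norm(X @ beta_hat - y) ** 2
err_star = np.linalg.norm(X @ beta_star - y) ** 2
```

With this budget the subsampled solution β̂ typically satisfies the constant-factor guarantee ‖Xβ̂ − y‖_2^2 ≤ 2 ‖Xβ^* − y‖_2^2 with high probability; the paper's contribution is achieving such a guarantee with only O(d) observed labels.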