Spectral Clustering Oracles in Sublinear Time

SODA 2021

Abstract
Given a graph $G$ that can be partitioned into $k$ disjoint expanders with outer conductance upper bounded by $\epsilon\ll 1$, can we efficiently construct a small space data structure that allows quickly classifying vertices of $G$ according to the expander (cluster) they belong to? Formally, we would like an efficient local computation algorithm that misclassifies at most an $O(\epsilon)$ fraction of vertices in every expander. We refer to such a data structure as a \textit{spectral clustering oracle}. Our main result is a spectral clustering oracle with query time $O^*(n^{1/2+O(\epsilon)})$ and preprocessing time $2^{O(\frac{1}{\epsilon} k^4 \log^2(k))} n^{1/2+O(\epsilon)}$ that provides misclassification error $O(\epsilon \log k)$ per cluster for any $\epsilon \ll 1/\log k$. More generally, query time can be reduced at the expense of increasing the preprocessing time appropriately (as long as the product is about $n^{1+O(\epsilon)}$) -- this in particular gives a nearly linear time spectral clustering primitive. The main technical contribution is a sublinear time oracle that provides dot product access to the spectral embedding of $G$ by estimating distributions of short random walks from vertices in $G$. The distributions themselves provide a poor approximation to the spectral embedding, but we show that an appropriate linear transformation can be used to achieve high precision dot product access. We then show that dot product access to the spectral embedding is sufficient to design a clustering oracle. At a high level our approach amounts to hyperplane partitioning in the spectral embedding of $G$, but crucially operates on a nested sequence of carefully defined subspaces in the spectral embedding to achieve per cluster recovery guarantees.
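The key primitive described above is sublinear-time estimation of dot products between short random walk distributions, which serve as a proxy for dot products in the spectral embedding. As a toy illustration only (not the paper's algorithm, which additionally applies a carefully chosen linear transformation to the walk distributions to achieve high-precision dot product access), here is a minimal Python sketch of the standard collision-based estimator; the function names and example graph are hypothetical.

```python
import numpy as np
from collections import Counter

def walk_endpoint(adj, start, length, rng):
    """Return the endpoint of one lazy random walk of the given length.
    adj maps each vertex to a list of its neighbors."""
    v = start
    for _ in range(length):
        if rng.random() < 0.5:          # lazy step: stay put with prob. 1/2
            continue
        v = rng.choice(adj[v])          # otherwise move to a uniform neighbor
    return int(v)

def estimate_dot_product(adj, u, v, length, num_walks, rng):
    """Estimate <m_u, m_v>, the inner product of the endpoint distributions
    of length-`length` lazy walks from u and from v, by counting collisions
    between two independent batches of sampled endpoints."""
    ends_u = [walk_endpoint(adj, u, length, rng) for _ in range(num_walks)]
    counts_v = Counter(walk_endpoint(adj, v, length, rng)
                       for _ in range(num_walks))
    # E[1{endpoint_u == endpoint_v}] = sum_w m_u(w) * m_v(w) = <m_u, m_v>,
    # so averaging collisions over all num_walks^2 pairs is unbiased.
    collisions = sum(counts_v[w] for w in ends_u)
    return collisions / (num_walks ** 2)

# Hypothetical example: two triangles joined by one edge (a 2-expander graph).
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1],
       3: [1, 4, 5], 4: [3, 5], 5: [3, 4]}
rng = np.random.default_rng(0)
same = estimate_dot_product(adj, 0, 2, length=6, num_walks=500, rng=rng)
cross = estimate_dot_product(adj, 0, 5, length=6, num_walks=500, rng=rng)
print(same, cross)   # within-cluster dot product tends to exceed cross-cluster
```

As the paper notes, the raw walk distributions approximate the spectral embedding poorly; the contribution is that a suitable linear transformation of estimates like these yields high-precision dot product access, after which clustering reduces to hyperplane partitioning over a nested sequence of subspaces.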