Deterministic Clustering in High Dimensional Spaces: Sketches and Approximation

2023 IEEE 64th Annual Symposium on Foundations of Computer Science (FOCS 2023)

Abstract
In all state-of-the-art sketching and coreset techniques for clustering, as well as in the best known fixed-parameter tractable approximation algorithms, randomness plays a key role. For the classic k-median and k-means problems, there is no known deterministic dimensionality reduction procedure or coreset construction that avoids an exponential dependency on the input dimension d, the precision parameter ε^{-1}, or k. Furthermore, there is no coreset construction that succeeds with probability 1 - 1/n and whose size does not depend on the number of input points, n. This has led researchers in the area to ask what the power of randomness is for clustering sketches [Feldman, WIREs Data Mining Knowl. Discov. '20]. Similarly, the best approximation ratios achievable deterministically without a complexity exponential in the dimension are 1 + √2 for k-median [Cohen-Addad, Esfandiari, Mirrokni, Narayanan, STOC '22] and 6.12903 for k-means [Grandoni, Ostrovsky, Rabani, Schulman, Venkat, Inf. Process. Lett. '22]. These are the best known results even when allowing a complexity FPT in the number of clusters k, which stands in sharp contrast with the (1+ε)-approximation achievable in that case when randomization is allowed.

In this paper, we provide deterministic sketch constructions for clustering whose size bounds are close to the best-known randomized ones. We show how to compute a dimension reduction onto ε^{-O(1)} log k dimensions in time k^{O(ε^{-O(1)} + log log k)} · poly(nd), and how to build a coreset of size O(k^2 log^3 k · ε^{-O(1)}) in time 2^{ε^{-O(1)} k log^3 k} + k^{O(ε^{-O(1)} + log log k)} · poly(nd). In the case where k is small, this answers an open question of [Feldman, WIDM '20] and [Munteanu and Schwiegelshohn, Künstliche Intell. '18] on whether it is possible to efficiently compute coresets deterministically.

We also construct a deterministic algorithm for computing a (1+ε)-approximation to k-median and k-means in high-dimensional Euclidean spaces in time 2^{k^2 log^3 k / ε^{O(1)}} · poly(nd), close to the best randomized complexity of 2^{(k/ε)^{O(1)}} nd (see [Kumar, Sabharwal, Sen, JACM '10] and [Bhattacharya, Jaiswal, Kumar, TCS '18]). Furthermore, our new insights on sketches also yield a randomized coreset construction that uses uniform sampling and immediately improves over the recent results of [Braverman et al., FOCS '22] by a factor of k.
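For context, the coreset guarantee referred to throughout the abstract is the standard one from the clustering literature: a small weighted point set whose cost approximates that of the full input for every candidate set of k centers. The LaTeX block below states this usual definition (z = 1 for k-median, z = 2 for k-means); it describes the generic notion, not the specific construction of this paper.

% Standard eps-coreset guarantee for (k, z)-clustering (z = 1: k-median, z = 2: k-means).
% A weighted set (Omega, w) is an eps-coreset for the input P if, for every set C of k centers,
\[
  \Bigl|\, \sum_{q \in \Omega} w(q)\,\mathrm{dist}(q, C)^{z}
  \;-\; \sum_{p \in P} \mathrm{dist}(p, C)^{z} \,\Bigr|
  \;\le\; \varepsilon \sum_{p \in P} \mathrm{dist}(p, C)^{z}.
\]

To make the sketch-and-solve pipeline concrete, the Python sketch below chains a randomized Johnson-Lindenstrauss projection, a uniform-sampling coreset, and weighted Lloyd iterations on the resulting small weighted set. This is only the standard randomized baseline that the paper's results derandomize or improve; the target dimension, sample size, and the use of Lloyd's heuristic are illustrative choices, not the paper's algorithm.

import numpy as np

def jl_project(X, target_dim, rng):
    """Randomized Johnson-Lindenstrauss projection onto target_dim dimensions."""
    d = X.shape[1]
    G = rng.normal(size=(d, target_dim)) / np.sqrt(target_dim)
    return X @ G

def uniform_coreset(X, m, rng):
    """Uniformly sample m points; each carries weight n/m (illustrative, not the paper's coreset)."""
    n = X.shape[0]
    idx = rng.choice(n, size=m, replace=False)
    return X[idx], np.full(m, n / m)

def weighted_kmeans(P, w, k, rng, iters=50):
    """Weighted Lloyd's heuristic run on the small weighted set."""
    centers = P[rng.choice(P.shape[0], size=k, replace=False)]
    for _ in range(iters):
        # assign each coreset point to its nearest center
        dist2 = ((P[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dist2.argmin(axis=1)
        # recompute each center as the weighted mean of its assigned points
        for j in range(k):
            mask = labels == j
            if mask.any():
                centers[j] = np.average(P[mask], axis=0, weights=w[mask])
    return centers

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 200))          # n points in dimension d (synthetic data)
k = 10                                      # number of clusters
Y = jl_project(X, target_dim=40, rng=rng)   # 40 stands in for an eps^{-O(1)} log k target dimension
C, w = uniform_coreset(Y, m=500, rng=rng)   # 500 stands in for the coreset size
centers = weighted_kmeans(C, w, k, rng)
print(centers.shape)                        # (k, 40)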
Keywords
Clustering, Coreset, Sketch, Approximation algorithm, FPT, Deterministic