Universal Algorithms for Clustering Problems.

ACM Trans. Algorithms (2023)

Abstract
This article presents universal algorithms for clustering problems, including the widely studied k-median, k-means, and k-center objectives. The input is a metric space containing all potential client locations. The algorithm must select k cluster centers such that they are a good solution for any subset of clients that actually realize. Specifically, we aim for low regret, defined as the maximum over all subsets of the difference between the cost of the algorithm's solution and that of an optimal solution. A universal algorithm's solution Sol for a clustering problem is said to be an (α, β)-approximation if for all subsets of clients C′ it satisfies sol(C′) ≤ α · opt(C′) + β · mr, where opt(C′) is the cost of the optimal solution for clients C′ and mr is the minimum regret achievable by any solution. Our main results are universal algorithms for the standard clustering objectives of k-median, k-means, and k-center that achieve (O(1), O(1))-approximations. These results are obtained via a novel framework for universal algorithms using linear programming (LP) relaxations. These results generalize to other ℓ_p-objectives and to the setting where some subset of the clients is fixed. We also give hardness results showing that (α, β)-approximation is NP-hard if α or β is at most a certain constant, even for the widely studied special case of Euclidean metric spaces. This shows that, in some sense, (O(1), O(1))-approximation is the strongest type of guarantee obtainable for universal clustering.
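To make the regret and (α, β)-approximation definitions concrete, below is a minimal brute-force sketch on a hypothetical one-dimensional k-median instance. It is purely illustrative and is not the paper's LP-based algorithm; the point set, the value of k, and all function names are assumptions for the example.

```python
# Illustrative only: brute-force evaluation of the regret and minimum-regret (mr)
# definitions on a toy 1-D k-median instance (distance |x - y|).
from itertools import combinations

points = [0.0, 1.0, 2.0, 10.0, 11.0]  # all potential client locations
k = 2                                  # number of centers to open

def cost(centers, clients):
    # k-median cost: each realized client pays distance to its nearest center.
    return sum(min(abs(c - f) for f in centers) for c in clients)

def opt(clients):
    # Optimal k-median cost for this fixed client subset (exhaustive search).
    return min(cost(S, clients) for S in combinations(points, k))

# Every nonempty subset of clients that could realize.
client_subsets = [list(s) for r in range(1, len(points) + 1)
                  for s in combinations(points, r)]

def regret(centers):
    # Worst-case gap between this solution's cost and the optimum, over all subsets.
    return max(cost(centers, C) - opt(C) for C in client_subsets)

# Minimum regret achievable by any choice of k centers, and a solution attaining it.
mr, best = min((regret(S), S) for S in combinations(points, k))
print("minimum regret:", mr, "achieved by centers", best)
```

A solution Sol is then an (α, β)-approximation exactly when cost(Sol, C′) ≤ α · opt(C′) + β · mr holds for every client subset C′ enumerated above.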
Keywords
Universal algorithms, clustering