Bayesian clustering of high-dimensional data

arXiv (Cornell University), 2020

Abstract
In many applications, it is of interest to cluster subjects based on very high-dimensional data. Although Bayesian discrete mixture models are often successful at model-based clustering, we demonstrate pitfalls in high-dimensional settings. The first key problem is a tendency for posterior sampling algorithms based on Markov chain Monte Carlo to produce a very large number of clusters that slowly decreases as sampling proceeds, indicating serious mixing problems. The second key problem is that the true posterior also has aberrant behavior but potentially in the opposite direction. In particular, we show that, for diverging dimension and fixed sample size, the true posterior either assigns each observation to a different cluster or all observations to the same cluster, depending on the kernels and prior specification. We propose a general strategy for solving these problems by basing clustering on a discrete mixture model for a low-dimensional latent variable. We refer to this class of methods as LAtent Mixtures for Bayesian (Lamb) clustering. Theoretical support is provided, and we illustrate substantial gains relative to clustering on the observed data level in simulation studies. The methods are motivated by an application to clustering of single cell RNAseq data, with the clusters corresponding to different cell types.
Keywords
Bayesian clustering, high-dimensional data
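
The following is a minimal sketch of the general strategy the abstract describes: estimate a low-dimensional latent variable first, then base clustering on a Bayesian mixture model over those latent scores rather than on the raw high-dimensional observations. It is not the paper's Lamb sampler; the use of factor analysis, scikit-learn's BayesianGaussianMixture with a Dirichlet-process prior, and all variable names and dimensions are illustrative assumptions.

```python
# Sketch: cluster on a low-dimensional latent representation instead of the raw
# high-dimensional data. Factor analysis + a truncated Dirichlet-process mixture
# are stand-ins for the paper's latent mixture model (Lamb), not its actual method.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)

# Hypothetical toy data: n = 100 subjects, p = 2000 features,
# generated from 3 clusters living in a d = 5 dimensional latent space.
n, p, d = 100, 2000, 5
true_labels = rng.integers(0, 3, size=n)
latent_means = rng.normal(scale=3.0, size=(3, d))
eta = latent_means[true_labels] + rng.normal(size=(n, d))   # latent factors
loadings = rng.normal(size=(d, p))
X = eta @ loadings + rng.normal(size=(n, p))                # observed data

# Step 1: estimate low-dimensional latent scores from the observed data.
fa = FactorAnalysis(n_components=d, random_state=0)
latent = fa.fit_transform(X)

# Step 2: Bayesian mixture on the latent scores with a Dirichlet-process-style
# prior on the weights; n_components is only a truncation level, so unneeded
# clusters receive negligible weight.
bgm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    random_state=0,
)
labels = bgm.fit_predict(latent)

print("clusters actually used:", len(np.unique(labels)))
```

Clustering in the latent space is the point of the design: the mixture kernel only has to be adequate in a few dimensions, which is meant to avoid the degenerate high-dimensional behavior noted in the abstract, where the posterior collapses to one cluster or splits every observation into its own cluster.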