A fast algorithm for computing distance correlation

Computational Statistics & Data Analysis (2018)

Abstract
Classical dependence measures such as Pearson correlation, Spearman's $\rho$, and Kendall's $\tau$ can detect only monotonic or linear dependence. To overcome these limitations, Székely et al. (2007) proposed distance covariance as a weighted $L_2$ distance between the joint characteristic function and the product of the marginal characteristic functions. The distance covariance is $0$ if and only if two random vectors ${X}$ and ${Y}$ are independent. This measure has the power to detect the presence of a dependence structure when the sample size is large enough. They further showed that the sample distance covariance can be calculated simply from modified Euclidean distances, which typically requires $\mathcal{O}(n^2)$ cost. The quadratic computing time greatly limits the application of distance covariance to large data. In this paper, we present a simple exact $\mathcal{O}(n\log(n))$ algorithm to calculate the sample distance covariance between two univariate random variables. The proposed method essentially consists of two sorting steps, so it is easy to implement. Empirical results show that the proposed algorithm is significantly faster than state-of-the-art methods. The algorithm's speed will enable researchers to explore complicated dependence structures in large datasets.
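For orientation, the $\mathcal{O}(n^2)$ baseline that the paper improves on can be sketched directly from the definition in Székely et al. (2007): form the pairwise distance matrices, double-center them, and average the entrywise products. The sketch below is a minimal illustration of that naive computation, not the paper's $\mathcal{O}(n\log(n))$ algorithm; the function name and the NumPy formulation are our own.

```python
import numpy as np

def dcov_sq_naive(x, y):
    """Naive O(n^2) squared sample distance covariance for univariate
    samples x and y, following the double-centering definition of
    Szekely et al. (2007). Illustrative helper, not the fast algorithm."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # Pairwise Euclidean distances (absolute differences in one dimension).
    a = np.abs(x[:, None] - x[None, :])
    b = np.abs(y[:, None] - y[None, :])
    # Double-center: subtract row and column means, add back the grand mean.
    A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()
    B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()
    # Squared sample distance covariance is the mean entrywise product.
    return (A * B).mean()
```

Both distance matrices cost $\mathcal{O}(n^2)$ time and memory, which is exactly the bottleneck the paper's two-sorting-step method removes for univariate data.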
Keywords
Distance correlation, Dependency measure, Fast algorithm, Merge sort