Towards a Learning Theory of Cause-Effect Inference

International Conference on Machine Learning (2015)

Abstract
We pose causal inference as the problem of learning to classify probability distributions. In particular, we assume access to a collection {(S_i, l_i)}_{i=1}^n, where each S_i is a sample drawn from the probability distribution of X_i × Y_i, and l_i is a binary label indicating whether "X_i → Y_i" or "X_i ← Y_i". Given these data, we build a causal inference rule in two steps. First, we featurize each S_i using the kernel mean embedding associated with some characteristic kernel. Second, we train a binary classifier on such embeddings to distinguish between causal directions. We present generalization bounds showing the statistical consistency and learning rates of the proposed approach, and provide a simple implementation that achieves state-of-the-art cause-effect inference. Furthermore, we extend our ideas to infer causal relationships between more than two variables.
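To make the two-step recipe concrete, below is a minimal sketch in Python. It approximates the kernel mean embedding of each sample with random Fourier features (the standard approximation of an RBF kernel) and trains a scikit-learn logistic-regression classifier on the resulting embeddings. The toy additive-noise data generator, the kernel bandwidth, and the choice of classifier are illustrative assumptions, not the paper's exact benchmark setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def random_fourier_features(z, W, b):
    """Approximate RBF kernel feature map: phi(z) = sqrt(2/D) * cos(z W^T + b)."""
    D = W.shape[0]
    return np.sqrt(2.0 / D) * np.cos(z @ W.T + b)

def embed_sample(S, W, b):
    """Empirical kernel mean embedding of a sample S = {(x_j, y_j)}:
    average the random Fourier features over all points in the sample."""
    return random_fourier_features(S, W, b).mean(axis=0)

def make_pair(n=500, causal=True, rng=None):
    """Toy additive-noise cause-effect pair (illustrative data only)."""
    if rng is None:
        rng = np.random.default_rng()
    x = rng.normal(size=n)
    y = np.tanh(x) + 0.3 * rng.normal(size=n)
    S = np.column_stack([x, y]) if causal else np.column_stack([y, x])
    return S, int(causal)

rng = np.random.default_rng(0)
D = 200                                  # number of random features
W = rng.normal(scale=1.0, size=(D, 2))   # random projections for 2-D input (x, y); scale ~ bandwidth
b = rng.uniform(0, 2 * np.pi, size=D)    # random phases

# Step 1: featurize each sample S_i with the (approximate) kernel mean embedding.
pairs = [make_pair(causal=bool(i % 2), rng=rng) for i in range(200)]
X = np.array([embed_sample(S, W, b) for S, _ in pairs])
y = np.array([l for _, l in pairs])

# Step 2: train a binary classifier on the embeddings to predict causal direction.
clf = LogisticRegression(max_iter=1000).fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```

Averaging the random features over a sample yields a fixed-length vector regardless of the sample's size, which is what allows an ordinary binary classifier to operate on whole distributions rather than individual points.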