Embracing the black box: Heading towards foundation models for causal discovery from time series data
CoRR (2024)
Abstract
Causal discovery from time series data encompasses many existing solutions,
including those based on deep learning techniques. However, these methods
typically do not adopt one of the most prevalent paradigms in deep learning:
end-to-end learning. To address this gap, we explore what we call Causal
Pretraining: a methodology that aims to learn a direct mapping from
multivariate time series to the underlying causal graphs in a supervised
manner. Our empirical findings suggest that causal discovery in a supervised
manner is possible, assuming that the training and test time series samples
share most of their dynamics. More importantly, we found evidence that the
performance of Causal Pretraining can increase with data and model size, even
if the additional data do not share the same dynamics. Further, we provide
examples showing that causal discovery on real-world data with causally
pretrained neural networks is possible, within limits. We argue that this
hints at the possibility of a foundation model for causal discovery.
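The core idea of learning a supervised map from time series to causal graphs can be illustrated with a deliberately minimal sketch. The code below is not the paper's architecture: it simulates toy VAR(1) systems with known sparse causal graphs, uses lag-1 cross-correlations as edge features, and fits a logistic regression to classify edges, then evaluates on held-out systems drawn from the same dynamics family. All names, the data generator, and the feature choice are illustrative assumptions.

```python
# Toy sketch of supervised causal discovery ("Causal Pretraining" in spirit):
# train on (time series, ground-truth graph) pairs, test on unseen systems.
# Everything here is an illustrative assumption, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

def simulate_var1(n_vars=4, T=300):
    """Sample a sparse causal graph A and a stable VAR(1) series driven by it."""
    A = (rng.random((n_vars, n_vars)) < 0.3).astype(float)
    np.fill_diagonal(A, 0.0)                      # no self-edges in the label
    W = A * rng.uniform(0.3, 0.6, size=A.shape)   # W[j, i] = influence of i on j
    sr = max(abs(np.linalg.eigvals(W)))
    if sr > 0.9:                                  # rescale for stationarity
        W *= 0.9 / sr
    x = np.zeros((T, n_vars))
    for t in range(1, T):
        x[t] = x[t - 1] @ W.T + 0.1 * rng.standard_normal(n_vars)
    return x, A

def edge_features(x):
    """Lag-1 cross-correlations; entry [j, i] scores candidate edge i -> j."""
    x = (x - x.mean(0)) / (x.std(0) + 1e-8)
    c = x[:-1].T @ x[1:] / (len(x) - 1)           # c[i, j] ~ corr(x_i(t-1), x_j(t))
    return c.T                                    # transposed to match A's layout

def dataset(n_systems, n_vars=4):
    """Stack per-edge features and binary labels over many simulated systems."""
    mask = ~np.eye(n_vars, dtype=bool)            # ignore the diagonal
    X, y = [], []
    for _ in range(n_systems):
        x, A = simulate_var1(n_vars)
        X.append(edge_features(x)[mask])
        y.append(A[mask])
    return np.concatenate(X), np.concatenate(y)

# "Pretraining": logistic regression from edge features to edge labels.
Xtr, ytr = dataset(200)
w, b = 0.0, 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(Xtr * w + b)))
    g = p - ytr
    w -= 0.5 * (g * Xtr).mean()
    b -= 0.5 * g.mean()

# Evaluate on held-out systems that share the same dynamics family.
Xte, yte = dataset(50)
pred = (1 / (1 + np.exp(-(Xte * w + b))) > 0.5).astype(float)
accuracy = (pred == yte).mean()
print(f"held-out edge accuracy: {accuracy:.2f}")
```

The key property this toy mirrors is the abstract's assumption that training and test samples share most of their dynamics: the classifier transfers to new systems only because they come from the same VAR family it was trained on.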