Fast optimize-and-sample method for differentiable Galerkin approximations of multi-layered Gaussian process priors

2022 25th International Conference on Information Fusion (FUSION)

Abstract
Multi-layered Gaussian process (field) priors are non-Gaussian priors that can handle Bayesian inference on both smooth and discontinuous functions. Previously, performing Bayesian inference with these priors required constructing a Markov chain Monte Carlo sampler, which converges slowly to the stationary distribution and is computationally inefficient; the utility of the approach has therefore only been demonstrated on small canonical test problems. Furthermore, in many Bayesian inference applications, such as Bayesian inverse problems, uncertainty quantification of the hyper-prior layers is of less interest, since the main concern is quantifying the randomness of the process/field of interest. In this article, we propose an alternative approach in which we optimize the hyper-prior layers while performing inference only for the lowest layer. Specifically, we use a Galerkin approximation with automatic differentiation to accelerate the optimization. We validate the proposed approach against several existing non-stationary Gaussian process methods and demonstrate that it can significantly decrease the execution time while maintaining comparable accuracy. We also apply the method to an X-ray tomography inverse problem. Due to its improved performance and robustness, the new approach opens up the possibility of applying multi-layered Gaussian field priors to more complex problems.
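The core recipe the abstract describes, optimizing the hyper-prior layer by automatic differentiation and then performing closed-form Gaussian inference for the lowest layer only, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: it uses a 1-D grid with a Paciorek-Schervish-style non-stationary kernel standing in for the paper's Galerkin (finite-element) discretization, a crude smoothness penalty as the hyper-prior, and plain gradient descent; all names, weights, and parameters below are hypothetical.

```python
# Hedged sketch of the optimize-and-sample idea: fit the hyper-prior layer
# (a log length-scale field) by autodiff-based MAP optimization, then do
# closed-form Gaussian conditioning for the lowest GP layer only (no MCMC).
import jax
import jax.numpy as jnp

x = jnp.linspace(0.0, 1.0, 100)          # evaluation grid
idx_obs = jnp.arange(0, 100, 5)          # indices of noisy observations
key = jax.random.PRNGKey(0)
# Synthetic data with a jump at x = 0.5 (the kind of discontinuity
# multi-layered GP priors are designed to capture).
y_obs = jnp.where(x[idx_obs] < 0.5, 0.0, 1.0) + \
        0.05 * jax.random.normal(key, (idx_obs.size,))
noise_var = 0.05 ** 2

def nonstationary_K(log_ell):
    """Paciorek-Schervish non-stationary squared-exponential kernel."""
    ell = jnp.exp(log_ell)               # positive length-scale field
    li, lj = ell[:, None], ell[None, :]
    avg = 0.5 * (li**2 + lj**2)
    d2 = (x[:, None] - x[None, :]) ** 2
    pref = jnp.sqrt(li * lj / avg)
    return pref * jnp.exp(-d2 / (2.0 * avg))

def neg_log_posterior(log_ell):
    """Marginal likelihood of the lowest layer plus a smoothness
    hyper-prior on the log length-scale field."""
    K = nonstationary_K(log_ell)[jnp.ix_(idx_obs, idx_obs)]
    K = K + noise_var * jnp.eye(idx_obs.size)
    L = jnp.linalg.cholesky(K)
    alpha = jax.scipy.linalg.cho_solve((L, True), y_obs)
    nll = 0.5 * y_obs @ alpha + jnp.sum(jnp.log(jnp.diag(L)))
    smooth = jnp.sum(jnp.diff(log_ell) ** 2)   # crude hyper-prior term
    return nll + 50.0 * smooth                 # weight is an assumption

# Optimize the hyper-prior layer with automatic differentiation.
grad_fn = jax.jit(jax.grad(neg_log_posterior))
log_ell = jnp.full(x.shape, jnp.log(0.2))
for _ in range(500):
    log_ell = log_ell - 1e-2 * grad_fn(log_ell)

# Closed-form Gaussian conditioning of the lowest layer at the MAP hypers.
K_full = nonstationary_K(log_ell)
K_oo = K_full[jnp.ix_(idx_obs, idx_obs)] + noise_var * jnp.eye(idx_obs.size)
K_fo = K_full[:, idx_obs]
mean_post = K_fo @ jnp.linalg.solve(K_oo, y_obs)
```

After optimization, `mean_post` is the posterior mean of the lowest layer conditioned on the optimized length-scale field; posterior samples would be drawn from the corresponding Gaussian conditional, which is the "sample" half of optimize-and-sample. The paper's Galerkin/FEM formulation would replace the dense kernel above with sparse, differentiable precision matrices, which is where the reported speed-ups come from.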
Keywords
Bayesian learning, Gaussian processes, Markov chain Monte Carlo, inverse problems, Galerkin approximations