Improved Adaptive Importance Sampling Based on Variational Inference.

European Signal Processing Conference (2018)

Abstract
In Monte Carlo-based Bayesian inference, it is important to generate samples from a target distribution, which are then used, e.g., to compute expectations with respect to that distribution. Quite often, the target distribution is the posterior of parameters of interest, and drawing samples from it can be exceedingly difficult. Monte Carlo-based methods, like adaptive importance sampling (AIS), are built on the importance sampling principle to approximate a target distribution using a set of samples and their corresponding weights. Variational inference (VI) attempts to approximate the posterior by minimizing the Kullback-Leibler divergence (KLD) between the posterior and a set of simpler parametric distributions. While AIS often performs well, it struggles to approximate multimodal distributions and suffers when applied to high-dimensional problems. By contrast, VI is fast and scales well with the dimension, but typically underestimates the variance of the target distribution. In this paper, we combine both methods to overcome their individual drawbacks and create a novel, efficient, and robust technique for drawing better samples from a target distribution. Our contribution is two-fold. First, we show how to do a smart initialization of AIS using VI. Second, we propose a method for adapting the parameters of the proposal distributions of the AIS, where the adaptation depends on the performance of the VI step. Computer simulations reveal that the new method improves the performance of the individual methods and shows promise for application to challenging scenarios.
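The abstract's two ideas can be illustrated with a minimal sketch, not the paper's actual algorithm: a crude moment-matched Gaussian fit stands in for the VI initialization, and the AIS loop then re-weights samples and adapts the Gaussian proposal's mean and standard deviation from the weighted samples. The bimodal target, the single-Gaussian proposal family, and all parameter choices below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative unnormalized log-target: equal mixture of N(-2, 1) and N(2, 1).
# Its true mean is 0 and its true standard deviation is sqrt(5) ~ 2.24.
def log_target(x):
    return np.logaddexp(-0.5 * (x + 2.0) ** 2, -0.5 * (x - 2.0) ** 2)

# Stand-in for the VI step: moment-match one Gaussian to the target via a
# single importance-weighted pass from a wide N(0, 3) proposal. (The paper's
# VI step minimizes a KLD over a parametric family; this is only a sketch.)
x0 = rng.normal(0.0, 3.0, size=5000)
log_q0 = -0.5 * (x0 / 3.0) ** 2 - np.log(3.0)
w0 = np.exp(log_target(x0) - log_q0)
w0 /= w0.sum()
mu = np.sum(w0 * x0)
sigma = np.sqrt(np.sum(w0 * (x0 - mu) ** 2))

# AIS loop: draw from the current proposal, compute self-normalized
# importance weights, and adapt the proposal moments from weighted samples.
for _ in range(20):
    x = rng.normal(mu, sigma, size=2000)
    log_q = -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma)
    w = np.exp(log_target(x) - log_q)
    w /= w.sum()                                  # normalizing constants cancel
    mu = np.sum(w * x)                            # weighted mean estimate
    sigma = np.sqrt(np.sum(w * (x - mu) ** 2))    # weighted std estimate
```

After adaptation, `mu` and `sigma` should sit near the target's moments (0 and ~2.24), showing how a VI-style initialization can place the proposal where the AIS weighting then refines it; a single Gaussian still cannot capture the two modes themselves, which is the kind of limitation the paper's combined scheme targets.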
Keywords
Adaptive importance sampling, Markov chain Monte Carlo, variational inference, Bayesian inference