Deep FPF: Gain function approximation in high-dimensional setting

CDC (2020)

Abstract
In this paper, we present a novel approach to approximating the gain function of the feedback particle filter (FPF). The exact gain function is the solution of a Poisson equation involving a probability-weighted Laplacian. The numerical problem is to approximate the exact gain function using only finitely many particles sampled from the probability distribution. Inspired by the recent success of deep learning methods, we represent the gain function as the gradient of the output of a neural network. Based on a variational formulation of the Poisson equation, an optimization problem is then posed for learning the weights of the neural network, and a stochastic gradient algorithm is described for this purpose. The proposed approach has two significant advantages: (i) the stochastic optimization algorithm processes, in parallel, only a batch of samples (particles), ensuring good scaling properties with the number of particles; (ii) the remarkable representation power of neural networks makes the algorithm potentially applicable to high-dimensional problems. We numerically establish both properties and provide an extensive comparison to existing approaches.
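The abstract's recipe (represent the gain as a gradient of a parametrized potential, minimize a variational form of the Poisson equation by mini-batch SGD over particles) can be sketched in a few lines. The sketch below is an assumption-laden toy, not the paper's implementation: it replaces the neural network with a simple polynomial potential φ(x) = Σⱼ θⱼ xʲ so that gradients are analytic, and it uses the standard weak form J(θ) = E[½|∇φ|² − (h − ĥ)φ], whose minimizer satisfies the weighted Poisson equation. The test distribution (standard Gaussian, h(x) = x, exact gain K ≡ 1) is chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Particles sampled from the (unknown) distribution rho.
# Assumption for this sketch: rho = N(0, 1) and observation h(x) = x,
# for which the exact FPF gain is the constant K(x) = Var(X) = 1.
N = 2000
X = rng.standard_normal(N)
h = X

# Surrogate for the paper's neural network: phi(x) = sum_j theta_j x^j.
# The gain is K(x) = phi'(x); the basis derivatives are analytic.
def features(x):
    return np.stack([x, x**2, x**3], axis=-1)                  # psi_j(x)

def dfeatures(x):
    return np.stack([np.ones_like(x), 2 * x, 3 * x**2], axis=-1)  # psi_j'(x)

def loss(theta, x, hx):
    """Empirical variational objective J(theta) = E[0.5*|grad phi|^2 - (h - h_hat)*phi]."""
    K = dfeatures(x) @ theta
    phi = features(x) @ theta
    return np.mean(0.5 * K**2 - (hx - hx.mean()) * phi)

theta = np.zeros(3)
lr, batch, epochs = 0.02, 100, 200
h_hat = h.mean()                       # h_hat estimated once from all particles

# Mini-batch SGD: each step touches only a batch of particles,
# which is the scaling property (i) claimed in the abstract.
for _ in range(epochs):
    perm = rng.permutation(N)
    for i in range(0, N, batch):
        x, hx = X[perm[i:i + batch]], h[perm[i:i + batch]]
        K = dfeatures(x) @ theta
        # Gradient of J(theta) w.r.t. theta on this batch.
        g = (dfeatures(x).T @ K - features(x).T @ (hx - h_hat)) / len(x)
        theta -= lr * g

print(theta)               # leading coefficient should be near 1 (exact gain)
print(loss(theta, X, h))   # should be well below the initial value of 0
```

In the paper, φ is a neural network and ∇φ is computed by automatic differentiation; the quadratic basis here only keeps the sketch dependency-free while exercising the same objective and the same batched update.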
Keywords
stochastic gradient algorithm, stochastic optimization algorithm, neural network, high-dimensional problems, deep FPF, gain function approximation, high-dimensional setting, feedback particle filter, exact gain function, Poisson equation, probability-weighted Laplacian, numerical problem, probability distribution, deep learning methods, optimization problem, variational formulation