Variational posterior approximation using stochastic gradient ascent with adaptive stepsize

Pattern Recognition (2021)

Abstract
•Variational inference, most notably stochastic variational inference, relies on closed-form coordinate ascent updates, and scaling up the learning is a challenge. We propose stochastic gradient ascent (SGA) as a scalable learner for variational inference of the variational Bayes Dirichlet process mixture (DPM).
•To gain speed while maintaining performance when fitting the DPM with SGA, we adopt and compare two stochastic optimization techniques: natural gradient ascent and the momentum method (a minimal sketch of both update rules follows the abstract).
•We show that our stochastic gradient ascent approach to variational inference is compatible with deep ConvNet features when applied to large-scale datasets such as Caltech256 and SUN397. Lastly, we substantiate our speed and performance claims against closed-form coordinate ascent learning on these datasets.
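
As a rough illustration of the two update rules named above, the sketch below applies momentum-accelerated stochastic gradient ascent with Fisher-information preconditioning (the natural gradient) to a toy one-parameter Gaussian objective. The toy model, the step-size schedule, and the choice to combine both techniques in one loop are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the authors' code): stochastic gradient ascent on a
# mini-batch objective, with momentum and natural-gradient preconditioning.
import numpy as np

rng = np.random.default_rng(0)
SIGMA = 2.0                                    # known scale of the toy model
data = rng.normal(loc=3.0, scale=SIGMA, size=10_000)

def grad_mu(mu, batch):
    # Unbiased mini-batch estimate of the per-datum gradient of the
    # average log-likelihood of N(x | mu, SIGMA^2) w.r.t. mu.
    return np.mean(batch - mu) / SIGMA**2

mu, velocity = 0.0, 0.0
gamma, batch_size = 0.9, 100                   # momentum coefficient (assumed)

for t in range(1, 501):
    rho = (t + 10.0) ** -0.6                   # Robbins-Monro step-size schedule
    batch = rng.choice(data, size=batch_size, replace=False)
    g = grad_mu(mu, batch)
    # Natural gradient: precondition by the inverse Fisher information.
    # For the mean of a Gaussian with known scale, I(mu) = 1 / SIGMA**2.
    nat_g = SIGMA**2 * g
    velocity = gamma * velocity + rho * nat_g  # momentum accumulation
    mu += velocity                             # ascent step

print(f"estimated mean: {mu:.3f}")             # should approach the true mean 3.0
```

In the paper's actual setting the parameter is the full set of DPM variational parameters rather than a single Gaussian mean, but the update structure, a noisy mini-batch gradient scaled by a decaying step size, optionally preconditioned by the inverse Fisher information or smoothed by a velocity term, is the same.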
Keywords
Dirichlet process mixture, Stochastic gradient ascent, Fisher information, Scalable algorithm