Metabayes: Bayesian Meta-Interpretative Learning Using Higher-Order Stochastic Refinement

Inductive Logic Programming: 23rd International Conference (2014)

Abstract
Recent papers have demonstrated that both predicate invention and the learning of recursion can be efficiently implemented by way of abduction with respect to a meta-interpreter. This paper shows how Meta-Interpretive Learning (MIL) can be extended to implement a Bayesian posterior distribution over the hypothesis space by treating the meta-interpreter as a Stochastic Logic Program. The resulting MetaBayes system uses stochastic refinement to randomly sample consistent hypotheses, which are used to approximate Bayesian prediction. Most approaches to Statistical Relational Learning involve separate phases of model estimation and parameter estimation. We show how a variant of the MetaBayes approach can be used to carry out simultaneous model and parameter estimation for a new representation we refer to as Super-imposed Logic Programs (SiLPs). The implementation of this approach is referred to as MetaBayes(SiLP). SiLPs are a particular form of ProbLog program, so their parameters can also be estimated using the more traditional EM approach employed by ProbLog. This second approach is implemented in a new system called MilProbLog. Experiments are conducted on learning grammars, family relations and a natural language domain. These demonstrate that MetaBayes outperforms MetaBayes(MAP) in terms of predictive accuracy and also outperforms both MilProbLog and MetaBayes(SiLP) on log-likelihood measures. However, MetaBayes incurs substantially higher running times than MetaBayes(MAP). On the other hand, MetaBayes and MetaBayes(SiLP) have similar running times, while both have much shorter running times than MilProbLog.
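As a rough illustration of the distinction the abstract draws between MetaBayes and MetaBayes(MAP) (not the paper's actual implementation): MAP prediction commits to the single highest-posterior hypothesis, whereas sampling-based Bayesian prediction draws consistent hypotheses in proportion to their posterior weight and averages their predictions. The toy hypothesis space and weights below are invented for illustration only.

```python
import random

# Toy hypothesis space: each hypothesis is a predicate on integers,
# paired with an illustrative posterior weight (all values invented).
hypotheses = [
    (0.5, lambda x: x % 2 == 0),      # "even numbers"
    (0.3, lambda x: x % 4 == 0),      # "multiples of 4"
    (0.2, lambda x: x in {0, 2, 8}),  # a more specific hypothesis
]

def map_predict(x):
    # MAP-style prediction: use only the highest-weight hypothesis.
    _, best = max(hypotheses, key=lambda wh: wh[0])
    return 1.0 if best(x) else 0.0

def bayes_predict(x, n_samples=10_000, seed=0):
    # Approximate Bayesian prediction: sample hypotheses in proportion
    # to their weight and average their 0/1 predictions.
    rng = random.Random(seed)
    weights = [w for w, _ in hypotheses]
    total = 0
    for _ in range(n_samples):
        _, h = rng.choices(hypotheses, weights=weights, k=1)[0]
        total += h(x)
    return total / n_samples

# MAP predicts 1.0 for x=2; the Bayesian average lands near
# 0.5*1 + 0.3*0 + 0.2*1 = 0.7, reflecting hypothesis uncertainty.
print(map_predict(2), bayes_predict(2))
```

In MetaBayes itself the sampling is over derivations of a stochastic meta-interpreter (stochastic refinement) rather than an enumerated hypothesis list, but the averaging-versus-argmax contrast is the same.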
Keywords
Stochastic Refinement, Meta-interpretive Learning (MIL), Stochastic Logic Programs (SLP), ProbLog Program, Hypothesis Space