Structure learning in Markov logic networks (2010)


Cited by 25 | Views 28
Abstract
Markov logic networks (MLNs) [86, 24] are a powerful representation combining first-order logic and probability. An MLN attaches weights to first-order formulas and views them as templates for features of Markov networks. Learning MLN structure consists of learning both formulas and their weights. This is challenging because the search space of formulas is super-exponential, and evaluating candidate formulas requires repeatedly relearning their weights, a process that involves computationally expensive statistical inference. This thesis presents a series of algorithms that learn MLN structure efficiently and accurately.

We begin with the MSL system, which combines ideas from inductive logic programming (ILP) and feature induction in Markov networks. Previous approaches learn MLN structure in a disjoint manner, first learning formulas with off-the-shelf ILP systems and then learning formula weights that optimize some measure of the data's likelihood. MSL instead learns formulas and weights in an integrated fashion to jointly optimize likelihood.

Next we present the MRC system, which learns latent MLN structure by discovering unary predicates in the form of clusters. MRC forms multiple clusterings of constants and relations, with each cluster corresponding to an invented predicate. We empirically show that, by creating multiple clusterings, MRC outperforms previous systems.

We then apply a variant of MRC to the long-standing AI problem of extracting knowledge from text. Our system extracts simple semantic networks from Web text in an unsupervised, domain-independent manner, and introduces several techniques to scale up to the Web.

After that, the LHL system incorporates the discovery of latent unary predicates into the learning of MLN clauses. LHL first compresses the data into a compact form by clustering constants into high-level concepts, and then searches for clauses in the compact representation. We empirically show that LHL is more efficient and finds better formulas than previous systems.

Finally, we present the LSM system, which uses random walks to find repeated patterns in the data. By restricting its search to within such patterns, LSM finds good formulas accurately and efficiently, improving efficiency by 2-5 orders of magnitude over previous systems.
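For readers unfamiliar with the formalism, the following is a brief sketch of the standard MLN semantics from the literature the abstract builds on, which makes precise why formula evaluation is expensive. An MLN defines a joint distribution over possible worlds $x$:

$$P(X = x) = \frac{1}{Z} \exp\left( \sum_i w_i \, n_i(x) \right)$$

where $n_i(x)$ is the number of true groundings of formula $i$ in world $x$, $w_i$ is the formula's weight, and $Z$ is the partition function summing over all worlds. The log-likelihood gradient with respect to $w_i$ is $n_i(x) - \mathbb{E}_w[n_i]$, and computing the expected count $\mathbb{E}_w[n_i]$ requires inference over the ground Markov network; this is the "computationally expensive statistical inference" that structure learners must invoke each time they score a candidate formula.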
Keywords
MRC system, MLN clause, Markov logic network, latent MLN structure, LSM system, multiple clusterings, LHL system, MLN structure, learning MLN structure, Markov network, previous system