Efficient learning and inference in rich statistical representations (2010)
Abstract
Rich statistical representations such as Markov logic networks are essential for solving hard problems in artificial intelligence and machine learning. However, the increased complexity of learning and inference often limits their effectiveness in practice. In this dissertation, we make several contributions towards richer representations and the algorithms to support them. We introduce recursive Markov logic, a "deep" generalization of Markov logic that introduces uncertainty into every level of a first-order knowledge base. We also develop improved weight learning algorithms for Markov logic, leading to more accurate models in less time. Finally, we use arithmetic circuits to address the problem of inference in graphical models in two ways. First, we present an algorithm that learns Bayesian networks with fast inference by using inference complexity as a learning bias. Second, we show how to use arithmetic circuits in an extremely flexible form of variational inference.
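To make the arithmetic-circuit idea concrete, here is a minimal sketch (not taken from the dissertation) of evaluating the network polynomial of a two-node Bayesian network A → B. In this standard formulation, inference reduces to evaluating a sum-of-products of indicator and parameter variables; an arithmetic circuit is a compact, reusable form of this polynomial. The parameter values below are made up for illustration.

```python
# Hypothetical illustration: the network polynomial of a tiny
# Bayesian network A -> B, evaluated directly. An arithmetic circuit
# would represent this same polynomial as a DAG of + and * nodes.

# CPTs (made-up numbers): P(A) and P(B | A)
theta_a = {0: 0.6, 1: 0.4}
theta_b_given_a = {(0, 0): 0.9, (1, 0): 0.1,   # (b, a) -> P(B=b | A=a)
                   (0, 1): 0.3, (1, 1): 0.7}

def network_poly(lam_a, lam_b):
    """Sum over all joint states of lambda_a * lambda_b * theta_a * theta_{b|a}."""
    return sum(lam_a[a] * lam_b[b] * theta_a[a] * theta_b_given_a[(b, a)]
               for a in (0, 1) for b in (0, 1))

# Evidence is encoded by clamping indicators: here, P(B = 1).
p_b1 = network_poly(lam_a={0: 1, 1: 1}, lam_b={0: 0, 1: 1})
print(p_b1)  # 0.6*0.1 + 0.4*0.7 = 0.34
```

With all indicators set to 1 the polynomial sums to 1, so arbitrary marginals fall out of the same circuit by re-setting indicators, which is what makes compiled circuits attractive for repeated queries.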
Keywords
arithmetic circuit, variational inference, Bayesian network, recursive Markov logic, rich statistical representation, machine learning, inference complexity, Markov logic network, fast inference, Markov logic, efficient learning, increased complexity