Artificial Neural Networks that Learn to Satisfy Logic Constraints

CoRR (2017)

Abstract
Logic-based problems such as planning, theorem proving, or puzzles typically involve combinatorial search and structured knowledge representation. Artificial neural networks are very successful statistical learners; however, for many years they have been criticized for their weaknesses in representing and processing complex structured knowledge, which is crucial for combinatorial search and symbol manipulation. Two neural architectures are presented that can encode structured relational knowledge in neural activation and store bounded First Order Logic constraints in connection weights. Both architectures learn to search for a solution that satisfies the constraints. Learning is done by unsupervised practicing on problem instances from the same domain, in a way that improves the network's solving speed. No teacher exists to provide answers for the problem instances of the training and test sets; however, the domain constraints are provided as prior knowledge to a loss function that measures the degree of constraint violation. Iterations of activation calculation and learning are executed until a solution that maximally satisfies the constraints emerges on the output units. As a test case, blocks-world planning problems are used to train networks that learn to plan in that domain, but the proposed techniques could be used more generally, for example in integrating prior symbolic knowledge with statistical learning.
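The core mechanism described in the abstract can be illustrated with a minimal sketch: relax boolean variables to [0, 1], express each constraint's degree of violation as a differentiable penalty, and descend that loss with no labeled answers until a satisfying assignment emerges on the outputs. The sketch below is a toy under stated assumptions, not the paper's architectures: the three-clause formula, the fuzzy product relaxation, and the direct optimization of output activations (rather than connection weights storing First Order Logic constraints) are all simplifications, written here in PyTorch.

```python
import torch

# Hypothetical toy constraint set (not from the paper):
#   (a OR b), (NOT a OR c), (NOT b OR NOT c)
# Each boolean variable is relaxed to [0, 1]; a clause's violation is
# the product of its literals' "falseness", so the total violation is
# zero exactly when every clause is satisfied.
def violation(x):
    a, b, c = x
    return ((1 - a) * (1 - b)   # a OR b
            + a * (1 - c)       # NOT a OR c
            + b * c)            # NOT b OR NOT c

# No teacher provides answers: start from random activations and
# minimize the constraint-violation loss directly.
logits = torch.randn(3, requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.1)

for _ in range(300):
    opt.zero_grad()
    loss = violation(torch.sigmoid(logits))
    loss.backward()
    opt.step()

# Threshold the converged activations to read off a discrete assignment.
assignment = (torch.sigmoid(logits) > 0.5).float()
print("assignment:", assignment.tolist(),
      "violation:", violation(assignment).item())
```

Because the loss is zero precisely when all clauses hold, thresholding the converged activations yields a discrete solution; this mirrors the abstract's loop of activation calculation and learning driven only by constraint-violation feedback rather than supervised targets.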