Assessing SATNet's Ability to Solve the Symbol Grounding Problem
NeurIPS 2020
Abstract
SATNet is an award-winning MAXSAT solver that can be used to infer logical
rules and can be integrated as a differentiable layer in a deep neural
network. It has been shown to solve Sudoku puzzles visually from examples of
puzzle digit images, and was heralded as an impressive achievement towards the
longstanding AI goal of combining pattern recognition with logical reasoning.
In this paper,
we clarify SATNet's capabilities by showing that in the absence of intermediate
labels that identify individual Sudoku digit images with their logical
representations, SATNet completely fails at visual Sudoku (0% test accuracy).
More generally, the failure can be pinpointed to its inability to learn to
assign symbols to perceptual phenomena, also known as the symbol grounding
problem, which has long been thought to be a prerequisite for intelligent
agents to perform real-world logical reasoning. We propose an MNIST-based test
as an easy instance of the symbol grounding problem that can serve as a sanity
check for differentiable symbolic solvers in general. Naive applications of
SATNet on this test lead to performance worse than that of models without
logical reasoning capabilities. We report on the causes of SATNet's failure and
how to prevent them.
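As a rough illustration of what such a sanity check involves (the task setup, names, and data below are our own illustrative sketch, not the paper's actual benchmark): supervision is given only on a logical function of the hidden digit identities, so a model must learn to ground images to symbols without ever seeing per-image labels.

```python
# Hypothetical sketch of a symbol-grounding sanity check in the spirit
# of an MNIST-based test (details are illustrative, not from the paper).
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "digit images": one noisy 8x8 template per class, in place of
# real MNIST digits, so the sketch is self-contained.
NUM_CLASSES = 10
templates = rng.normal(size=(NUM_CLASSES, 8, 8))

def make_dataset(n):
    """Each sample: two digit images. The ONLY label is the parity of the
    sum of the two underlying digits; the per-image digit identities (the
    'intermediate labels') are deliberately withheld from the learner."""
    digits = rng.integers(0, NUM_CLASSES, size=(n, 2))
    images = templates[digits] + 0.1 * rng.normal(size=(n, 2, 8, 8))
    parity = (digits.sum(axis=1) % 2).astype(np.int64)
    return images, parity  # note: `digits` is never exposed

images, parity = make_dataset(100)
```

A differentiable symbolic layer trained end-to-end on `(images, parity)` must implicitly assign a symbol to each image to get the logic right; a solver that only succeeds when the withheld digit labels are added back has not solved the grounding problem.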