A provably stable neural network Turing Machine with finite precision and time

INFORMATION SCIENCES (2024)

Abstract
We introduce a neural stack architecture with a differentiable parameterized stack operator that approximates discrete push and pop operations. We prove that this stack architecture is stable over arbitrarily many stack operations: the state of the neural stack continues to closely resemble the state of a discrete stack. Combining the neural stack with a recurrent neural network, we devise a neural network Pushdown Automaton (nnPDA). A new theoretical bound shows that an nnPDA can simulate any PDA in finite time using only finite precision state neurons. By using two neural stacks to construct a neural tape, together with a recurrent neural network, we define a neural network Turing Machine (nnTM). Just like the neural stack, we show these architectures are stable. Furthermore, we show the nnTM is Turing complete: it requires only finite precision state neurons, with an arbitrary number of stack neurons, to simulate any TM in finite time, thus providing a new and much stronger computational upper bound for neural networks that are Turing complete.
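A minimal sketch of the idea the abstract describes, assuming a fixed-depth shift representation in PyTorch: the stack update is a convex combination of the discrete push, pop, and no-op outcomes, and two such stacks coupled at the head form a tape. The function names (push, pop, soft_step, tape_move) and the fixed-depth encoding are assumptions made for illustration, not the paper's actual parameterized operator.

```python
import torch

def push(stack: torch.Tensor, value: torch.Tensor) -> torch.Tensor:
    """Write `value` into the top cell (row 0), shifting all cells one slot deeper."""
    return torch.cat([value.unsqueeze(0), stack[:-1]], dim=0)

def pop(stack: torch.Tensor) -> torch.Tensor:
    """Discard the top cell, shifting cells up; the bottom refills with zeros."""
    return torch.cat([stack[1:], torch.zeros(1, stack.size(1))], dim=0)

def soft_step(stack, value, p_push, p_pop, p_noop):
    """Differentiable stack update: blend the three discrete outcomes by action
    weights (e.g. a softmax over controller logits), so gradients flow back
    into the controller that chose the action."""
    return p_push * push(stack, value) + p_pop * pop(stack) + p_noop * stack

def tape_move(left, right, p_left, p_right, p_stay):
    """Two soft stacks as a tape: the head sits between `left` and `right`.
    Moving left pops the left stack and pushes that cell onto the right stack;
    moving right is symmetric; blending keeps the move differentiable."""
    new_left = p_left * pop(left) + p_right * push(left, right[0]) + p_stay * left
    new_right = p_left * push(right, left[0]) + p_right * pop(right) + p_stay * right
    return new_left, new_right

# Example: a depth-8 stack of 4-dimensional cells, driven by a mostly-"push" action.
stack = torch.zeros(8, 4)
p_push, p_pop, p_noop = torch.softmax(torch.tensor([2.0, -1.0, -1.0]), dim=0)
stack = soft_step(stack, torch.randn(4), p_push, p_pop, p_noop)
top = stack[0]  # soft read of the top cell
```

A single soft step stays close to a discrete step whenever the action weights are near one-hot; the stability result the abstract claims is a bound showing that this closeness persists over arbitrarily many operations, which a naive sketch like the one above does not guarantee by itself.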
Keywords
Turing completeness, Universal Turing Machine, Tensor RNNs, Neural tape, Neural stack, Stability, Finite precision, Formal language theory, Automata, Chomsky hierarchy