Constructive learning of recurrent neural networks: limitations of recurrent cascade correlation and a simple solution.

IEEE Transactions on Neural Networks (1995)

Abstract
It is often difficult to predict the optimal neural network size for a particular application. Constructive or destructive methods that add or subtract neurons, layers, connections, etc. might offer a solution to this problem. We prove that one method, recurrent cascade correlation, has fundamental limitations in representation, and hence in its learning capabilities, due to its topology: with monotone (e.g., sigmoid) and hard-threshold activation functions it cannot represent certain finite state automata. We give a preliminary approach to overcoming these limitations by devising a simple constructive training method that adds neurons during training while still preserving the powerful fully recurrent structure. We illustrate this approach with simulations that learn many examples of regular grammars that the recurrent cascade correlation method is unable to learn.
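As a rough illustration of the constructive idea summarized above (not the authors' implementation), the sketch below grows a fully recurrent hidden layer by one neuron: the new unit both feeds into and receives from every existing hidden unit, in contrast to recurrent cascade correlation, where a newly added unit only receives connections from previously frozen units and its own self-loop. The function and variable names (`grow_recurrent_layer`, `W_in`, `W_rec`) and the NumPy setting are assumptions made for this example.

```python
# Minimal sketch (illustrative, not the paper's code): add one hidden neuron
# to a fully recurrent layer while preserving all-to-all recurrence.
import numpy as np

def grow_recurrent_layer(W_in, W_rec, rng, init_scale=0.1):
    """Enlarge a fully recurrent hidden layer by one unit.

    W_in  : (n_hidden, n_input)  input-to-hidden weights
    W_rec : (n_hidden, n_hidden) hidden-to-hidden weights
    Returns enlarged (W_in, W_rec); the new row and column are small random
    values, so the new unit is connected to and from every hidden unit
    (unlike RCC, which would freeze existing weights and give the new unit
    only incoming connections plus a self-loop).
    """
    n_hidden, n_input = W_in.shape
    W_in = np.vstack([W_in, rng.normal(scale=init_scale, size=(1, n_input))])

    new_row = rng.normal(scale=init_scale, size=(1, n_hidden))
    new_col = rng.normal(scale=init_scale, size=(n_hidden + 1, 1))
    W_rec = np.vstack([W_rec, new_row])  # new unit receives all existing units
    W_rec = np.hstack([W_rec, new_col])  # all units (and itself) receive the new unit
    return W_in, W_rec

rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(2, 3))   # 2 hidden units, 3 input symbols
W_rec = rng.normal(scale=0.1, size=(2, 2))
W_in, W_rec = grow_recurrent_layer(W_in, W_rec, rng)
print(W_in.shape, W_rec.shape)  # (3, 3) (3, 3): still fully recurrent after growth
```

In a constructive training loop, such a growth step would typically be triggered when the training error on the grammar examples stops improving, after which training resumes on the enlarged, still fully recurrent network.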
Keywords
optimisation,sigmoid activation functions,hard-threshold activation functions,learning (artificial intelligence),monotone activation functions,recurrent neural networks,regular grammars,recurrent cascade correlation,destructive methods,recurrent neural nets,optimal neural network size,constructive learning,correlation methods