Green's Function Method for Fast On-Line Learning Algorithm of Recurrent Neural Networks

NIPS (1991)

Abstract
The two well-known learning algorithms for recurrent neural networks are back-propagation (Rumelhart et al., Werbos) and forward propagation (Williams and Zipser). The main drawback of back-propagation is its off-line backward path in time for error accumulation, which violates the on-line requirement of many practical applications. Although the forward propagation algorithm can be used in an on-line manner, its drawback is the heavy computational load required to update the high-dimensional sensitivity matrix (O(N^4) operations per time step). Developing a fast forward algorithm is therefore a challenging task. In this paper we propose a forward learning algorithm that is one order faster (only O(N^3) operations per time step) than the sensitivity-matrix algorithm. The basic idea is that, instead of integrating the high-dimensional sensitivity dynamic equation, we solve forward in time for its Green's function to avoid redundant computations, and then update the weights whenever the error is to be corrected.
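The contrast the abstract draws can be made concrete with a minimal NumPy sketch. This is an illustration under simplifying assumptions (a plain tanh RNN, no inputs, illustrative dimensions), not the authors' exact algorithm: Williams-Zipser forward propagation (RTRL) carries an N x N x N sensitivity tensor whose update costs O(N^4) per step, whereas propagating the Green's function (the state-transition matrix of the linearized dynamics) costs only one N x N matrix product, i.e. O(N^3), per step.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8                                   # number of units (illustrative size)
W = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))  # recurrent weights
h = 0.1 * rng.normal(size=N)            # hidden state
G = np.eye(N)                           # Green's function (state-transition
                                        # matrix) of the linearized dynamics

for t in range(20):
    h = np.tanh(W @ h)                  # forward step of the RNN
    J = (1.0 - h**2)[:, None] * W       # Jacobian dh_{t+1}/dh_t
    G = J @ G                           # O(N^3): one N x N matrix product
    # RTRL would instead update an N x N x N sensitivity tensor here,
    # at O(N^4) cost per time step.

# G now maps an initial-state perturbation to a perturbation at time t,
# which is the quantity needed to assemble gradients when an error
# correction is applied.
print(G.shape)
```

The weight update itself is deferred until an error is to be corrected, which is how the redundant per-step tensor bookkeeping of the sensitivity-matrix method is avoided.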
Keywords
recurrent neural network