Backpropagation through time learning for recurrence-aware long-term cognitive networks

Knowledge-Based Systems (2024)

Abstract
Fuzzy Cognitive Mapping (FCM) and the extensive family of models derived from it have firmly established their position in the landscape of machine learning algorithms. Specifically designed for pattern classification and multi-output regression, the recently introduced Recurrence-aware Long-term Cognitive Network (r-LTCN) model is one of these FCM-inspired extensions. On the one hand, this recurrent neural network connects all temporal states generated during the reasoning process with the decision-making layer. On the other hand, it uses a quasi-nonlinear reasoning rule devoted to avoiding convergence issues caused by unique fixed points, which typically emerge in other FCM models. In the original paper, the authors employed a combination of unsupervised and supervised learning to compute the r-LTCN's learnable parameters. Despite r-LTCNs' remarkable performance on a wide variety of pattern classification problems, the literature reports no attempt to train these recurrent neural systems in a fully supervised manner or to provide insights into their performance in other machine learning settings. This paper brings forward a modified Backpropagation Through Time (BPTT) learning algorithm devoted to training r-LTCN models used for multi-output regression tasks rather than pattern classification. The proposed BPTT algorithm includes a simple yet effective mechanism to deal with the vanishing gradient within the recurrent layer, which operates as a closed system, while being tailored to the quasi-nonlinear reasoning mechanism. An empirical evaluation of the proposed BPTT algorithm on 20 multi-output regression problems reveals that it produces lower prediction errors than other state-of-the-art learning approaches.
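To make the two ideas in the abstract concrete, the sketch below combines (a) a quasi-nonlinear reasoning rule, where each hidden state mixes a nonlinear recurrent update with the initial activation, and (b) an output layer fed by all temporal states, trained with a manual BPTT gradient. This is a minimal illustrative sketch, not the paper's actual algorithm: the specific vanishing-gradient mechanism of the proposed BPTT is not described in the abstract and is omitted here, and all function and variable names (`forward`, `bptt_grad_W`, `phi`, `W`, `V`) are assumptions for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W, V, phi, T):
    """Quasi-nonlinear reasoning sketch: h_t = phi*f(h_{t-1} W) + (1-phi)*x.
    All temporal states h_1..h_T feed the decision-making layer via V."""
    states = [x]
    h = x
    for _ in range(T):
        h = phi * sigmoid(h @ W) + (1.0 - phi) * x
        states.append(h)
    H = np.concatenate(states[1:], axis=1)  # stack every temporal state
    y = H @ V                               # multi-output regression layer
    return y, states

def bptt_grad_W(x, W, V, phi, T, y_true):
    """Gradient of the MSE loss w.r.t. the recurrent weights W,
    computed by plain backpropagation through time."""
    y, states = forward(x, W, V, phi, T)
    M = x.shape[1]
    dL_dy = 2.0 * (y - y_true) / y_true.size  # d(mean squared error)/dy
    dH = dL_dy @ V.T                          # gradient w.r.t. each state
    dW = np.zeros_like(W)
    grad_h = np.zeros((x.shape[0], M))        # gradient flowing backwards
    for t in range(T, 0, -1):
        grad_h = grad_h + dH[:, (t - 1) * M : t * M]  # direct output path
        s = sigmoid(states[t - 1] @ W)
        dz = grad_h * phi * s * (1.0 - s)     # through the nonlinearity
        dW += states[t - 1].T @ dz
        grad_h = dz @ W.T                     # propagate to previous state
    return dW
```

Because the output layer sees every state, each `h_t` receives gradient directly from the loss, which already mitigates part of the vanishing-gradient problem that a last-state-only readout would have.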
Keywords
Fuzzy cognitive maps, Recurrence-aware long-term cognitive network, Backpropagation through time