Learning in a Continuous-Valued Attractor Network

2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA)

Citations: 1 | Views: 3
Abstract
Learning a set of patterns in a content-addressable memory is an important aspect of many neurocomputational systems. Historically, this has often been done using Hebbian learning with attractor neural networks such as the standard discrete-valued Hopfield model. However, such systems are currently severely limited in terms of their memory capacity: as an increasing number of patterns are stored as fixed points, spurious attractors ("false memories") are increasingly created and compromise the network's functionality. Here we adopt a new method for identifying the fixed points (both stored and false memory patterns) learned by attractor networks in general, applying it to the special case of a continuous-valued analogue of the standard Hopfield model. We use computational experiments to show that this continuous-valued model functions as a content-addressable memory, characterizing its ability to learn training examples effectively and to store them at energy minima having substantial basins of attraction. We find that the new fixed point locator method not only identifies learned memories, but also many of the spurious attractors that occur. These results are a step towards systematically characterizing what is learned by attractor networks and may lead to more effective learning by allowing the use of techniques such as selective application of anti-Hebbian unlearning of spurious memories.
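As background for the setup the abstract describes, the sketch below shows Hebbian (outer-product) pattern storage and fixed-point recall in a continuous-valued Hopfield-style network. It is a minimal illustration under assumed choices (a tanh activation and a gain parameter beta), not the authors' specific model or their fixed-point locator method.

```python
# Minimal sketch of Hebbian storage and recall in a continuous-valued
# Hopfield-style attractor network. Illustrative only: the tanh activation
# and the gain `beta` are assumptions of this sketch, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 5                                  # neurons, stored patterns
patterns = rng.choice([-1.0, 1.0], size=(P, N))

# Hebbian (outer-product) learning rule with zero self-connections.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

def recall(cue, beta=4.0, steps=200):
    """Iterate v <- tanh(beta * W v) until it settles at a fixed point."""
    v = cue.copy()
    for _ in range(steps):
        v_new = np.tanh(beta * (W @ v))
        if np.allclose(v_new, v, atol=1e-8):
            break
        v = v_new
    return v

# Cue the network with a corrupted version of the first stored pattern.
flip = np.where(rng.random(N) < 0.1, -1.0, 1.0)  # flip ~10% of signs
v_star = recall(patterns[0] * flip)
overlap = np.dot(np.sign(v_star), patterns[0]) / N
print(f"overlap with stored pattern: {overlap:.2f}")  # near 1.0 => recalled
```

With few stored patterns relative to N, the iteration typically converges to the stored memory; as P grows, runs increasingly settle at spurious fixed points instead, which is the capacity limitation the paper addresses.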
Keywords
Hebbian learning, attractor neural networks, directional fibers