Using Weight Mirrors to Improve Feedback Alignment

arXiv: Learning (2019)

Abstract
Current algorithms for deep learning probably cannot run in the brain because they rely on weight transport, in which forward-path neurons transmit their synaptic weights to a feedback path, in a way that is likely impossible biologically. An algorithm called feedback alignment achieves deep learning without weight transport by using random feedback weights, but it performs poorly on hard visual-recognition tasks. Here we describe a neural circuit called a weight mirror, which lets the feedback path learn appropriate synaptic weights quickly and accurately even in large networks, without weight transport or complex wiring, and with a Hebbian learning rule. Tested on the ImageNet visual-recognition task, networks with weight mirrors outperform both plain feedback alignment and the newer sign-symmetry method, and nearly match the error-backpropagation algorithm, which uses weight transport.
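To make the contrast concrete, here is a minimal numpy sketch of feedback alignment on a toy regression problem. It is illustrative only and does not implement the paper's weight-mirror circuit: where backpropagation would propagate the output error through the transpose of the forward weights (`W2.T`), feedback alignment substitutes a fixed random feedback matrix `B`. All variable names and the toy task are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: learn a random linear map with a one-hidden-layer network.
n_in, n_hid, n_out = 10, 32, 5
X = rng.standard_normal((200, n_in))
T = X @ rng.standard_normal((n_in, n_out))  # regression targets

W1 = rng.standard_normal((n_in, n_hid)) * 0.1   # forward weights, layer 1
W2 = rng.standard_normal((n_hid, n_out)) * 0.1  # forward weights, layer 2
B = rng.standard_normal((n_out, n_hid)) * 0.1   # fixed random feedback weights

def loss(X, T, W1, W2):
    H = np.tanh(X @ W1)
    return float(np.mean((H @ W2 - T) ** 2))

lr = 0.01
initial = loss(X, T, W1, W2)
for _ in range(500):
    H = np.tanh(X @ W1)
    Y = H @ W2
    e = Y - T                       # output error
    dW2 = H.T @ e / len(X)          # same as backprop for the last layer
    # Feedback alignment: propagate the error through B, not W2.T.
    delta_h = (e @ B) * (1 - H**2)  # tanh'(x) = 1 - tanh(x)^2
    dW1 = X.T @ delta_h / len(X)
    W1 -= lr * dW1
    W2 -= lr * dW2
final = loss(X, T, W1, W2)
```

With exact backpropagation, `e @ B` would be `e @ W2.T`; the paper's weight mirrors can be read as a circuit for making the feedback path learn weights that play the role of `W2.T`, rather than leaving `B` random.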