A biologically inspired architecture with switching units can learn to generalize across backgrounds

bioRxiv (2023)

Abstract
Humans and other animals navigate different environments effortlessly, their brains rapidly and accurately generalizing across contexts. Despite recent progress in deep learning, this flexibility remains a challenge for many artificial systems. Here, we show how a bio-inspired network motif can explicitly address this issue. We do this using a dataset of MNIST digits of varying transparency, set on one of two backgrounds with different statistics that define two contexts: pixel-wise noise or a more naturalistic background drawn from the CIFAR-10 dataset. After learning digit classification with the two contexts presented sequentially, we find that both shallow and deep networks show sharply decreased performance when returning to the first background, an instance of the catastrophic forgetting phenomenon known from continual learning. To overcome this, we propose the bottleneck-switching network, or switching network for short. This bio-inspired architecture is analogous to a well-studied network motif in the visual cortex, with additional "switching" units that are activated in the presence of a new background, assuming an a priori contextual signal that turns these units on or off. Intriguingly, only a few of these switching units are sufficient to enable the network to learn the new context without catastrophic forgetting, through inhibition of redundant background features. Further, the bottleneck-switching network can generalize to novel contexts similar to the contexts it has learned. Importantly, we find that, again as in the underlying biological network motif, recurrently connecting the switching units to network layers is advantageous for context generalization.
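The core mechanism described in the abstract — a few context-gated "switching" units that inhibit redundant background features when a contextual signal is on — can be sketched roughly as follows. This is a minimal illustration under stated assumptions; the function and parameter names (`switching_layer`, `switch_gain`, `context_on`) are hypothetical and not the authors' implementation:

```python
import numpy as np

def switching_layer(x, W, b, switch_gain, context_on):
    """Minimal sketch of a hidden layer with context-driven switching units.

    x           : input feature vector
    W, b        : ordinary layer weights and bias
    switch_gain : per-unit inhibitory gains contributed by the switching
                  units (hypothetical parameterization)
    context_on  : the assumed a priori contextual signal (True in the
                  new-background context)
    """
    h = np.maximum(W @ x + b, 0.0)  # standard ReLU hidden activity
    if context_on:
        # Switching units multiplicatively inhibit (gate down) hidden
        # features that redundantly encode the background.
        h = h * (1.0 - np.clip(switch_gain, 0.0, 1.0))
    return h

# Toy usage: with the context signal off, activity passes through unchanged;
# with it on, strongly gated units are suppressed.
W = np.eye(3)
b = np.zeros(3)
x = np.array([1.0, 2.0, 3.0])
switch_gain = np.array([1.0, 0.0, 0.5])

h_off = switching_layer(x, W, b, switch_gain, context_on=False)  # [1, 2, 3]
h_on = switching_layer(x, W, b, switch_gain, context_on=True)    # [0, 2, 1.5]
```

In this sketch, the contextual signal acts as a binary switch, consistent with the abstract's assumption that the switching units are turned on or off a priori rather than inferred by the network.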
Keywords
Switching network, Bio-inspired, Context, Generalization, Continual learning, Domain adaptation