Few-Shot Continual Learning Based on Vector Symbolic Architectures

Frontiers in Artificial Intelligence and Applications (2023)

Abstract
Vector Symbolic Architecture (VSA) is a powerful computing model built on a rich algebra in which all representations, from atomic symbols to composite structures, are high-dimensional holographic distributed vectors of the same fixed dimensionality. VSA is mainly characterized by the following intriguing properties: (i) quasi-orthogonality of a randomly chosen vector to other random vectors with very high probability, a consequence of the concentration of measure; (ii) exponential growth in the number of such quasi-orthogonal vectors with the dimensionality, which provides sufficient capacity to accommodate novel concepts over time; (iii) the ability to compose, decompose, probe, and transform these vectors using a set of well-defined operations. Motivated by these properties, this chapter summarizes recently developed methodologies for integrating VSA with deep neural networks, which have enabled impactful applications to few-shot [1] and continual [2, 3] learning. VSA-based embeddings allow deep neural networks to learn quickly from few training samples by storing them in an explicit memory, in which ever more class categories can be continually expressed in the fixed-dimensional abstract vector space of VSA without causing interference among the learned classes. Experiments on various image datasets show that the considered neuro-symbolic AI approach outperforms pure deep neural network baselines in accuracy, scalability, and compute/memory efficiency.
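To make properties (i) and (iii) and the explicit-memory idea concrete, the following is a minimal sketch of a MAP-style bipolar VSA together with a toy few-shot class memory. It is an illustration under assumptions, not the method of the cited works: all names (random_hv, class_memory, noisy, etc.) are hypothetical, and the noisy function stands in for the deep-network embeddings that the actual systems use to map inputs into the vector space.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # dimensionality shared by all hypervectors

def random_hv(d=D):
    """Draw a random bipolar hypervector in {-1, +1}^d."""
    return rng.choice([-1, 1], size=d)

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# (i) quasi-orthogonality: independently drawn hypervectors are
# nearly orthogonal with high probability for large D
a, b = random_hv(), random_hv()
print(f"cos(a, b) = {cosine(a, b):+.3f}")  # close to 0

# (iii) well-defined operations, here in a MAP-style VSA:
bound = a * b            # binding: element-wise multiplication
recovered = bound * b    # unbinding: multiplication is self-inverse
print(f"cos(a, recovered) = {cosine(a, recovered):+.3f}")  # exactly +1

# Toy few-shot class memory: bundle the few support hypervectors of a
# class into one prototype; adding a new class just stores one more
# prototype, leaving previously learned classes untouched.
class_memory = {}

def add_class(name, support_hvs):
    # bundling (superposition): sign of the element-wise sum;
    # an odd number of bipolar vectors avoids zero-sum ties
    class_memory[name] = np.sign(np.sum(support_hvs, axis=0))

def classify(query_hv):
    return max(class_memory, key=lambda c: cosine(query_hv, class_memory[c]))

def noisy(hv, flip=0.2):
    """Hypothetical stand-in for an embedding network: a noisy class vector."""
    return np.where(rng.random(hv.shape) < flip, -hv, hv)

latent = {"cat": random_hv(), "dog": random_hv()}
for name, hv in latent.items():
    add_class(name, [noisy(hv) for _ in range(3)])  # 3-shot learning
print(classify(noisy(latent["cat"])))               # -> cat
```

In this sketch the continual-learning behavior follows directly from property (i): because fresh prototypes are quasi-orthogonal to the stored ones with high probability, enrolling a new class is a single memory write and does not degrade existing classes.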