From lazy to rich to exclusive task representations in neural networks and neural codes

Current Opinion in Neurobiology (2023)

Abstract
Neural circuits—both in the brain and in "artificial" neural network models—learn to solve a remarkable variety of tasks, and there is a great current opportunity to use neural networks as models for brain function. Key to this endeavor is the ability to characterize the representations formed by both artificial and biological brains. Here, we investigate this potential through the lens of recently developed theory that characterizes neural networks as "lazy" or "rich" depending on the approach they use to solve tasks: lazy networks solve tasks by making small changes in connectivity, while rich networks solve tasks by significantly modifying weights throughout the network (including "hidden layers"). We further elucidate rich networks through the lens of compression and "neural collapse", ideas that have recently been of significant interest to neuroscience and machine learning. We then show how these ideas apply to a domain of increasing importance to both fields: extracting latent structures through self-supervised learning.
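The lazy/rich distinction the abstract describes can be made concrete in a few lines of code. Below is a minimal, hypothetical sketch (not taken from the paper): a two-layer tanh network whose output is scaled by a factor alpha, with the loss rescaled by 1/alpha² as in lazy-training analyses (e.g., Chizat et al., 2019). Large alpha leaves the hidden-layer weights close to their initialization (the lazy regime, where the task is solved by small changes in connectivity), while small alpha forces substantial hidden-weight movement (the rich, feature-learning regime). The task, architecture, and hyperparameters here are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy task: 10-D Gaussian inputs, scalar target that
# depends on a single input direction.
X = rng.normal(size=(64, 10))
y = np.sin(X[:, :1])


def relative_weight_change(alpha, width=512, lr=0.05, steps=300):
    """Train f(x) = alpha * a @ tanh(W x) and report how far the
    hidden-layer weights W move, relative to their initialization.

    Following lazy-training analyses, the squared loss is rescaled by
    1/alpha**2 so the function-space dynamics stay comparable across
    alpha: large alpha -> lazy (W barely moves), small alpha -> rich.
    """
    W = rng.normal(size=(10, width)) / np.sqrt(10)
    a = rng.normal(size=(width, 1)) / np.sqrt(width)
    W0 = W.copy()
    n = len(X)
    for _ in range(steps):
        h = np.tanh(X @ W)            # hidden activations
        err = alpha * h @ a - y       # prediction residual
        # Gradient steps on the rescaled loss (1/alpha**2) * MSE/2;
        # the alpha factors combine to an effective step of lr/alpha.
        a -= (lr / alpha) * (h.T @ err) / n
        W -= (lr / alpha) * (X.T @ ((err @ a.T) * (1 - h**2))) / n
    return np.linalg.norm(W - W0) / np.linalg.norm(W0)


for alpha in (10.0, 0.1):
    print(f"alpha={alpha:5}: relative change in hidden weights = "
          f"{relative_weight_change(alpha):.4f}")
```

Run as written, the relative change in W should be orders of magnitude smaller for alpha = 10 than for alpha = 0.1, illustrating the regime distinction the abstract builds on; this is a sketch under the stated assumptions, not the paper's own analysis.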
Keywords
exclusive task representations, neural codes, neural networks