A General Framework for Interpretable Neural Learning based on Local Information-Theoretic Goal Functions
arXiv (2023)
Abstract
Despite the impressive performance of biological and artificial networks, an
intuitive understanding of how their local learning dynamics contribute to
network-level task solutions remains a challenge to this day. Efforts to bring
learning to a more local scale have indeed led to valuable insights; however, a
general constructive approach to describing local learning goals that is both
interpretable and adaptable across diverse tasks is still missing. We have
previously formulated a local information processing goal that is highly
adaptable and interpretable for a model neuron with compartmental structure.
Building on recent advances in Partial Information Decomposition (PID), we here
derive a corresponding parametric local learning rule, which allows us to
introduce 'infomorphic' neural networks. We demonstrate the versatility of
these networks in performing tasks from supervised, unsupervised and memory
learning. By leveraging the interpretable nature of the PID framework,
infomorphic networks represent a valuable tool for advancing our understanding
of the intricate structure of local learning.
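To give a concrete flavor of the information-theoretic quantities that PID decomposes, the following toy sketch (not from the paper; a standard illustration of synergy) computes mutual information for an XOR relationship, where neither input alone carries information about the output but the two together carry it fully — the kind of synergistic contribution PID makes explicit:

```python
from collections import Counter
from math import log2

# All four equiprobable states of two binary sources and their XOR target.
samples = [(x1, x2, x1 ^ x2) for x1 in (0, 1) for x2 in (0, 1)]
n = len(samples)

def mutual_information(xs, ys):
    """I(X;Y) = sum_{x,y} p(x,y) * log2( p(x,y) / (p(x) p(y)) ), in bits."""
    pxy = Counter(zip(xs, ys))
    px = Counter(xs)
    py = Counter(ys)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

x1 = [s[0] for s in samples]
x2 = [s[1] for s in samples]
y = [s[2] for s in samples]

print(mutual_information(x1, y))              # 0.0 bits: X1 alone says nothing
print(mutual_information(x2, y))              # 0.0 bits: X2 alone says nothing
print(mutual_information(list(zip(x1, x2)), y))  # 1.0 bit: jointly fully informative
```

Here the 1 bit of joint information is purely synergistic in the PID sense; the paper's infomorphic neurons are trained on parametric combinations of such decomposed information terms rather than on raw mutual information as in this sketch.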