Reconciling Shared versus Context-Specific Information in a Neural Network Model of Latent Causes
CoRR(2023)
Abstract
It has been proposed that, when processing a stream of events, humans divide
their experiences in terms of inferred latent causes (LCs) to support
context-dependent learning. However, when shared structure is present across
contexts, it is still unclear how the "splitting" of LCs and learning of shared
structure can be simultaneously achieved. Here, we present the Latent Cause
Network (LCNet), a neural network model of LC inference. Through learning, it
naturally stores structure that is shared across tasks in the network weights.
Additionally, it represents context-specific structure using a context module,
controlled by a Bayesian nonparametric inference algorithm, which assigns a
unique context vector for each inferred LC. Across three simulations, we found
that LCNet could 1) extract shared structure across LCs in a function learning
task while avoiding catastrophic interference, 2) capture human data on
curriculum effects in schema learning, and 3) infer the underlying event
structure when processing naturalistic videos of daily events. Overall, these
results demonstrate a computationally feasible approach to reconciling shared
structure and context-specific structure in a model of LCs that scales
from laboratory experiments to naturalistic settings.
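The sketch below is a minimal, hypothetical illustration of the idea described in the abstract, not the authors' implementation: shared structure lives in weights reused across all latent causes, while context-specific structure enters through a per-LC context vector chosen by a simplified CRP-style nonparametric rule. All names (LCNetSketch, crp_assign, alpha) and the scoring heuristic are assumptions for illustration only.

```python
# Illustrative sketch only; the actual LCNet architecture and inference
# algorithm are described in the paper, not reproduced here.
import torch
import torch.nn as nn

class LCNetSketch(nn.Module):
    """Shared MLP plus a bank of learnable context vectors, one per inferred LC."""

    def __init__(self, input_dim, context_dim, hidden_dim, output_dim, max_contexts=32):
        super().__init__()
        # Context vectors: one row per latent cause (capacity fixed here for simplicity).
        self.contexts = nn.Parameter(torch.randn(max_contexts, context_dim) * 0.1)
        # Shared weights that all latent causes reuse (the "shared structure").
        self.net = nn.Sequential(
            nn.Linear(input_dim + context_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, output_dim),
        )

    def forward(self, x, lc_index):
        # Concatenate the observation with the context vector of the inferred LC.
        ctx = self.contexts[lc_index].expand(x.shape[0], -1)
        return self.net(torch.cat([x, ctx], dim=-1))


def crp_assign(prediction_errors, counts, alpha=1.0):
    """Toy CRP-like rule (hypothetical): favor existing LCs that are frequent and
    fit well, or open a new LC when the concentration term alpha wins."""
    scores = [c / (e + 1e-6) for c, e in zip(counts, prediction_errors)]
    scores.append(alpha)  # score for instantiating a brand-new latent cause
    return max(range(len(scores)), key=lambda i: scores[i])


if __name__ == "__main__":
    model = LCNetSketch(input_dim=4, context_dim=3, hidden_dim=16, output_dim=2)
    x = torch.randn(5, 4)
    lc = crp_assign(prediction_errors=[0.2, 1.5], counts=[10, 3])
    y = model(x, lc_index=lc)
    print(lc, y.shape)
```

In this toy setup, catastrophic interference is mitigated because only the selected context vector is tied to a given context, while the shared layers accumulate structure common to all contexts; the paper's actual inference procedure is Bayesian nonparametric rather than the heuristic score used above.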