Explanatory models in neuroscience, Part 2: Functional intelligibility and the contravariance principle

Cognitive Systems Research (2024)

Abstract
Computational modeling plays an increasingly important role in neuroscience, highlighting the philosophical question of how computational models explain. In the particular case of neural network models, concerns have been raised about their intelligibility and about how these models relate (if at all) to what is found in the brain. We claim that what makes a system intelligible is an understanding of the dependencies between its behavior and the factors that are responsible for that behavior. In biology, many of these dependencies are naturally "top-down", as ethological imperatives interact with evolutionary and developmental constraints under natural selection to produce systems with capabilities and behaviors appropriate to their evolutionary needs. We describe how the optimization techniques used to construct neural network models capture some key aspects of these dependencies, and thus help explain why brain systems are as they are: when a challenging, ecologically relevant goal is shared by a neural network and the brain, it places constraints on the possible mechanisms exhibited in both kinds of systems. The presence and strength of these constraints explain why some outcomes are more likely than others. By combining two familiar modes of explanation, one based on bottom-up mechanistic description (whose relation to neural network models we address in a companion paper) and the other based on top-down constraints, these models have the potential to illuminate brain function.
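The constraint argument can be pictured with a toy optimization. The sketch below is a hypothetical NumPy illustration, not taken from the paper: it trains two small networks with different random initializations toward the same goal (XOR standing in for a "challenging, ecologically relevant" task) and shows that the shared goal tends to push both toward functionally equivalent solutions even though their low-level parameters differ.

```python
# Minimal sketch (illustrative only): a shared optimization goal constrains
# the behavior of differently initialized networks. XOR, 8 hidden units, and
# full-batch gradient descent are arbitrary choices for the demonstration.
import numpy as np

# Task: XOR, a stand-in for an ecologically relevant goal.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(seed, hidden=8, lr=1.0, steps=5000):
    """Fit a one-hidden-layer network to the task by gradient descent."""
    r = np.random.default_rng(seed)
    W1, b1 = r.normal(0, 1, (2, hidden)), np.zeros(hidden)
    W2, b2 = r.normal(0, 1, (hidden, 1)), np.zeros(1)
    for _ in range(steps):
        h = sigmoid(X @ W1 + b1)        # hidden layer activations
        out = sigmoid(h @ W2 + b2)      # network output
        err = out - y                   # prediction error (squared-error loss)
        # Backpropagate through the sigmoid nonlinearities.
        d_out = err * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(0)
        W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(0)
    return out

# Different initial conditions, same goal: both runs typically end up
# implementing XOR, i.e. the shared task constrains the viable solutions.
for seed in (1, 2):
    print(seed, np.round(train(seed).ravel(), 2))
```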
Keywords
Evolution, Contravariance, Intelligibility, Function, Optimization, No-miracles, Instrumentalism, Realism, Philosophy, Constraints, Evolutionary landscape, Models, Explanation, Evo-devo, Development, Learning, Deep learning, Abstraction