Explanatory models in neuroscience, Part 1: Taking mechanistic abstraction seriously

Cognitive Systems Research (2024)

Abstract
Despite the recent success of neural network models in mimicking animal performance on various tasks, critics worry that these models fail to illuminate brain function. We take it that a central approach to explanation in systems neuroscience is that of mechanistic modeling, where understanding the system requires us to characterize its parts, organization, and activities, and how those give rise to behaviors of interest. However, it remains controversial what it takes for a model to be mechanistic, and whether computational models such as neural networks qualify as explanatory on this approach.

We argue that certain kinds of neural network models are actually good examples of mechanistic models, when an appropriate notion of mechanistic mapping is deployed. Building on existing work on model-to-mechanism mapping (3M), we describe criteria delineating such a notion, which we call 3M++. These criteria require us, first, to identify an abstract level of description that is still detailed enough to be “runnable”, and then, to construct model-to-brain mappings using the same principles as those employed for brain-to-brain mapping across individuals.

Perhaps surprisingly, the abstractions required are just those already in use in experimental neuroscience and deployed in the construction of more familiar computational models – just as the principles of inter-brain mappings are very much in the spirit of those already employed in the collection and analysis of data across animals.

In a companion paper, we address the relationship between optimization and intelligibility, in the context of functional evolutionary explanations. Taken together, mechanistic interpretations of computational models and the dependencies between form and function illuminated by optimization processes can help us to understand why brain systems are built the way they are.
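The abstract's idea of constructing model-to-brain mappings on the same footing as brain-to-brain mappings is often operationalized in systems neuroscience via representational similarity analysis (RSA). The sketch below is an illustrative toy example under that assumption, not the authors' specific method; the model activations and neural responses are synthetic placeholders.

```python
# Illustrative sketch of a model-to-brain mapping via representational
# similarity analysis (RSA). Synthetic data stand in for real recordings.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

n_stimuli = 50
model_acts = rng.standard_normal((n_stimuli, 512))  # hypothetical network-layer activations
neural_resp = rng.standard_normal((n_stimuli, 96))  # hypothetical recorded neural responses

# Representational dissimilarity matrices (condensed upper triangles):
# one correlation-distance entry per stimulus pair.
model_rdm = pdist(model_acts, metric="correlation")
brain_rdm = pdist(neural_resp, metric="correlation")

# Rank-correlate the two representational geometries. Applying the same
# procedure to two animals' recordings yields the brain-to-brain mapping
# that serves as the benchmark for model-to-brain comparison.
rho, _ = spearmanr(model_rdm, brain_rdm)
print(f"model-brain RSA similarity: {rho:.3f}")
```

In practice, the inter-animal score obtained from the same procedure is typically used as a noise ceiling against which the model-to-brain score is judged, in the spirit of the cross-animal data analyses the abstract mentions.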
Keywords
Mechanism, Models, Explanation, Constraints, Similarity, Mapping, Abstraction, Functional abstraction, Neural networks, Computation, Philosophy, Vision, Constraint, Prediction, Transform, Levels of explanation, Mechanistic explanation, Neuroscience, Understanding