Developing Explainable Deep Model for Discovering Novel Control Mechanism of Neuro-Dynamics

IEEE Transactions on Medical Imaging (2024)

Abstract
The human brain is a complex system composed of many interacting components. A well-designed computational model, usually in the form of partial differential equations (PDEs), is vital to understanding the working mechanisms that explain its dynamic and self-organized behaviors. However, model formulation and parameters are often tuned empirically based on predefined domain-specific knowledge, which lags behind the emerging paradigm of discovering novel mechanisms from the unprecedented amount of spatiotemporal data. To address this limitation, we sought to link the power of deep neural networks with the physics principles of complex systems, allowing us to design explainable deep models that uncover the mechanistic role of how the human brain (the most sophisticated complex system) maintains controllable functions while interacting with external stimulation. In the spirit of optimal control, we present a unified framework for designing an explainable deep model that describes the dynamic behaviors of the underlying neurobiological processes, allowing us to understand the latent control mechanism at the system level. We have uncovered the pathophysiological mechanism of Alzheimer's disease in terms of the controllability of disease progression, where the dissected system-level understanding enables higher prediction accuracy for disease progression and better explainability of disease etiology than conventional (black-box) deep models.
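To make the abstract's framing concrete, the sketch below illustrates the general idea of network dynamics with a control input, not the paper's actual model: pathology spreading on a brain graph as a diffusion PDE du/dt = -β·Lu + v(t, u), where L is the graph Laplacian of a toy 4-node connectome. The values of β, the feedback control, and the adjacency matrix are illustrative assumptions; in the paper, the dynamics and control are learned from spatiotemporal imaging data by a deep model.

```python
import numpy as np

def graph_laplacian(A):
    """Combinatorial graph Laplacian L = D - A of an adjacency matrix A."""
    return np.diag(A.sum(axis=1)) - A

def simulate(A, u0, beta=0.5, control=None, dt=0.01, steps=100):
    """Forward-Euler integration of du/dt = -beta * L u + control(t, u).

    u holds a per-region pathology burden; `control` is an optional
    stand-in for the learned control input in an optimal-control framework.
    """
    L = graph_laplacian(A)
    u = u0.astype(float).copy()
    for t in range(steps):
        v = control(t, u) if control is not None else 0.0
        u = u + dt * (-beta * (L @ u) + v)
    return u

# Toy 4-region connectome (illustrative, not real connectivity data).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
u0 = np.array([1.0, 0.0, 0.0, 0.0])   # pathology seeded in region 0

# Pure diffusion redistributes but conserves the total burden (1^T L = 0),
# whereas a crude negative-feedback control suppresses it system-wide.
u_free = simulate(A, u0)
u_ctrl = simulate(A, u0, control=lambda t, u: -0.8 * u)
```

In this toy setting, "controllability of disease progression" corresponds to asking whether an input v can steer u toward a healthier state; the paper's contribution is to learn such dynamics and their control structure from data rather than fixing them by hand as done here.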
Keywords
Mathematical models, Complex systems, Brain modeling, Behavioral sciences, Optimal control, Biological system modeling, Spatiotemporal phenomena, Alzheimer's disease, graph neural networks, partial differential equations, Aβ-tau interaction, systems biology