Memory-efficient Learning for Large-scale Computational Imaging

arXiv (2019)

Cited by 39 | Viewed 17
Abstract
Computational imaging systems jointly design computation and hardware to retrieve information that is not accessible with standard imaging systems. Recently, critical aspects such as experimental design and image priors have been optimized through deep neural networks formed from the unrolled iterations of classical physics-based reconstructions (termed physics-based networks). However, for real-world large-scale systems, computing gradients via backpropagation restricts learning due to the memory limitations of graphics processing units. In this work, we propose a memory-efficient learning procedure that exploits the reversibility of the network's layers to enable data-driven design for large-scale computational imaging. We demonstrate our method's practicality on two large-scale systems: super-resolution optical microscopy and multi-channel magnetic resonance imaging.
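The central idea, recomputing each unrolled iteration's input from its output during the backward pass instead of caching it, can be sketched as below. This is a minimal illustration under stated assumptions, not the authors' implementation: the `layer` object and its `forward`/`inverse` methods are hypothetical and are assumed to invert each other exactly.

```python
# Minimal sketch of reversible-layer backpropagation in PyTorch.
# Assumption: `layer` is a hypothetical invertible module exposing
# forward(x) and inverse(y) such that inverse(forward(x)) == x.

import torch


class ReversibleLayerFn(torch.autograd.Function):
    """Backpropagate through an invertible layer without storing its input."""

    @staticmethod
    def forward(ctx, x, layer):
        ctx.layer = layer
        with torch.no_grad():
            y = layer.forward(x)      # one unrolled reconstruction iteration
        ctx.save_for_backward(y)      # keep only the output; the input is discarded
        return y

    @staticmethod
    def backward(ctx, grad_y):
        (y,) = ctx.saved_tensors
        layer = ctx.layer
        with torch.no_grad():
            x = layer.inverse(y)      # recover the discarded input from the output
        x.requires_grad_(True)
        with torch.enable_grad():
            y_recomputed = layer.forward(x)   # local recompute of this layer only
            y_recomputed.backward(grad_y)     # accumulates gradients into layer parameters
        return x.grad, None
```

Chaining `ReversibleLayerFn.apply(x, layer)` over all unrolled iterations keeps activation memory roughly constant in network depth, at the cost of one extra inverse and forward computation per layer during the backward pass.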
Keywords
Backpropagation, Image reconstruction, Computational modeling, Memory management, Magnetic resonance imaging, Inverse problems, Fourier ptychographic microscopy, Iterative optimization, Memory-efficient backpropagation, Physics-based learning, Unrolled networks