MicroDiffusion: Implicit Representation-Guided Diffusion for 3D Reconstruction from Limited 2D Microscopy Projections
CVPR 2024
Abstract
Volumetric optical microscopy using non-diffracting beams enables rapid
imaging of 3D volumes by projecting them axially to 2D images but lacks crucial
depth information. Addressing this, we introduce MicroDiffusion, a pioneering
tool facilitating high-quality, depth-resolved 3D volume reconstruction from
limited 2D projections. Whereas existing Implicit Neural Representation (INR)
models often yield incomplete outputs and Denoising Diffusion Probabilistic
Models (DDPM) excel at capturing fine details but lack global structure, our
method integrates INR's structural coherence with DDPM's fine-detail
enhancement capabilities. We
pretrain an INR model to transform 2D axially-projected images into a
preliminary 3D volume. This pretrained INR acts as a global prior guiding
DDPM's generative process through a linear interpolation between INR outputs
and noise inputs. This strategy enriches the diffusion process with structured
3D information, enhancing detail and reducing noise in localized 2D images. By
conditioning the diffusion model on the closest 2D projection, MicroDiffusion
substantially enhances fidelity in resulting 3D reconstructions, surpassing INR
and standard DDPM outputs with unparalleled image quality and structural
fidelity. Our code and dataset are available at
https://github.com/UCSC-VLAA/MicroDiffusion.
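The abstract's key mechanism, initializing the diffusion process from a linear interpolation between the pretrained INR's preliminary 3D volume and Gaussian noise rather than from pure noise, can be sketched as follows. This is a minimal NumPy illustration; the function name, the blending weight `alpha`, and the array shapes are illustrative assumptions, not the paper's exact formulation (see the linked repository for the actual implementation).

```python
import numpy as np


def inr_guided_init(inr_volume: np.ndarray, alpha: float = 0.5,
                    rng=None) -> np.ndarray:
    """Blend an INR-predicted volume with Gaussian noise.

    alpha=1.0 yields pure noise (a standard DDPM starting point);
    alpha=0.0 returns the INR prior unchanged. Intermediate values
    inject the INR's structured 3D information into the diffusion
    process, as described in the abstract.
    """
    rng = rng or np.random.default_rng(0)
    noise = rng.standard_normal(inr_volume.shape)
    return (1.0 - alpha) * inr_volume + alpha * noise


# Toy example: a small (depth, height, width) volume standing in for
# a hypothetical INR output.
volume = np.ones((8, 64, 64))
x_init = inr_guided_init(volume, alpha=0.3)
print(x_init.shape)  # (8, 64, 64)
```

The interpolated tensor would then be passed to the DDPM sampler in place of the usual pure-noise initialization, with the nearest 2D projection supplied as conditioning.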