Anatomically-Controllable Medical Image Generation with Segmentation-Guided Diffusion Models
CoRR (2024)
Abstract
Diffusion models have enabled remarkably high-quality medical image
generation, which can help mitigate the expenses of acquiring and annotating
new images by supplementing small or imbalanced datasets, along with other
applications. However, these models are hampered by the challenge of enforcing global
anatomical realism in generated images. To this end, we propose a diffusion
model for anatomically-controlled medical image generation. Our model follows a
multi-class anatomical segmentation mask at each sampling step and incorporates
a random mask ablation training algorithm to enable conditioning on a
selected combination of anatomical constraints while allowing flexibility in
other anatomical areas. This also improves the network's learning of anatomical
realism for the fully unconditional (unconstrained) generation case.
Comparative evaluation on breast MRI and abdominal/neck-to-pelvis CT datasets
demonstrates superior anatomical realism and input mask faithfulness over
state-of-the-art models. We also offer an accessible codebase and release a
dataset of generated paired breast MRIs. Our approach facilitates diverse
applications, including pre-registered image generation, counterfactual
scenarios, and others.
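The random mask ablation idea described above can be illustrated with a minimal sketch: during training, whole class channels of the one-hot segmentation mask are randomly zeroed out, so the network learns to follow any subset of anatomical constraints, including the empty set (the unconditional case). This is a hypothetical illustration of the concept, not the authors' released code; the function name `ablate_mask` and the drop probability are assumptions.

```python
import numpy as np

def ablate_mask(mask_onehot, p_drop=0.5, rng=None):
    """Randomly zero out entire class channels of a one-hot
    segmentation mask (shape: classes x H x W).

    Hypothetical sketch of random mask ablation: each anatomical
    class is independently dropped with probability p_drop, so the
    model sees every subset of constraints during training.
    """
    rng = np.random.default_rng() if rng is None else rng
    keep = rng.random(mask_onehot.shape[0]) >= p_drop  # one keep-flag per class
    return mask_onehot * keep[:, None, None]

# Toy example: a 3-class, 4x4 segmentation mask.
rng = np.random.default_rng(0)
mask = rng.integers(0, 2, size=(3, 4, 4)).astype(float)
ablated = ablate_mask(mask, p_drop=0.5, rng=rng)
```

In a training loop, the ablated mask would be concatenated to the noisy image (or injected via the conditioning pathway) at each denoising step, so sampling can later be steered by any chosen combination of anatomical classes.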