A 3-D Anatomy-Guided Self-Training Segmentation Framework for Unpaired Cross-Modality Medical Image Segmentation

IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES (2024)

Abstract
Unsupervised domain adaptation (UDA) methods have achieved promising performance in alleviating the domain shift between different imaging modalities. In this article, we propose a robust two-stage 3-D anatomy-guided self-training cross-modality segmentation (ASTCMSeg) framework based on UDA for unpaired cross-modality image segmentation, consisting of an anatomy-guided image translation stage and a self-training segmentation stage. In the translation stage, we first leverage the similarity distributions between patches to capture latent anatomical relationships and propose an anatomical relation consistency (ARC) constraint for preserving correct anatomical relationships. We then design a frequency-domain constraint that enforces the consistency of important frequency components during image translation. Finally, we integrate the ARC and frequency-domain constraints with contrastive learning for anatomy-guided image translation. In the segmentation stage, we propose a context-aware anisotropic mesh network for segmenting anisotropic volumes in the target domain. Meanwhile, we design a volumetric adaptive self-training method that dynamically selects appropriate pseudo-label thresholds to learn the abundant label information in unlabeled target volumes. The proposed method is validated on cross-modality brain structure, cardiac substructure, and abdominal multi-organ segmentation tasks. Experimental results show that it achieves state-of-the-art performance on all tasks and significantly outperforms other 2-D-based and 3-D-based UDA methods.
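The abstract only names the key components; as an illustration, the following is a minimal PyTorch sketch of the patch-similarity idea behind the ARC term, assuming encoder feature maps `feat_src` and `feat_trans` for a source slice and its translation. The function names, patch-sampling scheme, and KL formulation are assumptions for illustration, not the paper's exact definitions.

```python
# A minimal, illustrative sketch of a patch-relation (similarity-distribution)
# consistency loss in the spirit of ARC. Names and the KL formulation are assumed.
import torch
import torch.nn.functional as F

def patch_relation_logits(features, idx, tau=0.07):
    """Cosine-similarity logits between sampled patch features: row i holds
    patch i's relation to every other sampled patch."""
    flat = features.flatten(2).permute(0, 2, 1)                 # (B, H*W, C)
    patches = F.normalize(flat[:, idx, :], dim=-1)              # (B, P, C)
    return torch.bmm(patches, patches.transpose(1, 2)) / tau    # (B, P, P)

def anatomical_relation_consistency(feat_src, feat_trans, num_patches=256):
    """Encourage the translated image to keep the source image's patch-similarity
    distributions (a proxy for anatomical relationships) at the same locations."""
    b, c, h, w = feat_src.shape
    idx = torch.randperm(h * w, device=feat_src.device)[:num_patches]
    p_src = F.softmax(patch_relation_logits(feat_src, idx), dim=-1)
    log_p_trans = F.log_softmax(patch_relation_logits(feat_trans, idx), dim=-1)
    return F.kl_div(log_p_trans, p_src, reduction="batchmean")
```

The volumetric adaptive self-training step can likewise be pictured as choosing a per-volume confidence threshold rather than a fixed one. The helper below is an assumed rule (cap a base threshold by a confidence quantile of the current volume), not the authors' exact selection strategy.

```python
# Assumed sketch: per-volume adaptive pseudo-label threshold, so that
# low-confidence volumes still contribute enough pseudo-labels.
def adaptive_pseudo_label_threshold(probs, base=0.9, quantile=0.8):
    """probs: softmax output of shape (C, D, H, W) for one target volume."""
    conf = probs.max(dim=0).values.flatten()     # voxel-wise confidence
    k = max(int(quantile * conf.numel()), 1)
    q = conf.kthvalue(k).values                  # empirical quantile of confidence
    return torch.minimum(torch.as_tensor(base, device=probs.device, dtype=conf.dtype), q)
```

Voxels whose confidence exceeds the returned threshold would then be retained as pseudo-labels for the next self-training round.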
Keywords
3-D medical image segmentation, contrastive learning, cross-modality medical image segmentation, self-training, unsupervised domain adaptation (UDA)