Unsupervised Cross-Modality Adaptation via Dual Structural-Oriented Guidance for 3D Medical Image Segmentation

IEEE Transactions on Medical Imaging (2023)

Abstract
Deep convolutional neural networks (CNNs) have achieved impressive performance in medical image segmentation; however, their performance can degrade significantly when deployed to unseen data with heterogeneous characteristics. Unsupervised domain adaptation (UDA) is a promising solution to this problem. In this work, we present a novel UDA method, named dual adaptation-guiding network (DAG-Net), which incorporates two highly effective and complementary structural-oriented guidance mechanisms into training to collaboratively adapt a segmentation model from a labeled source domain to an unlabeled target domain. Specifically, our DAG-Net consists of two core modules: 1) Fourier-based contrastive style augmentation (FCSA), which implicitly guides the segmentation network to focus on learning modality-insensitive and structure-relevant features, and 2) residual space alignment (RSA), which provides explicit guidance to enhance the geometric continuity of the prediction in the target modality based on a 3D prior of inter-slice correlation. We have extensively evaluated our method on cardiac substructure and abdominal multi-organ segmentation for bidirectional cross-modality adaptation between MRI and CT images. Experimental results on these two tasks demonstrate that our DAG-Net greatly outperforms state-of-the-art UDA approaches for 3D medical image segmentation on unlabeled target images.
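The FCSA module builds on Fourier-based style transfer between modalities. The paper's exact augmentation is not reproduced here, but the underlying idea of swapping low-frequency amplitude spectra (which carry image "style") while keeping the source phase (which carries structure) can be sketched as follows; the function name and the `beta` parameter controlling the swapped region are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fourier_style_augment(source, target, beta=0.1):
    """Give `source` the low-frequency amplitude (style) of `target`
    while keeping the source phase (anatomy/structure).

    beta: fraction of the spectrum's shorter side whose centred
    low-frequency square is swapped (illustrative hyperparameter).
    """
    fs = np.fft.fft2(source)
    ft = np.fft.fft2(target)
    amp_s, pha_s = np.abs(fs), np.angle(fs)
    amp_t = np.abs(ft)

    # Shift the zero-frequency component to the centre so the
    # low-frequency region is a centred square.
    amp_s = np.fft.fftshift(amp_s)
    amp_t = np.fft.fftshift(amp_t)

    h, w = source.shape
    b = int(min(h, w) * beta)
    ch, cw = h // 2, w // 2
    # Replace the source's low-frequency amplitudes with the target's.
    amp_s[ch - b:ch + b, cw - b:cw + b] = amp_t[ch - b:ch + b, cw - b:cw + b]

    amp_s = np.fft.ifftshift(amp_s)
    # Recombine swapped amplitude with the original source phase.
    mixed = np.fft.ifft2(amp_s * np.exp(1j * pha_s))
    return np.real(mixed)
```

Training the segmentation network on such style-mixed images (paired with the unchanged source labels) encourages it to rely on structural content rather than modality-specific intensity statistics, which is the intuition behind modality-insensitive feature learning.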
Keywords
3D medical image segmentation, adaptation, cross-modality, structural-oriented