ST-GAN: A Swin Transformer-Based Generative Adversarial Network for Unsupervised Domain Adaptation of Cross-Modality Cardiac Segmentation.

Yifan Zhang, Yonghui Wang, Lisheng Xu, Yudong Yao, Wei Qian, Lin Qi

IEEE Journal of Biomedical and Health Informatics (2024)

Abstract
Unsupervised domain adaptation (UDA) methods have shown great potential in cross-modality medical image segmentation tasks, where target-domain labels are unavailable. However, the domain shift among different image modalities remains challenging, because conventional UDA methods are based on convolutional neural networks (CNNs), which, owing to their locality, tend to focus on image texture and cannot establish the global semantic relevance of features. This paper proposes a novel end-to-end Swin Transformer-based generative adversarial network (ST-GAN) for cross-modality cardiac segmentation. In the generator of ST-GAN, we utilize the local receptive fields of CNNs to capture spatial information and introduce the Swin Transformer to extract global semantic information, which enables the generator to better extract domain-invariant features in UDA tasks. In addition, we design a multi-scale feature fuser to sufficiently fuse the features acquired at different stages and improve the robustness of the UDA network. We extensively evaluated our method on two cross-modality cardiac segmentation tasks using the MS-CMR 2019 dataset and the M&Ms dataset. The results on both tasks demonstrate the effectiveness of ST-GAN compared with state-of-the-art cross-modality cardiac image segmentation methods.
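The abstract does not specify how the multi-scale feature fuser combines the stage outputs, so the following is only a minimal sketch of one plausible scheme: upsample each stage's feature map to the finest resolution and concatenate along the channel axis. The function names, the stage shapes, and the nearest-neighbor upsampling are all illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def upsample_nearest(feat, target_hw):
    """Nearest-neighbor upsampling of a (C, H, W) feature map (illustrative)."""
    c, h, w = feat.shape
    th, tw = target_hw
    rows = np.arange(th) * h // th  # map each target row to a source row
    cols = np.arange(tw) * w // tw  # map each target col to a source col
    return feat[:, rows[:, None], cols[None, :]]

def fuse_multiscale(features):
    """Upsample all stage features to the finest resolution and concatenate
    channels -- one simple form of multi-scale feature fusion (assumption)."""
    target_hw = max((f.shape[1], f.shape[2]) for f in features)
    upsampled = [upsample_nearest(f, target_hw) for f in features]
    return np.concatenate(upsampled, axis=0)

# Hypothetical backbone pyramid: channels double as resolution halves,
# mimicking a Swin-style hierarchy of stage outputs.
stages = [np.random.rand(c, s, s) for c, s in [(32, 64), (64, 32), (128, 16)]]
fused = fuse_multiscale(stages)
print(fused.shape)  # (224, 64, 64): channels summed, spatial size of finest stage
```

In a real implementation the fusion would typically use learned upsampling and convolutions rather than plain concatenation, but the shape bookkeeping above is the essential part.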
Keywords
Adversarial learning, cross-modality cardiac segmentation, multi-scale fusion, Swin Transformer, unsupervised domain adaptation