UCTNet: Uncertainty-guided CNN-Transformer hybrid networks for medical image segmentation

Pattern Recognition (2024)

Abstract
Transformers, built for establishing long-range dependencies, have been widely studied as a complement to convolutional neural networks (CNNs) in medical image segmentation. However, existing CNN-Transformer hybrid approaches simply pursue implicit feature fusion without considering the underlying functional overlap between the two. Medical images typically follow stable anatomical structures, so convolution alone can handle most segmentation targets. Without such differentiation, forcing a transformer to perform self-attention over all image patches introduces severe redundancy and hinders global feature extraction. In this paper, we propose a simple yet effective hybrid network named UCTNet, in which the transformer focuses only on establishing global dependencies for the CNN's unreliable regions, identified through uncertainty estimation. In this way, the CNN and transformer are explicitly fused to minimize functional overlap. More importantly, with fewer regions to handle, UCTNet converges better and learns more robust feature representations for hard examples. Extensive experiments on publicly available datasets demonstrate the superiority of UCTNet over state-of-the-art approaches, achieving Dice similarity coefficients of 89.44%, 92.91%, and 91.15% on Synapse, ACDC, and ISIC2018, respectively. Furthermore, this CNN-Transformer hybrid strategy is highly extendable to other frameworks without introducing additional computational burden. Code is available at https://github.com/innocence0206/UCTNet.
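The core idea — estimating which patches the CNN is uncertain about, then restricting self-attention to only those patches — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (see their repository for that); the entropy-based uncertainty measure, the threshold, and the single-head Q=K=V attention here are simplifying assumptions for illustration:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def uncertainty_mask(cnn_logits, threshold=0.5):
    """Flag unreliable patches via normalized entropy of the CNN's
    per-patch class distribution (an assumed uncertainty measure)."""
    p = softmax(cnn_logits)                        # (N, C) class probabilities
    entropy = -(p * np.log(p + 1e-8)).sum(-1)      # (N,) per-patch entropy
    entropy /= np.log(cnn_logits.shape[-1])        # normalize to [0, 1]
    return entropy > threshold                     # True = uncertain patch

def masked_self_attention(tokens, mask):
    """Single-head self-attention restricted to the uncertain tokens;
    reliable tokens pass through unchanged (Q = K = V for brevity)."""
    out = tokens.copy()
    idx = np.where(mask)[0]
    if idx.size == 0:                              # nothing uncertain: no-op
        return out
    x = tokens[idx]                                # (M, D) uncertain tokens only
    attn = softmax(x @ x.T / np.sqrt(x.shape[-1])) # (M, M) attention weights
    out[idx] = attn @ x                            # update only uncertain tokens
    return out
```

Because attention cost is quadratic in the number of tokens, attending over only the M uncertain patches instead of all N reduces the self-attention work by roughly (M/N)², which matches the paper's claim of adding global modeling without extra computational burden.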
Keywords
CNN-Transformer hybrid, Uncertainty, Functional overlap, Masked self-attention, Medical image segmentation