Densely Connected Swin-UNet for Multiscale Information Aggregation in Medical Image Segmentation

Ziyang Wang, Meiwen Su, Jian-Qing Zheng, Yang Liu

2023 IEEE International Conference on Image Processing (ICIP)

Abstract
Image semantic segmentation is a dense prediction task in computer vision that has been dominated by deep learning techniques in recent years. UNet, a symmetric encoder-decoder end-to-end Convolutional Neural Network (CNN) with skip connections, has shown promising performance. Aiming to process multiscale feature information efficiently, we propose a new Densely Connected Swin-UNet (DCS-UNet) with multiscale information aggregation for medical image segmentation. Firstly, inspired by the Swin Transformer, which models long-range dependencies via shifted-window self-attention, this work adopts fully ViT-based network blocks with a shifted-window approach, resulting in a purely self-attention-based U-shaped segmentation network. The relevant layers, including feature sampling and image tokenization, are redesigned in the ViT style. Secondly, a full-scale deep supervision scheme is developed to process the aggregated feature maps at various resolutions generated by different decoder levels. Thirdly, dense skip connections are proposed that allow semantic feature information to be thoroughly transferred from different encoder levels to lower-level decoders. Our proposed method is validated on a public benchmark MRI cardiac segmentation dataset, with comprehensive evaluation metrics showing competitive performance against other encoder-decoder variants. The code is available at https://github.com/ziyangwang007/VIT4UNet.
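The abstract describes three architectural ideas: shifted-window attention blocks, dense skip connections, and full-scale deep supervision. The sketch below is a minimal, hypothetical PyTorch illustration of the latter two ideas only, assuming a small four-level U-shaped network; plain convolution blocks stand in for the Swin-Transformer blocks, and all module names and channel widths are illustrative rather than taken from the paper (see the linked repository for the official implementation).

```python
# Minimal sketch (not the authors' implementation). Illustrates:
#  - dense skip connections: every decoder level receives features from ALL
#    encoder levels, resampled to its resolution, and
#  - full-scale deep supervision: each decoder level emits its own
#    segmentation map, which can be supervised against resized ground truth.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    # Stand-in for a Swin-style block: two 3x3 convs with ReLU.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )


class DenseSkipUNetSketch(nn.Module):
    def __init__(self, in_ch=1, n_classes=4, widths=(32, 64, 128, 256)):
        super().__init__()
        self.encoders = nn.ModuleList()
        ch = in_ch
        for w in widths:
            self.encoders.append(conv_block(ch, w))
            ch = w
        # Each decoder fuses features from every encoder level (sum(widths)
        # channels) plus widths[i + 1] channels coming up from below.
        self.decoders = nn.ModuleList()
        self.heads = nn.ModuleList()  # one segmentation head per decoder level
        for i in range(len(widths) - 2, -1, -1):
            self.decoders.append(conv_block(sum(widths) + widths[i + 1], widths[i]))
            self.heads.append(nn.Conv2d(widths[i], n_classes, 1))

    def forward(self, x):
        # Encoder path: downsample by 2 between levels.
        enc_feats = []
        for k, enc in enumerate(self.encoders):
            if k > 0:
                x = F.max_pool2d(x, 2)
            x = enc(x)
            enc_feats.append(x)

        # Decoder path with dense skips and a prediction at every scale.
        outputs = []
        up = enc_feats[-1]
        for d, (dec, head) in enumerate(zip(self.decoders, self.heads)):
            level = len(self.encoders) - 2 - d  # target encoder level
            target_hw = enc_feats[level].shape[-2:]
            # Gather ALL encoder features, resampled to this level's resolution.
            skips = [F.interpolate(f, size=target_hw, mode="bilinear",
                                   align_corners=False) for f in enc_feats]
            up = F.interpolate(up, size=target_hw, mode="bilinear",
                               align_corners=False)
            up = dec(torch.cat(skips + [up], dim=1))
            outputs.append(head(up))  # deep-supervision output at this scale
        return outputs  # coarse-to-fine list of segmentation logits


if __name__ == "__main__":
    model = DenseSkipUNetSketch()
    preds = model(torch.randn(2, 1, 128, 128))
    print([p.shape for p in preds])  # one prediction per decoder scale
```

In a full-scale deep supervision setup, each element of the returned list would receive its own loss term against a ground-truth mask resized to the matching resolution, with the finest-scale output used at inference time.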
Keywords
Semantic Segmentation, UNet, Vision Transformer