SemiSAM: Exploring SAM for Enhancing Semi-Supervised Medical Image Segmentation with Extremely Limited Annotations
CoRR (2023)
Abstract
Semi-supervised learning has attracted much attention because it depends far less on acquiring abundant annotations from experts than fully supervised methods do. This is especially important for medical image segmentation, which typically requires intensive pixel- or voxel-wise labeling by domain experts. Although semi-supervised methods improve performance by utilizing unlabeled data, a gap to fully supervised methods remains under extremely limited annotation scenarios. In this paper, we propose a simple yet efficient strategy for exploiting the Segment Anything Model (SAM) to enhance semi-supervised medical image segmentation. Concretely, the segmentation model trained with domain knowledge localizes the target and generates input prompts for SAM. The pseudo-labels generated by SAM are then utilized as additional supervision to assist the learning procedure of the semi-supervised framework. Experimental results demonstrate that SAM's assistance significantly enhances the performance of existing semi-supervised frameworks, especially when only one or a few labeled images are available.
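The pipeline described above — deriving a SAM prompt from the semi-supervised model's prediction, then using SAM's output mask as an extra supervision signal — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the centroid-based prompt extraction, the SAM stand-in, and the Dice-style auxiliary loss are all assumptions for the sake of a self-contained example.

```python
import numpy as np

def point_prompt_from_prob(prob, thr=0.5):
    """Derive a single foreground point prompt from the semi-supervised
    model's probability map (hypothetical helper: the paper only states
    that the model localizes the target and generates SAM input prompts)."""
    mask = prob > thr
    if not mask.any():
        return None  # no confident foreground -> skip SAM for this image
    ys, xs = np.nonzero(mask)
    # Centroid of the confident region, used here as the point prompt.
    return (int(ys.mean()), int(xs.mean()))

def sam_pseudo_label(shape, prompt, radius=2):
    """Stand-in for SAM's mask prediction: a small disk around the prompt.
    In the real pipeline this would be the mask SAM returns for the prompt."""
    h, w = shape
    yy, xx = np.mgrid[:h, :w]
    dist2 = (yy - prompt[0]) ** 2 + (xx - prompt[1]) ** 2
    return (dist2 <= radius ** 2).astype(np.float32)

def sam_supervision_loss(prob, pseudo, eps=1e-8):
    """Dice-style auxiliary loss term supervising the student's probability
    map with the SAM pseudo-label (assumed form of the extra supervision)."""
    inter = (prob * pseudo).sum()
    return 1.0 - 2.0 * inter / (prob.sum() + pseudo.sum() + eps)
```

In training, this auxiliary loss would be added (with some weight) to the semi-supervised framework's existing supervised and unsupervised loss terms for each unlabeled image.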