UNSUPERVISED ENSEMBLE DISTILLATION FOR MULTI-ORGAN SEGMENTATION

2022 IEEE INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING (IEEE ISBI 2022)

Abstract
Multi-organ segmentation is a fundamental task in medical image processing. This paper explores a novel privacy-friendly setting for training a multi-organ segmentation model, i.e., learning directly from multiple pre-trained single-organ segmentation models. We formulate this as a special unsupervised ensemble distillation problem. To solve it, a multi-teacher knowledge distillation framework is proposed, which leverages the soft labels predicted by pre-trained teacher models to train a student model, i.e., the multi-organ segmentation model. Since each teacher specializes in a different task, the pseudo labels for each organ come from the corresponding teacher, while those of the background region come from all teachers. To handle the mismatch in output dimensionality between the teacher models and the student model, an output transformation method is introduced. The entire learning process requires only access to the pre-trained models and a reasonable set of unlabeled target data, achieving a good balance between privacy protection and model performance. Experimental results on a widely adopted benchmark dataset demonstrate the promise of the proposed method.
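The fusion rule described above — organ channels taken from the matching teacher, background aggregated across all teachers — can be sketched as follows. This is a minimal NumPy illustration under assumed conventions (each teacher outputs a 2-channel background/organ probability map; the paper's exact transformation and fusion details may differ); `merge_teacher_outputs` is a hypothetical helper name, not from the paper.

```python
import numpy as np

def merge_teacher_outputs(teacher_probs):
    """Combine K single-organ teachers' soft outputs into a
    (K+1)-channel multi-organ pseudo-label map.

    teacher_probs: list of K arrays, each of shape (2, H, W), where
    channel 0 is background and channel 1 is that teacher's organ.
    Returns an array of shape (K+1, H, W) with channel 0 = background.

    NOTE: illustrative sketch only; the paper's output transformation
    may use a different aggregation rule.
    """
    K = len(teacher_probs)
    h, w = teacher_probs[0].shape[1:]
    merged = np.zeros((K + 1, h, w))
    # Background: pool the background probability over all teachers
    # (here, a simple average is assumed).
    merged[0] = np.mean([t[0] for t in teacher_probs], axis=0)
    # Organ k: taken only from the corresponding specialist teacher.
    for k, t in enumerate(teacher_probs, start=1):
        merged[k] = t[1]
    # Renormalize so the K+1 channels form a distribution per pixel.
    merged /= merged.sum(axis=0, keepdims=True)
    return merged
```

The renormalization step reflects the dimensionality mismatch the abstract mentions: each teacher's 2-way softmax must be mapped into the student's (K+1)-way output space before a distillation loss (e.g., cross-entropy against these soft pseudo labels) can be applied.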
Keywords
Semantic segmentation, multi-organ, knowledge distillation, unsupervised learning