Diversified and Personalized Multi-rater Medical Image Segmentation
CVPR 2024
Abstract
Annotation ambiguity, caused by inherent data uncertainties such as blurred
boundaries in medical scans and by differences in observer expertise and
preferences, has become a major obstacle to training deep learning-based
medical image segmentation models. To address this, the common practice is to
gather multiple annotations from different experts, leading to the setting of
multi-rater medical image segmentation. Existing works either merge the
different annotations into a single "ground truth", which is often unattainable
in many medical contexts, generate diverse results, or produce personalized
results corresponding to individual expert raters. Here, we pursue a more
ambitious goal for multi-rater medical image segmentation: obtaining both
diversified and personalized results. Specifically, we propose a two-stage
framework named D-Persona (first Diversification and then Personalization). In
Stage I, we exploit the multiple given annotations to train a Probabilistic
U-Net model with a bound-constrained loss that improves prediction diversity.
In this way, Stage I constructs a common latent space in which different latent
codes denote diversified expert opinions. Then, in Stage II, we design multiple
attention-based projection heads that adaptively query the corresponding expert
prompts from the shared latent space and then perform personalized medical
image segmentation. We evaluated the proposed model on our in-house
Nasopharyngeal Carcinoma dataset and the public lung nodule dataset (i.e.,
LIDC-IDRI). Extensive experiments demonstrate that D-Persona can provide
diversified and personalized results at the same time, achieving new
state-of-the-art (SOTA) performance for multi-rater medical image segmentation.
Our code will be released at https://github.com/ycwu1997/D-Persona.
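To make the two-stage pipeline concrete, below is a minimal PyTorch sketch of the ideas the abstract describes, not the authors' released implementation. The network sizes, the reading of the "bound-constrained" loss as supervising the pixel-wise min/max over sampled predictions with the intersection/union of the rater masks, and the use of `nn.MultiheadAttention` for the per-rater projection heads are all illustrative assumptions.

```python
# Hedged sketch of the D-Persona two-stage idea; all sizes and the exact loss
# formulation are assumptions for illustration, not the official code.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT = 8  # assumed latent-code dimensionality

class PriorNet(nn.Module):
    """Stage I: image-conditioned Gaussian over the shared latent space."""
    def __init__(self, in_ch=1):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, 2, 1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.mu = nn.Linear(32, LATENT)
        self.logvar = nn.Linear(32, LATENT)

    def forward(self, x):
        h = self.enc(x)
        return self.mu(h), self.logvar(h)

class CondSegNet(nn.Module):
    """Toy stand-in for a U-Net decoder conditioned on a latent code z."""
    def __init__(self, in_ch=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch + LATENT, 16, 3, 1, 1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, 1, 1))

    def forward(self, x, z):
        # Broadcast z over the spatial grid and concatenate with the image.
        zmap = z[:, :, None, None].expand(-1, -1, x.size(2), x.size(3))
        return self.body(torch.cat([x, zmap], dim=1))  # logits

def sample_z(mu, logvar):
    """Reparameterized sample from the image-conditioned Gaussian prior."""
    return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

def bound_loss(logits_list, masks):
    """One plausible reading of the bound-constrained diversity loss: the
    pixel-wise min/max over K sampled predictions should match the
    intersection/union of the rater masks (assumption, for illustration)."""
    probs = torch.sigmoid(torch.stack(logits_list))       # (K, B, 1, H, W)
    lo, hi = probs.min(0).values, probs.max(0).values     # (B, 1, H, W)
    inter = masks.min(1, keepdim=True).values             # rater intersection
    union = masks.max(1, keepdim=True).values             # rater union
    return (F.binary_cross_entropy(lo, inter) +
            F.binary_cross_entropy(hi, union))

class RaterHeads(nn.Module):
    """Stage II: one learnable query per rater attends over sampled latent
    codes to pick that rater's 'expert prompt' from the shared space."""
    def __init__(self, n_raters=4):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(n_raters, LATENT))
        self.attn = nn.MultiheadAttention(LATENT, num_heads=1, batch_first=True)

    def forward(self, z_samples):                         # (B, K, LATENT)
        q = self.queries[None].expand(z_samples.size(0), -1, -1)
        z_pers, _ = self.attn(q, z_samples, z_samples)    # (B, n_raters, LATENT)
        return z_pers

if __name__ == "__main__":
    x = torch.randn(2, 1, 64, 64)                         # toy batch of images
    masks = torch.randint(0, 2, (2, 4, 64, 64)).float()   # 4 raters' masks
    prior, seg, heads = PriorNet(), CondSegNet(), RaterHeads(n_raters=4)

    mu, logvar = prior(x)
    zs = [sample_z(mu, logvar) for _ in range(5)]         # diversified codes
    loss_I = bound_loss([seg(x, z) for z in zs], masks)   # Stage I objective

    z_pers = heads(torch.stack(zs, dim=1))                # (B, 4, LATENT)
    pred_r0 = seg(x, z_pers[:, 0])                        # rater-0 prediction
    print(loss_I.item(), pred_r0.shape)
```

In this sketch, Stage I trains the prior and the conditional decoder so that different latent samples span the plausible annotations, while Stage II freezes that shared latent space and only learns per-rater queries, which is why personalized predictions remain consistent with the diversified ones.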