Fine-tuning a Multiple Instance Learning Feature Extractor with Masked Context Modelling and Knowledge Distillation
CoRR (2024)
Abstract
The first step in Multiple Instance Learning (MIL) algorithms for Whole Slide
Image (WSI) classification consists of tiling the input image into smaller
patches and computing a feature vector for each patch with a pre-trained feature
extractor model. Feature extractor models pre-trained with supervision on
ImageNet have proven to transfer well to this domain; however, this
pre-training task does not take into account that visual information in
neighboring patches is highly correlated. Based on this observation, we propose
to improve downstream MIL classification performance by fine-tuning the feature
extractor model using Masked Context Modelling with Knowledge Distillation. In
this task, the feature extractor model is fine-tuned by predicting masked
patches in a bigger context window. Since reconstructing the input image would
require a powerful image generation model, and our goal is not to generate
realistic-looking image patches, we instead predict the feature vectors
produced by a larger teacher network. A single epoch of the proposed task
suffices to increase the downstream performance of the feature extractor model
when used in a MIL scenario, even outperforming the teacher model while being
considerably smaller and requiring a fraction of its compute.
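
The abstract describes the fine-tuning objective only at a high level. Below is a minimal PyTorch sketch of one way such a masked-context distillation loss could be wired together, assuming a small student feature extractor, a frozen larger teacher, a lightweight transformer over the patch context window, and an MSE loss on the masked positions. All module and variable names (e.g. MaskedContextDistiller, context_encoder) are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedContextDistiller(nn.Module):
    """Sketch: predict teacher feature vectors of masked patches from the
    visible patches in a context window, using student patch embeddings.
    Positional embeddings for the context window are omitted for brevity."""

    def __init__(self, student, teacher, student_dim, teacher_dim, num_heads=8):
        super().__init__()
        self.student = student            # small feature extractor being fine-tuned
        self.teacher = teacher.eval()     # larger feature extractor, kept frozen
        for p in self.teacher.parameters():
            p.requires_grad = False
        # learnable token standing in for each masked patch position
        self.mask_token = nn.Parameter(torch.zeros(1, 1, student_dim))
        # light transformer over the context window of patch embeddings
        layer = nn.TransformerEncoderLayer(student_dim, num_heads, batch_first=True)
        self.context_encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.to_teacher_dim = nn.Linear(student_dim, teacher_dim)

    def forward(self, patches, mask):
        # patches: (B, N, C, H, W) -- N neighboring patches from one context window
        # mask:    (B, N) boolean, True where the patch is masked out
        B, N = mask.shape
        flat = patches.flatten(0, 1)                        # (B*N, C, H, W)
        student_feats = self.student(flat).view(B, N, -1)   # (B, N, D_student)
        # replace masked positions with the mask token
        tokens = torch.where(mask.unsqueeze(-1),
                             self.mask_token.expand(B, N, -1),
                             student_feats)
        context = self.context_encoder(tokens)              # (B, N, D_student)
        pred = self.to_teacher_dim(context[mask])            # predictions at masked slots
        with torch.no_grad():
            target = self.teacher(flat).view(B, N, -1)[mask]  # teacher features at masked slots
        return F.mse_loss(pred, target)
```

In use, each training example would presumably be a grid of neighboring patches from one WSI region with a random subset masked; after the (single-epoch) fine-tuning described above, only the student would be kept as the feature extractor for the downstream MIL classifier.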