PairAug: What Can Augmented Image-Text Pairs Do for Radiology?
CVPR 2024
Abstract
Current vision-language pre-training (VLP) methodologies predominantly depend
on paired image-text datasets, a resource that is challenging to acquire in
radiology due to privacy considerations and labelling complexities. Data
augmentation provides a practical solution to overcome the issue of data
scarcity; however, most augmentation methods exhibit a limited focus,
prioritising either image or text augmentation exclusively. Acknowledging this
limitation, our objective is to devise a framework capable of concurrently
augmenting medical image and text data. We design a Pairwise Augmentation
(PairAug) approach that contains an Inter-patient Augmentation (InterAug)
branch and an Intra-patient Augmentation (IntraAug) branch. Specifically, the
InterAug branch of our approach generates radiology images using synthesised
yet plausible reports derived from a Large Language Model (LLM). The generated
pairs can be considered a collection of new patient cases since they are
artificially created and may not exist in the original dataset. In contrast,
the IntraAug branch uses newly generated reports to manipulate images. This
process allows us to create new paired data for each individual with diverse
medical conditions. Our extensive experiments on various downstream tasks
covering medical image classification (zero-shot and fine-tuning analysis)
demonstrate that our PairAug, concurrently expanding both image and text data,
substantially outperforms image-/text-only expansion baselines and advanced
medical VLP baselines. Our code is released at .