Pretraining Vision-Language Model for Difference Visual Question Answering in Longitudinal Chest X-rays
CoRR (2024)
Abstract
Difference visual question answering (diff-VQA) is a challenging task that
requires answering complex questions based on differences between a pair of
images. This task is particularly important in reading chest X-ray images
because radiologists often compare multiple images of the same patient taken at
different times to track disease progression and changes in its severity in
their clinical practice. However, previous works focused on designing specific
network architectures for the diff-VQA task, missing opportunities to enhance
the model's performance using a pretrained vision-language model (VLM). Here,
we introduce a novel VLM called PLURAL, which is pretrained on natural and
longitudinal chest X-ray data for the diff-VQA task. The model is developed
using a step-by-step approach, starting with being pretrained on natural images
and texts, followed by being trained using longitudinal chest X-ray data. The
longitudinal data consist of pairs of X-ray images, along with question-answer
sets and radiologist's reports that describe the changes in lung abnormalities
and diseases over time. Our experimental results show that the PLURAL model
outperforms state-of-the-art methods not only in diff-VQA for longitudinal
X-rays but also in conventional VQA for a single X-ray image. Through extensive
experiments, we demonstrate the effectiveness of the proposed VLM architecture
and pretraining method in improving the model's performance.
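As context for the task described above, a diff-VQA training example pairs two studies of the same patient with a change-oriented question and answer. The sketch below illustrates one plausible way to structure such a sample; the field names and example strings are hypothetical and are not taken from the paper.

```python
from dataclasses import dataclass


@dataclass
class DiffVQASample:
    """One longitudinal diff-VQA example (hypothetical schema, not from the paper)."""
    main_image: str       # path to the current chest X-ray
    reference_image: str  # path to the earlier study of the same patient
    question: str         # question about changes between the two studies
    answer: str           # ground-truth answer describing the change


# Illustrative example with made-up values
sample = DiffVQASample(
    main_image="patient01_2021.png",
    reference_image="patient01_2020.png",
    question="What has changed compared to the previous study?",
    answer="The pleural effusion has decreased.",
)
print(sample.question)
```

A model for this task consumes both images plus the question and generates the answer text; conventional single-image VQA is the special case with no reference image.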