Beyond Image-Text Matching: Verb Understanding in Multimodal Transformers Using Guided Masking
CoRR (2024)
Abstract
The dominant probing approaches rely on the zero-shot performance of
image-text matching tasks to gain a finer-grained understanding of the
representations learned by recent multimodal image-language transformer models.
The evaluation is carried out on carefully curated datasets focusing on
counting, relations, attributes, and other phenomena. This work introduces an
alternative probing strategy called guided masking. The proposed approach
ablates different modalities using masking and assesses the model's ability to
predict the masked word with high accuracy. We focus on multimodal
models that take region-of-interest (ROI) features obtained from object
detectors as input tokens. We probe the understanding of verbs using guided
masking on ViLBERT, LXMERT, UNITER, and VisualBERT and show that these models
can predict the correct verb with high accuracy. This contrasts with previous
conclusions drawn from image-text matching probing techniques, which frequently
fail in situations requiring verb understanding. The code for all experiments
will be publicly available at https://github.com/ivana-13/guided_masking.
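As a rough illustration of the guided-masking setup, the sketch below masks the verb in a caption and asks a pretrained VisualBERT checkpoint to fill it in. It is a minimal sketch, not the authors' pipeline: the checkpoint name, the caption, and the random stand-in ROI features (the paper uses object-detector features) are assumptions for illustration only.

```python
import torch
from transformers import BertTokenizerFast, VisualBertForPreTraining

# Stand-in ROI features: the paper feeds Faster R-CNN region features here.
# The feature dimension must match the checkpoint's visual_embedding_dim
# (assumed 2048 for this illustrative checkpoint).
NUM_ROIS, VISUAL_DIM = 36, 2048

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = VisualBertForPreTraining.from_pretrained("uclanlp/visualbert-vqa-coco-pre")
model.eval()

caption = "A man is riding a horse."
verb = "riding"

# Guided masking: replace only the verb with [MASK], leave the rest intact.
masked_caption = " ".join(
    tokenizer.mask_token if w == verb else w for w in caption.split()
)
inputs = tokenizer(masked_caption, return_tensors="pt")

visual_embeds = torch.randn(1, NUM_ROIS, VISUAL_DIM)                 # stand-in ROI features
visual_token_type_ids = torch.ones((1, NUM_ROIS), dtype=torch.long)
visual_attention_mask = torch.ones((1, NUM_ROIS), dtype=torch.long)

with torch.no_grad():
    out = model(
        **inputs,
        visual_embeds=visual_embeds,
        visual_token_type_ids=visual_token_type_ids,
        visual_attention_mask=visual_attention_mask,
    )

# Read off the prediction at the masked position and compare it with the verb.
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
pred_id = out.prediction_logits[0, mask_pos].argmax(dim=-1)
print("predicted:", tokenizer.decode(pred_id), "| target:", verb)
```

In the paper's probing protocol, accuracy is then computed over a dataset of image-caption pairs by checking how often the top prediction at the masked position matches the ground-truth verb, optionally after ablating the visual modality to isolate its contribution.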