GenEARL: A Training-Free Generative Framework for Multimodal Event Argument Role Labeling
arXiv (2024)

Abstract
Multimodal event argument role labeling (EARL), a task that assigns a role
to each event participant (object) in an image, is a complex challenge. It
requires reasoning over the entire image, the depicted event, and the
interactions between the various objects participating in the event. Existing
models rely heavily on high-quality event-annotated training data to understand
event semantics and structures, and they fail to generalize to new event
types and domains. In this paper, we propose GenEARL, a training-free
generative framework that harnesses the power of modern generative models to
understand event task descriptions given image contexts and perform the EARL
task. Specifically, GenEARL comprises two stages of generative prompting with a
frozen vision-language model (VLM) and a frozen large language model (LLM).
First, the generative VLM learns the semantics of the event argument roles and
generates event-centric object descriptions based on the image. Subsequently, an
LLM is prompted with the generated object descriptions and a predefined
template for EARL (i.e., assigning an object an event argument role). We show
that GenEARL outperforms the contrastive pretraining (CLIP) baseline by 9.4%
and 14.2% accuracy for zero-shot EARL on the M2E2 and SwiG datasets,
respectively. In addition, we outperform CLIP-Event by 22% precision on the M2E2
dataset. The framework also allows flexible adaptation and generalization to
unseen domains.
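The two-stage pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `vlm_describe_object` and `llm_assign_role` functions are hypothetical stand-ins for calls to a frozen VLM and a frozen LLM, and the prompt template is an assumption.

```python
# Hypothetical sketch of GenEARL's training-free two-stage prompting.
# The model calls below are placeholders (assumptions), not real APIs.

def vlm_describe_object(image_path, obj, event_type):
    """Stage 1 (stand-in): a frozen generative VLM produces an
    event-centric description of one object in the image."""
    # A real implementation would prompt a frozen VLM with the image.
    return f"The {obj} in {image_path} participates in the {event_type} event."

def llm_assign_role(event_type, roles, description):
    """Stage 2 (stand-in): a frozen LLM maps an object description to an
    argument role via a predefined EARL prompt template."""
    prompt = (
        f"Event type: {event_type}\n"
        f"Candidate argument roles: {', '.join(roles)}\n"
        f"Object description: {description}\n"
        f"Answer with the single best role."
    )
    # A real implementation would query a frozen LLM with `prompt`;
    # here we return the first candidate role as a placeholder.
    return roles[0]

def genearl(image_path, objects, event_type, roles):
    """Training-free EARL: describe each object, then label its role."""
    labels = {}
    for obj in objects:
        desc = vlm_describe_object(image_path, obj, event_type)
        labels[obj] = llm_assign_role(event_type, roles, desc)
    return labels

result = genearl("protest.jpg", ["person", "sign"], "Protest",
                 ["Protester", "Instrument", "Place"])
```

Because both models stay frozen, adapting to a new event type only requires editing the role list and template, which is what enables the zero-shot generalization the abstract claims.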