Davidsonian Scene Graph: Improving Reliability in Fine-grained Evaluation for Text-to-Image Generation
CoRR (2023)
Abstract
Evaluating text-to-image models is notoriously difficult. A strong recent
approach for assessing text-image faithfulness is based on QG/A (question
generation and answering), which uses pre-trained foundational models to
automatically generate a set of questions and answers from the prompt, and
output images are scored based on whether the answers extracted by a visual
question answering (VQA) model are consistent with the prompt-based answers. This
kind of evaluation is naturally dependent on the quality of the underlying QG
and QA models. We identify and address several reliability challenges in
existing QG/A work: (a) QG questions should respect the prompt (avoiding
hallucinations, duplications, and omissions) and (b) VQA answers should be
consistent (not asserting that there is no motorcycle in an image while also
claiming the motorcycle is blue). We address these issues with Davidsonian
Scene Graph (DSG), an empirically grounded evaluation framework inspired by
formal semantics, which is adaptable to any QG/A framework. DSG produces
atomic and unique questions organized in dependency graphs, which (i) ensure
appropriate semantic coverage and (ii) sidestep inconsistent answers. With
extensive experimentation and human evaluation on a range of model
configurations (LLM, VQA, and T2I), we empirically demonstrate that DSG
addresses the challenges noted above. Finally, we present DSG-1k, an
open-sourced evaluation benchmark that includes 1,060 prompts, covering a wide
range of fine-grained semantic categories with a balanced distribution. We
release the DSG-1k prompts and the corresponding DSG questions.
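The dependency-graph idea above can be illustrated with a minimal sketch: answer parent questions first, and only ask a child question when its parent was answered affirmatively, so the evaluator never asserts contradictory answers (e.g. "there is no motorcycle" alongside "the motorcycle is blue"). The question texts, graph structure, and the `vqa` stub here are illustrative assumptions, not the paper's actual data or models.

```python
# Hypothetical sketch of dependency-aware question answering in the spirit
# of DSG. A real system would plug in an actual VQA model for `vqa`.

def score_image(questions, children, vqa):
    """Answer root questions first; ask a child question only when its
    parent question was answered "yes", otherwise count the child as
    failed without asking it. Returns the fraction of "yes" answers."""
    answers = {}

    def visit(qid):
        answers[qid] = vqa(questions[qid])
        for child in children.get(qid, []):
            if answers[qid]:           # dependency satisfied: ask the child
                visit(child)
            else:                      # skip the child; mark it failed
                answers[child] = False

    # Roots are questions that appear in no child list.
    for qid in questions:
        if all(qid not in c for c in children.values()):
            visit(qid)
    return sum(answers.values()) / len(answers)

# Example: "is there a motorcycle?" gates "is the motorcycle blue?"
questions = {1: "is there a motorcycle?", 2: "is the motorcycle blue?"}
children = {1: [2]}
fake_vqa = lambda q: q == "is there a motorcycle?"  # stand-in VQA oracle
print(score_image(questions, children, fake_vqa))   # 0.5
```

If the parent answer had been "no", the child would be marked failed without ever being posed to the VQA model, which is how inconsistent answer pairs are sidestepped.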
Keywords
davidsonian scene graph, generation, fine-grained, text-to-image