DeiSAM: Segment Anything with Deictic Prompting
CoRR (2024)
Abstract
Large-scale, pre-trained neural networks have demonstrated strong
capabilities in various tasks, including zero-shot image segmentation. To
identify concrete objects in complex scenes, humans instinctively rely on
deictic descriptions in natural language, i.e., referring to something
depending on the context, such as "The object that is on the desk and behind the
cup." However, deep learning approaches cannot reliably interpret such deictic
representations due to their lack of reasoning capabilities in complex
scenarios. To remedy this issue, we propose DeiSAM – a combination of large
pre-trained neural networks with differentiable logic reasoners – for deictic
promptable segmentation. Given a complex, textual segmentation description,
DeiSAM leverages Large Language Models (LLMs) to generate first-order logic
rules and performs differentiable forward reasoning on generated scene graphs.
Subsequently, DeiSAM segments objects by matching them to the logically
inferred image regions. As part of our evaluation, we propose the Deictic
Visual Genome (DeiVG) dataset, containing paired visual input and complex,
deictic textual prompts. Our empirical results demonstrate that DeiSAM is a
substantial improvement over purely data-driven baselines for deictic
promptable segmentation.
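To make the pipeline concrete, here is a minimal illustrative sketch (not the authors' implementation): a deictic prompt is turned into a first-order-style rule, which is then evaluated against a scene graph of (subject, relation, object) triples to pick out the target object. In DeiSAM itself, the rule would be generated by an LLM and evaluated by a differentiable forward reasoner before matching to image regions; all names below are hypothetical.

```python
# Scene graph as (subject, relation, object) triples; contents are made up.
scene_graph = {
    ("book", "on", "desk"),
    ("book", "behind", "cup"),
    ("lamp", "on", "desk"),
}

def target(x, graph):
    """Hand-written rule for the prompt
    'the object that is on the desk and behind the cup'."""
    return (x, "on", "desk") in graph and (x, "behind", "cup") in graph

# Enumerate candidate objects and keep those satisfying the rule.
objects = {s for (s, _, _) in scene_graph}
matches = [x for x in sorted(objects) if target(x, scene_graph)]
print(matches)  # ['book']
```

The matched object(s) would then be mapped back to their image regions and passed to a promptable segmenter; the differentiable reasoner in DeiSAM replaces the hard boolean test above with soft, gradient-friendly rule evaluation.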