Scene Graph Generation by Iterative Message Passing

2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)

Citations: 1301
Abstract
Understanding a visual scene goes beyond recognizing individual objects in isolation. Relationships between objects also constitute rich semantic information about the scene. In this work, we explicitly model the objects and their relationships using scene graphs, a visually-grounded graphical structure of an image. We propose a novel end-to-end model that generates such a structured scene representation from an input image. The model solves the scene graph inference problem using standard RNNs and learns to iteratively improve its predictions via message passing. Our joint inference model can take advantage of contextual cues to make better predictions on objects and their relationships. The experiments show that our model significantly outperforms previous methods for generating scene graphs on the Visual Genome dataset and for inferring support relations on the NYU Depth v2 dataset.
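The abstract (and the keywords below) describe a primal node graph / dual edge graph formulation in which object (node) and relationship (edge) states are refined jointly by exchanging messages over a few iterations. The following is a minimal illustrative sketch of such an iterative message-passing scheme with GRU cells; it is not the authors' implementation, and the module name, feature dimension, class counts, and simple mean-pooling of messages are all assumptions made for the example.

```python
# Illustrative sketch of iterative message passing between object (node) and
# relationship (edge) states. Feature sizes, class counts, and the mean-pooling
# of messages are simplifying assumptions, not the paper's exact design.
import torch
import torch.nn as nn


class IterativeMessagePassing(nn.Module):
    def __init__(self, feat_dim=512, num_obj_classes=151, num_rel_classes=51, num_iters=2):
        super().__init__()
        self.num_iters = num_iters
        # GRU cells hold hidden states for nodes (objects) and edges (relationships).
        self.node_gru = nn.GRUCell(feat_dim, feat_dim)
        self.edge_gru = nn.GRUCell(feat_dim, feat_dim)
        self.node_cls = nn.Linear(feat_dim, num_obj_classes)
        self.edge_cls = nn.Linear(feat_dim, num_rel_classes)

    def forward(self, node_feats, edge_feats, edge_index):
        # node_feats: (N, D) visual features of object proposals
        # edge_feats: (E, D) visual features of candidate relationships
        # edge_index: (E, 2) long tensor of (subject, object) node indices per edge
        h_node = self.node_gru(node_feats)   # initialize hidden states from visual features
        h_edge = self.edge_gru(edge_feats)
        subj, obj = edge_index[:, 0], edge_index[:, 1]

        for _ in range(self.num_iters):
            # Edge -> node messages: each node pools the states of its incident edges.
            msg_to_node = torch.zeros_like(h_node)
            deg = h_node.new_zeros(h_node.size(0), 1)
            msg_to_node.index_add_(0, subj, h_edge)
            msg_to_node.index_add_(0, obj, h_edge)
            deg.index_add_(0, subj, h_edge.new_ones(subj.size(0), 1))
            deg.index_add_(0, obj, h_edge.new_ones(obj.size(0), 1))
            msg_to_node = msg_to_node / deg.clamp(min=1)

            # Node -> edge messages: each edge pools its subject and object node states.
            msg_to_edge = 0.5 * (h_node[subj] + h_node[obj])

            # Refine hidden states with the pooled messages.
            h_node = self.node_gru(msg_to_node, h_node)
            h_edge = self.edge_gru(msg_to_edge, h_edge)

        # Per-node object class scores and per-edge predicate scores.
        return self.node_cls(h_node), self.edge_cls(h_edge)


# Example with random features: 5 object proposals, 8 candidate relationships.
model = IterativeMessagePassing()
obj_logits, rel_logits = model(torch.randn(5, 512), torch.randn(8, 512),
                               torch.randint(0, 5, (8, 2)))
```

In this simplified version, the pooled messages are plain averages; the paper's formulation additionally weights incoming messages, but the node/edge alternation shown here is the core of the iterative refinement.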
Keywords
Visual Genome dataset, NYU Depth v2 dataset, scene graph generation, dual edge graph, primal node graph, graph generation problem, structured scene representation, graphical structure, scene graphs, visual scene, iterative message passing