Answer Distillation For Visual Question Answering

Computer Vision - ACCV 2018, Part I (2018)

Abstract
Answering open-ended questions in Visual Question Answering (VQA) is a challenging task. Because the answers are entirely free-form, the answer space for open-ended questions is in theory infinite, which makes it difficult for algorithms to predict the correct answer. In this paper, we propose a method named answer distillation that reduces the scale of the answer space and confines the correct answer to a small set of candidates. Specifically, we design a two-stage architecture to answer a question: first, an answer distillation network distills the answers, converting an open-ended question into a multiple-choice one with a short list of answer candidates; then, the knowledge carried by these candidates guides the visual attention and refines the prediction. Extensive experiments validate the effectiveness of our answer distillation architecture. The results show that our method effectively compresses the answer space and improves accuracy on the open-ended task, achieving state-of-the-art performance on the COCO-VQA dataset.
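No code accompanies this abstract, so the following is a minimal PyTorch sketch of the two-stage idea as stated above: a distillation stage scores the full answer vocabulary and keeps only the top-k candidates, and a second stage embeds those candidates and uses them to guide attention over regional image features before scoring each candidate. The module names (AnswerDistiller, CandidateGuidedAttention), all dimensions, and the top-k cutoff are illustrative assumptions, not the authors' published implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AnswerDistiller(nn.Module):
    """Stage 1 (sketch): score the full answer vocabulary from a joint
    question-image feature, then keep only the top-k candidates."""
    def __init__(self, fused_dim, vocab_size, k=10):
        super().__init__()
        self.scorer = nn.Linear(fused_dim, vocab_size)
        self.k = k

    def forward(self, fused):                      # fused: (B, fused_dim)
        scores = self.scorer(fused)                # (B, vocab_size)
        top_scores, top_ids = scores.topk(self.k, dim=-1)
        return top_ids, top_scores                 # distilled candidate ids

class CandidateGuidedAttention(nn.Module):
    """Stage 2 (sketch): embed the distilled candidates and let each one
    attend over regional image features before the final prediction."""
    def __init__(self, vocab_size, emb_dim, img_dim):
        super().__init__()
        self.ans_emb = nn.Embedding(vocab_size, emb_dim)
        self.att = nn.Linear(emb_dim + img_dim, 1)
        self.cls = nn.Linear(img_dim + emb_dim, 1)

    def forward(self, img_feats, cand_ids):
        # img_feats: (B, R, img_dim) regional features; cand_ids: (B, K)
        cand = self.ans_emb(cand_ids)              # (B, K, emb_dim)
        B, R, D = img_feats.shape
        K = cand.size(1)
        # pair every candidate with every region and score the pairs
        c = cand.unsqueeze(2).expand(B, K, R, -1)
        v = img_feats.unsqueeze(1).expand(B, K, R, D)
        alpha = F.softmax(self.att(torch.cat([c, v], -1)).squeeze(-1), -1)
        attended = (alpha.unsqueeze(-1) * v).sum(2)  # (B, K, img_dim)
        # score each candidate given its attended visual evidence
        logits = self.cls(torch.cat([attended, cand], -1)).squeeze(-1)
        return logits                                # (B, K) over candidates

# usage sketch with assumed sizes
distiller = AnswerDistiller(fused_dim=1024, vocab_size=3000, k=10)
guided = CandidateGuidedAttention(vocab_size=3000, emb_dim=300, img_dim=2048)
fused = torch.randn(2, 1024)       # joint question-image embedding (assumed)
img = torch.randn(2, 36, 2048)     # 36 regional features per image (assumed)
cand_ids, _ = distiller(fused)
logits = guided(img, cand_ids)     # (2, 10): scores over the candidate set
```

Scoring only the K distilled candidates, rather than the full vocabulary, is what converts the open-ended question into a multiple-choice one in the sense the abstract describes.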
Keywords
Answer distillation, Visual question answering