Structured Multimodal Attentions for TextVQA

IEEE Transactions on Pattern Analysis and Machine Intelligence (2022)

Abstract
Text-based Visual Question Answering (TextVQA) is a recently introduced challenge that requires models to read text in images and answer natural-language questions by jointly reasoning over the question, the textual information, and the visual content. The introduction of this new modality, Optical Character Recognition (OCR) tokens, brings demanding reasoning requirements. Most state-of-the-art (SoTA) VQA methods fail when answering these questions for three reasons: (1) poor text-reading ability; (2) a lack of textual-visual reasoning capacity; and (3) the choice of a discriminative answering mechanism over a generative counterpart (although this has been further addressed by M4C). In this paper, we propose an end-to-end structured multimodal attention (SMA) neural network that mainly addresses the first two issues. SMA first uses a structural graph representation to encode the object-object, object-text and text-text relationships appearing in the image, and then designs a multimodal graph attention network to reason over it. Finally, the outputs of these modules are processed by a global-local attentional answering module that, following M4C, iteratively produces an answer by splicing together tokens from both the OCR results and a general vocabulary. Our proposed model outperforms all SoTA models on the TextVQA dataset and on two tasks of the ST-VQA dataset, except the pre-training-based TAP. Demonstrating strong reasoning ability, it also won first place in the TextVQA Challenge 2020. We extensively test different OCR methods on several reasoning models and investigate the impact of gradually improved OCR performance on the TextVQA benchmark. With better OCR results, all models show dramatic improvements in VQA accuracy, but our model benefits the most, owing to its strong textual-visual reasoning ability. To establish an upper bound for our method and to provide a fair testing base for future work, we also provide human-annotated ground-truth OCR annotations for the TextVQA dataset, which were not included in the original release. The code and ground-truth OCR annotations for the TextVQA dataset are available at https://github.com/ChenyuGAO-CS/SMA .
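To make the abstract's description of question-conditioned graph attention over object and OCR nodes concrete, the snippet below is a minimal, hypothetical sketch (not the authors' released implementation): the class name, feature dimensions, the single-head scoring, and the three relation heads ("oo", "ot", "tt" for object-object, object-text, text-text) are all illustrative assumptions layered on the idea stated in the abstract.

```python
# Hypothetical sketch of multimodal graph attention over object and OCR nodes,
# conditioned on the question embedding. Names and dimensions are assumptions,
# not the SMA paper's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultimodalGraphAttention(nn.Module):
    """Attend over a fully connected heterogeneous graph whose nodes are object
    regions and OCR tokens; the question embedding conditions the edge scores."""

    def __init__(self, node_dim: int = 768, question_dim: int = 768, hidden: int = 256):
        super().__init__()
        self.node_proj = nn.Linear(node_dim, hidden)
        self.ques_proj = nn.Linear(question_dim, hidden)
        # One scoring head per relation type: object-object, object-text, text-text.
        self.edge_score = nn.ModuleDict({
            rel: nn.Linear(hidden * 2, 1) for rel in ("oo", "ot", "tt")
        })

    def forward(self, obj_feats, ocr_feats, question):
        # obj_feats: (N_obj, node_dim); ocr_feats: (N_ocr, node_dim); question: (question_dim,)
        nodes = torch.cat([obj_feats, ocr_feats], dim=0)                  # (N, node_dim)
        h = torch.tanh(self.node_proj(nodes) + self.ques_proj(question))  # question-conditioned node states

        n_obj = obj_feats.size(0)
        is_obj = torch.zeros(h.size(0), dtype=torch.bool)
        is_obj[:n_obj] = True

        # Pairwise features for every (i, j) edge in the fully connected graph.
        N = h.size(0)
        hi = h.unsqueeze(1).expand(N, N, -1)
        hj = h.unsqueeze(0).expand(N, N, -1)
        pair = torch.cat([hi, hj], dim=-1)                                # (N, N, 2*hidden)

        # Score each edge with the head matching its relation type.
        scores = torch.empty(N, N)
        for rel, mask in (
            ("oo", is_obj[:, None] & is_obj[None, :]),
            ("tt", ~is_obj[:, None] & ~is_obj[None, :]),
            ("ot", is_obj[:, None] ^ is_obj[None, :]),
        ):
            scores[mask] = self.edge_score[rel](pair[mask]).squeeze(-1)

        attn = F.softmax(scores, dim=-1)   # each node attends over all its neighbours
        return attn @ h                    # updated node representations
```

In the full model described above, the updated node representations would then feed the global-local attentional answering module, which iteratively selects tokens from the OCR results and a general vocabulary; that decoding step is omitted here.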
Keywords
Humans; Algorithms; Neural Networks, Computer