Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers

2021 IEEE/CVF International Conference on Computer Vision (ICCV), 2021

Cited by 196 | Views 550
Abstract
Transformers are increasingly dominating multi-modal reasoning tasks, such as visual question answering, achieving state-of-the-art results thanks to their ability to contextualize information using the self-attention and co-attention mechanisms. These attention modules also play a role in other computer vision tasks including object detection and image segmentation. Unlike Transformers that only ...
Keywords
Measurement,Computer vision,Visualization,Image segmentation,Computational modeling,Computer architecture,Object detection
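The abstract notes that Transformers contextualize information through self-attention, and that the attention maps themselves are the raw material for explainability methods like the one this paper proposes. As background only, here is a minimal NumPy sketch of single-head scaled dot-product self-attention and the attention map it produces; the function and variable names are illustrative and do not come from the paper's code.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention (illustrative).

    X: (n_tokens, d_model); Wq/Wk/Wv: (d_model, d_head).
    Returns the contextualized tokens and the attention map A,
    whose rows are the per-token attention distributions that
    attention-based explainability methods aggregate.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = Q.shape[-1]
    A = softmax(Q @ K.T / np.sqrt(d))  # (n_tokens, n_tokens); each row sums to 1
    return A @ V, A

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                       # 4 tokens, model dim 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out, A = self_attention(X, Wq, Wk, Wv)
```

In the bi-modal setting the paper targets, co-attention layers follow the same pattern, except that queries come from one modality and keys/values from the other, so `A` relates tokens across modalities.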