Transformer Interpretability Beyond Attention Visualization

2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)

Cited by 571
Abstract
Self-attention techniques, and specifically Transformers, are dominating the field of text processing and are becoming increasingly popular in computer vision classification tasks. In order to visualize the parts of the image that led to a certain classification, existing methods either rely on the obtained attention maps or employ heuristic propagation along the attention graph. In this work, we ...
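The abstract mentions two existing families of visualization methods: reading off the obtained attention maps directly, or propagating heuristically along the attention graph. As a rough illustration of the latter family (an attention-rollout-style aggregation, not the method proposed in this paper), the sketch below combines per-layer attention into a patch-level relevance map; the array shapes, the residual weight, and all names are assumptions for illustration only.

```python
# Illustrative sketch (not the paper's method): "attention rollout"-style
# propagation of per-layer attention maps, a common baseline for visualizing
# which image patches influenced a ViT classification.
# Assumes `attentions` is a list of arrays shaped (heads, tokens, tokens),
# with token 0 being the [CLS] token.
import numpy as np

def attention_rollout(attentions, residual_weight=0.5):
    """Propagate attention through layers by multiplying the head-averaged,
    residual-adjusted attention matrix of each layer."""
    num_tokens = attentions[0].shape[-1]
    rollout = np.eye(num_tokens)
    for layer_attn in attentions:
        attn = layer_attn.mean(axis=0)                 # average over heads
        attn = residual_weight * attn + (1 - residual_weight) * np.eye(num_tokens)
        attn = attn / attn.sum(axis=-1, keepdims=True) # re-normalize rows
        rollout = attn @ rollout                       # propagate through the layer
    # Relevance of each patch token for the [CLS] token (drop CLS itself).
    return rollout[0, 1:]

# Toy usage: 12 layers, 3 heads, 1 CLS token + 196 patch tokens (14x14 grid).
rng = np.random.default_rng(0)
attns = [rng.random((3, 197, 197)) for _ in range(12)]
relevance = attention_rollout(attns).reshape(14, 14)
print(relevance.shape)  # (14, 14) heatmap to overlay on the input image
```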
Keywords
Visualization,Computer vision,Head,Text categorization,Neural networks,Transformers,Pattern recognition