Visualizing Transformers for NLP: A Brief Survey

2020 24th International Conference on Information Visualisation (IV)

Cited by 28 | Viewed 18
Abstract
The introduction of Transformer neural networks has changed the landscape of Natural Language Processing over the last three years. While models inspired by this architecture have topped the leaderboards for a variety of tasks, some of the mechanisms through which these performances are achieved are not necessarily well understood. Our survey focuses mostly on explaining Transformer architectures through visualization. Since visualization enables some degree of explainability, we examine the various facets of Transformers that can be explored through visual analytics. The field is still at a nascent stage and is expected to grow rapidly in the near future, since the results so far are already interesting and promising. Currently, some of the visualizations are closely tied to the models they were built for, whereas others are model-agnostic. Visualizations designed specifically for Transformer architectures enable additional features, such as the exploration of individual neurons or attention maps, and therefore offer an advantage for this task. We conclude by proposing a set of requirements for future Transformer visualization frameworks.
Keywords
Natural Language Processing, Transformers, BERT, Attention Maps, Visualization of Neural Networks
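
Many of the tools the abstract alludes to center on attention maps. As a minimal illustration of the kind of artifact such tools visualize (not code from the survey itself), the sketch below extracts one attention head from a pretrained BERT model and renders it as a heatmap. It assumes the HuggingFace transformers library and matplotlib are available; the sentence, layer, and head indices are arbitrary choices for illustration.

# Minimal sketch: extract and plot one BERT attention head.
# Assumes HuggingFace `transformers` and matplotlib; layer/head are illustrative.
import torch
import matplotlib.pyplot as plt
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

inputs = tokenizer("The cat sat on the mat.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer,
# each of shape (batch, num_heads, seq_len, seq_len).
layer, head = 5, 3  # arbitrary layer/head, chosen only for illustration
attn = outputs.attentions[layer][0, head].numpy()

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
plt.imshow(attn, cmap="viridis")
plt.xticks(range(len(tokens)), tokens, rotation=90)
plt.yticks(range(len(tokens)), tokens)
plt.title(f"BERT attention: layer {layer}, head {head}")
plt.colorbar()
plt.tight_layout()
plt.show()

Model-agnostic tools discussed in such surveys typically build on exactly this kind of per-layer, per-head attention tensor, differing mainly in how they aggregate and present it.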