Interpreting Multi-Head Attention in Abstractive Summarization

Joris Baan, Maartje ter Hoeve, Marlies van der Wees, Anne Schuth, Maarten de Rijke

Semantic Scholar (2019)

Abstract
Attention mechanisms in deep learning architectures have often been used as a means of transparency and, as such, to shed light on the inner workings of these architectures. Recently, there has been growing interest in whether this assumption is correct. In this paper we investigate the interpretability of multi-head attention in abstractive summarization, a sequence-to-sequence task for which attention does not have an intuitive alignment role, as it does in machine translation. We first introduce three metrics to gain insight into the focus of attention heads and observe that these heads specialize towards relative positions, specific part-of-speech tags, and named entities. However, we also find that ablating and pruning these heads does not lead to a significant drop in performance, indicating redundancy. By replacing the softmax activation function with a sparsemax activation function, we find that attention heads appear to behave more transparently: we can ablate fewer heads and the heads score higher on our interpretability metrics. However, if we apply pruning to the sparsemax model, we find that we can prune even more heads this time, raising the question of whether enforced sparsity actually improves transparency. Finally, we find that heads focused on relative positions seem integral to summarization performance and persistently remain after pruning.