Combining Transformers with Natural Language Explanations

arXiv (2023)

Abstract
Transformers have changed modern NLP in many ways. However, like many other neural architectures, they remain weak at exploiting domain knowledge and at interpretability. Unfortunately, the exploitation of external, structured knowledge is notoriously prone to a knowledge acquisition bottleneck. We therefore propose a memory enhancement of transformer models that makes use of unstructured knowledge. This knowledge, expressed in plain text, can be used both to carry out classification tasks and as a source of explanations for the model's output. An experimental evaluation conducted on two challenging datasets demonstrates that our approach produces relevant explanations without losing performance.
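The abstract does not detail the architecture, but the idea of a memory of plain-text knowledge that both informs classification and yields explanations suggests a design along the following lines. The PyTorch sketch below is purely illustrative, not the paper's actual method: the class name, the use of a frozen pretrained encoder, CLS pooling, and the attention-over-memory scheme are all assumptions.

```python
# Illustrative sketch: a transformer classifier augmented with a memory of
# plain-text knowledge snippets. Attention weights over the memory serve a
# double purpose: they shape the prediction and select an explanation.
# All names and design choices here are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

class ExplanationMemoryClassifier(nn.Module):
    def __init__(self, memory_texts, num_classes, model_name="bert-base-uncased"):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.memory_texts = memory_texts
        # Pre-encode the unstructured knowledge once; the memory is frozen here.
        with torch.no_grad():
            batch = self.tokenizer(memory_texts, padding=True,
                                   truncation=True, return_tensors="pt")
            mem = self.encoder(**batch).last_hidden_state[:, 0]  # [M, H] CLS vectors
        self.register_buffer("memory", mem)
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, texts):
        batch = self.tokenizer(texts, padding=True,
                               truncation=True, return_tensors="pt")
        query = self.encoder(**batch).last_hidden_state[:, 0]     # [B, H]
        # Scaled dot-product attention over the memory snippets.
        scores = query @ self.memory.T / self.memory.size(-1) ** 0.5  # [B, M]
        weights = F.softmax(scores, dim=-1)
        context = weights @ self.memory                               # [B, H]
        logits = self.classifier(torch.cat([query, context], dim=-1))
        # The top-weighted snippet doubles as a natural language explanation.
        explanations = [self.memory_texts[int(i)] for i in weights.argmax(dim=-1)]
        return logits, explanations

# Toy usage with made-up knowledge snippets:
memory = [
    "Reviews mentioning a refund usually express dissatisfaction.",
    "Exclamation marks often signal strong positive sentiment.",
]
model = ExplanationMemoryClassifier(memory, num_classes=2)
logits, why = model(["Great product, loved it!"])
```

One appealing property of this kind of design is that the explanation is not a post-hoc rationalization: the same attention distribution that selects the snippet also contributes to the logits, so the returned text reflects evidence the model actually used.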
Keywords
natural language explanations, transformers