Human-like Explanation for Text Classification With Limited Attention Supervision

IEEE BigData (2021)

Abstract
Human-like explanation for text classification is essential in high-impact settings such as healthcare, where human rationales are required to support specialists' decisions. Conventional approaches learn explanations using attention mechanisms that assign heavy weights to words with a high impact on a model's prediction. However, such heavily-weighted words often do not match human intuition. To advance human rationale, recent studies propose to supervise attention mechanisms, assuming access to a large set of attention labels collected from humans, called human attention maps (HAMs). Unfortunately, acquiring such HAMs for a large dataset is tedious, error-prone, and expensive in practice. We therefore propose the novel problem of text classification with limited human attention supervision. Specifically, we study the learning of human-like attention weights from a dataset in which all documents carry classification labels but only a few provide HAMs. To this end, we design a deep learning architecture, HELAS (Human-like Explanation with Limited Attention Supervision), which adaptively learns attention weights that focus on the same words a human would, even under very limited attention supervision. HELAS unifies both tasks in a joint learning framework, improving text classification as well as human-like explanation despite the scarce supervision labels for the latter. Our experiments show that HELAS generates attention maps similar to real human annotations, raising similarity scores by up to 22% over state-of-the-art alternatives, even when as little as 2% of the documents have HAMs. It concurrently improves text classification, increasing accuracy by up to 19% over four state-of-the-art methods.
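The abstract does not specify HELAS's architecture or training objective, so the following PyTorch sketch only illustrates the general idea: a classifier with an attention layer is trained on all documents, while its attention weights are additionally supervised on the few documents that carry HAMs. The toy model, the KL-divergence supervision term, and the mixing weight lam are assumptions for illustration, not the authors' method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionClassifier(nn.Module):
    """Toy attention-based classifier: embeds tokens, scores each token
    with a linear attention layer, and classifies the weighted sum."""
    def __init__(self, vocab_size, embed_dim, num_classes):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.score = nn.Linear(embed_dim, 1)   # per-token attention score
        self.clf = nn.Linear(embed_dim, num_classes)

    def forward(self, tokens):                 # tokens: (batch, seq_len)
        h = self.embed(tokens)                 # (batch, seq_len, embed_dim)
        attn = F.softmax(self.score(h).squeeze(-1), dim=-1)  # (batch, seq_len)
        doc = (attn.unsqueeze(-1) * h).sum(dim=1)            # weighted pooling
        return self.clf(doc), attn

def joint_loss(logits, labels, attn, ham, has_ham, lam=0.5):
    """Cross-entropy on every document, plus an attention-supervision term
    (KL divergence to the human attention map) only on documents that have
    a HAM. `lam` is a hypothetical mixing weight, not a value from the paper."""
    cls_loss = F.cross_entropy(logits, labels)
    sup_loss = attn.new_zeros(())
    if has_ham.any():
        sup_loss = F.kl_div(attn[has_ham].clamp_min(1e-8).log(),
                            ham[has_ham], reduction="batchmean")
    return cls_loss + lam * sup_loss

# Usage with synthetic data: 8 documents, only the first one provides a HAM.
model = AttentionClassifier(vocab_size=5000, embed_dim=64, num_classes=2)
tokens = torch.randint(0, 5000, (8, 20))
labels = torch.randint(0, 2, (8,))
ham = torch.full((8, 20), 1.0 / 20)            # uniform stand-in HAMs
has_ham = torch.zeros(8, dtype=torch.bool)
has_ham[0] = True
logits, attn = model(tokens)
loss = joint_loss(logits, labels, attn, ham, has_ham)
loss.backward()
```

Any divergence between the learned attention and the HAM (e.g., mean squared error) would fit the same joint-loss template.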
Keywords
Model Explainability, Text Classification, Joint Learning, Attention Mechanism