Evaluating Concept-based Explanations of Language Models: A Study on Faithfulness and Readability
CoRR (2024)
Abstract
Despite the surprisingly high intelligence exhibited by Large Language Models
(LLMs), their black-box nature makes us hesitant to fully deploy them into
real-life applications. Concept-based explanations
arise as a promising avenue for explaining what the LLMs have learned, making
them more transparent to humans. However, current evaluations of concepts tend
to be heuristic and non-deterministic, e.g., case studies or human evaluation,
hindering the development of the field. To bridge this gap, we approach
concept-based explanation evaluation via faithfulness and readability. We first
introduce a formal definition of concepts that generalizes to diverse
concept-based explanations. Based on this, we quantify faithfulness via the
difference in the model's output upon perturbation. We then provide an
automatic measure for readability,
by measuring the coherence of patterns that maximally activate a concept. This
measure serves as a cost-effective and reliable substitute for human
evaluation. Finally, based on measurement theory, we describe a meta-evaluation
method for evaluating the above measures via reliability and validity, which
can be generalized to other tasks as well. Extensive experimental analysis has
been conducted to validate and inform the selection of concept evaluation
measures.
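To make the two measures named in the abstract concrete, below is a minimal sketch, not the authors' implementation: the perturbation choice (ablating a concept direction from a hidden state), the coherence proxy (mean pairwise cosine similarity of top-activating token embeddings), and all helper names and synthetic data are illustrative assumptions.

```python
"""Illustrative sketch (assumed, not the paper's method) of:
- faithfulness: how much the output changes when a concept is perturbed;
- readability: coherence of the patterns that maximally activate a concept."""
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def faithfulness(hidden, concept, readout_weights):
    """Output difference before vs. after removing the concept direction."""
    concept = concept / np.linalg.norm(concept)
    # One possible perturbation: project the concept direction out of the hidden state.
    perturbed = hidden - (hidden @ concept) * concept
    original_out = softmax(readout_weights @ hidden)
    perturbed_out = softmax(readout_weights @ perturbed)
    # Larger output change -> the concept is more faithful to the prediction.
    return np.abs(original_out - perturbed_out).sum()

def readability(token_embeddings, activations, top_k=5):
    """Coherence proxy: mean pairwise cosine similarity of the top-k
    maximally activating tokens (self-similarity excluded)."""
    top = token_embeddings[np.argsort(activations)[-top_k:]]
    top = top / np.linalg.norm(top, axis=1, keepdims=True)
    sims = top @ top.T
    return (sims.sum() - top_k) / (top_k * (top_k - 1))

# Synthetic example data (purely for demonstration).
d, vocab, n_tokens = 16, 10, 100
hidden = rng.normal(size=d)
concept = rng.normal(size=d)
readout = rng.normal(size=(vocab, d))
token_emb = rng.normal(size=(n_tokens, d))
acts = token_emb @ concept  # concept activation per token

print("faithfulness:", faithfulness(hidden, concept, readout))
print("readability :", readability(token_emb, acts))
```

In this sketch, a higher faithfulness score means the prediction depends more on the concept, and a higher readability score means the concept's top-activating patterns are more mutually similar; the paper's actual formulations and meta-evaluation (reliability and validity) are described in the full text.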