LLMCheckup: Conversational Examination of Large Language Models via Interpretability Tools and Self-Explanations
arXiv (2024)
Abstract
Interpretability tools that offer explanations in the form of a dialogue have
demonstrated their efficacy in enhancing users' understanding (Slack et al.,
2023; Shen et al., 2023), as one-off explanations may fall short in providing
sufficient information to the user. Current solutions for dialogue-based
explanations, however, often require external tools and modules and are not
easily transferable to tasks they were not designed for. With LLMCheckup, we
present an easily accessible tool that allows users to chat with any
state-of-the-art large language model (LLM) about its behavior. We enable LLMs
to generate explanations and perform user intent recognition without
fine-tuning, by connecting them with a broad spectrum of Explainable AI (XAI)
methods, including white-box explainability tools such as feature attributions,
and self-explanations (e.g., for rationale generation). LLM-based
(self-)explanations are presented as an interactive dialogue that supports
follow-up questions and generates suggestions. LLMCheckup provides tutorials for
operations available in the system, catering to individuals with varying levels
of expertise in XAI and supporting multiple input modalities. We introduce a
new parsing strategy that substantially enhances the user intent recognition
accuracy of the LLM. Finally, we showcase LLMCheckup for the tasks of fact
checking and commonsense question answering.
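The abstract refers to prompt-based self-explanations such as rationale generation for fact checking. Below is a minimal, hypothetical sketch of what such a self-explanation prompt might look like; the model choice, prompt wording, and task framing are illustrative assumptions, not the paper's actual implementation.

# Hypothetical sketch: prompting an LLM to self-explain a fact-checking
# verdict with a rationale. Model, prompt, and decoding settings are
# placeholders, not LLMCheckup's actual configuration.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # placeholder model

claim = "The Great Wall of China is visible from the Moon."
prompt = (
    "Claim: " + claim + "\n"
    "Verdict (SUPPORTED or REFUTED) with a short rationale:\n"
)

# Greedy decoding keeps the rationale deterministic for demonstration.
output = generator(prompt, max_new_tokens=60, do_sample=False)
print(output[0]["generated_text"])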