Analyzing And Editing Inner Mechanisms Of Backdoored Language Models

arXiv (2023)

Abstract
Recent advancements in interpretability research have made transformer language models more transparent. This progress has led to a better understanding of their inner workings for both toy and naturally occurring models. However, the question of how these models internally process sentiment changes has yet to be sufficiently answered. In this work, we introduce a new interpretability tool called PCP ablation, in which we replace modules with low-rank matrices based on the principal components of their activations, reducing model parameters and behavior to their essentials. We demonstrate PCP ablations on MLP and attention layers in backdoored toy, backdoored large, and naturally occurring models. We find that MLPs are the most important modules for the backdoor mechanism and use this knowledge to remove, insert, and modify backdoor mechanisms with engineered replacements via PCP ablation.
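The abstract only sketches PCP ablation at a high level. The snippet below is a minimal, hypothetical PyTorch sketch of one way such an ablation could be set up: collect a module's activations on reference data, take the top principal components of its outputs, and fit a low-rank affine replacement for the module. The function names, the use of torch.pca_lowrank, and the least-squares fitting step are illustrative assumptions, not the authors' implementation.

```python
import torch

def fit_pcp_replacement(module_inputs, module_outputs, rank):
    """Fit a rank-limited affine replacement for a module (illustrative sketch).

    module_inputs:  (n_samples, d_in)  activations entering the module.
    module_outputs: (n_samples, d_out) activations leaving the module.
    Both are assumed to be collected on a reference data set.
    """
    in_mean = module_inputs.mean(dim=0)
    out_mean = module_outputs.mean(dim=0)

    # Top principal components of the module's (centered) output activations.
    _, _, components = torch.pca_lowrank(module_outputs - out_mean, q=rank)

    # Coordinates of the centered outputs in the principal-component basis.
    projected = (module_outputs - out_mean) @ components  # (n_samples, rank)

    # Least-squares map from centered inputs to those coordinates,
    # giving a low-rank linear map W = A @ components^T.
    coeffs = torch.linalg.lstsq(module_inputs - in_mean, projected).solution
    weight = coeffs @ components.T            # (d_in, d_out), rank-limited
    bias = out_mean - in_mean @ weight
    return weight, bias


class LowRankReplacement(torch.nn.Module):
    """Drop-in module computing x @ W + b, where W has the chosen low rank."""

    def __init__(self, weight, bias):
        super().__init__()
        self.register_buffer("weight", weight)
        self.register_buffer("bias", bias)

    def forward(self, x):
        return x @ self.weight + self.bias
```

In this reading of the method, the original MLP or attention block would be swapped for a LowRankReplacement built from its own activations, and an engineered replacement (e.g., one fitted on activations with or without the backdoor trigger) could be substituted to remove, insert, or modify the backdoor behavior.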