A Language Model's Guide Through Latent Space
CoRR (2024)
Abstract
Concept guidance has emerged as a cheap and simple way to control the
behavior of language models by probing their hidden representations for concept
vectors and using them to perturb activations at inference time. While the
focus of previous work has largely been on truthfulness, in this paper we
extend this framework to a richer set of concepts such as appropriateness,
humor, creativity and quality, and explore to what degree current detection and
guidance strategies work in these challenging settings. To facilitate
evaluation, we develop a novel metric for concept guidance that accounts for
both the success of concept elicitation and the potential degradation in
fluency of the guided model. Our extensive experiments reveal
that while some concepts, such as truthfulness, more readily admit guidance
with current techniques, novel concepts such as appropriateness or humor
either remain difficult to elicit, require extensive tuning to work, or even
exhibit confusion. Moreover, we find that probes with optimal detection
accuracies do not necessarily make the best guides, contradicting previous
observations for truthfulness. Our work warrants a deeper investigation into
the interplay between detectability, guidability, and the nature of the
concept, and we hope that our rich experimental test-bed for guidance research
inspires stronger follow-up approaches.
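The guidance mechanism the abstract describes, probing hidden representations for a concept direction and perturbing activations along it at inference time, can be sketched roughly as follows. This is a minimal illustration on toy data, not the paper's implementation: the difference-of-means "probe", the synthetic activations, and the steering function `guide` are all assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hidden activations for inputs with / without the target concept.
# In practice these would be taken from a language model's hidden states.
pos = rng.normal(loc=1.0, scale=0.5, size=(64, 16))   # concept present
neg = rng.normal(loc=-1.0, scale=0.5, size=(64, 16))  # concept absent

# A simple "probe": the normalized difference of class means serves as
# the concept vector.
concept = pos.mean(axis=0) - neg.mean(axis=0)
concept /= np.linalg.norm(concept)

def guide(hidden: np.ndarray, alpha: float) -> np.ndarray:
    """Perturb an activation along the concept direction at inference time.

    alpha controls guidance strength; too large a value can degrade the
    fluency of the guided model, which motivates a metric that weighs
    elicitation success against fluency loss.
    """
    return hidden + alpha * concept

# Guiding moves an activation's projection toward the concept region.
h = rng.normal(size=16)
h_guided = guide(h, alpha=4.0)
before, after = float(h @ concept), float(h_guided @ concept)
```

Since `concept` is unit-norm, the projection increases by exactly `alpha`, so `after > before` always holds; choosing `alpha` per concept is exactly the "extensive tuning" the abstract refers to.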