Future Lens: Anticipating Subsequent Tokens from a Single Hidden State.
CoRR (2023)
Abstract
We conjecture that hidden state vectors corresponding to individual input
tokens encode information sufficient to accurately predict several tokens
ahead. More concretely, in this paper we ask: Given a hidden (internal)
representation of a single token at position $t$ in an input, can we reliably
anticipate the tokens that will appear at positions $\geq t + 2$? To test this,
we measure linear approximation and causal intervention methods in GPT-J-6B to
evaluate the degree to which individual hidden states in the network contain
signal rich enough to predict future hidden states and, ultimately, token
outputs. We find that, at some layers, we can approximate a model's output with
more than 48% accuracy with respect to its prediction of subsequent tokens
through a single hidden state. Finally, we present a "Future Lens" visualization
that uses these methods to create a new view of transformer states.
Keywords
anticipating subsequent tokens, future lens, hidden state