A Frustratingly Simple Decoding Method for Neural Text Generation

CoRR(2023)

Abstract
We introduce a frustratingly simple, highly efficient, and surprisingly effective decoding method, which we call Frustratingly Simple Decoding (FSD), for neural text generation. The idea behind FSD is straightforward: we build an anti-LM based on previously generated text and use this anti-LM to penalize future generation of what has already been generated. The anti-LM can be implemented as simply as an n-gram language model or a vectorized variant. In this way, FSD introduces no extra model parameters and negligible computational overhead (FSD can be as fast as greedy search). Despite its simplicity, FSD is surprisingly effective: experiments show that FSD can outperform the canonical methods to date (i.e., nucleus sampling) as well as several strong baselines proposed recently.
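The abstract's core idea can be sketched in a few lines: maintain an n-gram model over the tokens generated so far, and at each step subtract its probability estimate from the base LM's score before picking the next token greedily. The interpolation form `(1 - alpha) * p_lm - alpha * p_anti`, the bigram order, and the weight `alpha` below are illustrative assumptions, not the paper's exact formulation.

```python
def anti_lm_prob(generated, token, n=2):
    # n-gram anti-LM: probability of `token` following the last (n - 1)
    # generated tokens, estimated purely from counts over the text
    # generated so far (no extra parameters, as the abstract notes).
    context = tuple(generated[-(n - 1):]) if n > 1 else ()
    ctx_count = 0
    ngram_count = 0
    for i in range(len(generated) - n + 1):
        if tuple(generated[i:i + n - 1]) == context:
            ctx_count += 1
            if generated[i + n - 1] == token:
                ngram_count += 1
    return ngram_count / ctx_count if ctx_count else 0.0

def fsd_step(lm_probs, generated, alpha=0.3, n=2):
    # One decoding step: penalize each candidate by the anti-LM and take
    # the argmax (greedy). `lm_probs` maps candidate tokens to the base
    # LM's next-token probabilities; the scoring form is an assumption.
    scores = {
        tok: (1 - alpha) * p - alpha * anti_lm_prob(generated, tok, n)
        for tok, p in lm_probs.items()
    }
    return max(scores, key=scores.get)
```

For example, if the base LM prefers "cat" (0.6 vs. 0.4 for "dog") but the bigram "the cat" already occurred in `["the", "cat", "the"]`, the anti-LM penalty flips the choice to "dog", steering decoding away from repetition.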
Keywords
simple decoding method, text, generation