Fine-Grained Controllable Text Generation Using Non-Residual Prompting

Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022), Volume 1: Long Papers, 2022

Abstract
The introduction of immensely large causal language models (CLMs) has rejuvenated the interest in open-ended text generation. However, controlling the generative process for these Transformer-based models remains largely an unsolved problem. Earlier work has explored either plug-and-play decoding strategies or more powerful but blunt approaches such as prompting. There hence currently exists a trade-off between fine-grained control and the capability for more expressive high-level instructions. To alleviate this trade-off, we propose an encoder-decoder architecture that enables intermediate text prompts at arbitrary time steps. We propose a resource-efficient method for converting a pre-trained CLM into this architecture and demonstrate its potential in various experiments, including the novel task of contextualized word inclusion. Our method provides strong results in multiple experimental settings, proving itself to be both expressive and versatile.
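
To make the core idea concrete, the sketch below illustrates intermediate prompting at arbitrary time steps with an off-the-shelf causal LM via Hugging Face transformers. It only approximates the interface the abstract describes: a textual instruction is injected at a chosen decoding step to steer generation while being kept out of the generated text. This is not the authors' implementation (their method converts the CLM into an encoder-decoder so the prompt steers generation without residual attention to its tokens); the model choice, function name, and instruction are illustrative assumptions.

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def generate_with_intermediate_prompt(context, instruction, prompt_step, n_steps):
    # Greedy decoding. From `prompt_step` onward, the next-token
    # distribution is conditioned on the instruction, but the
    # instruction tokens never enter the running text. This merely
    # approximates non-residual prompting; the paper uses a
    # separately trained prompt encoder instead of plain prepending.
    generated = tokenizer.encode(context)
    instruction_ids = tokenizer.encode(instruction)
    for step in range(n_steps):
        if step >= prompt_step:
            input_ids = torch.tensor([instruction_ids + generated])
        else:
            input_ids = torch.tensor([generated])
        with torch.no_grad():
            logits = model(input_ids).logits[0, -1]
        generated.append(int(logits.argmax()))
    return tokenizer.decode(generated)

# Hypothetical usage: steer the continuation toward a topic after 5 steps.
print(generate_with_intermediate_prompt(
    "The weather today", "Write about a storm:", prompt_step=5, n_steps=20))

Because the instruction is dropped from the running context, the generated text never contains the prompt itself, which is the behavior fine-grained intermediate prompting is after; the paper's architecture achieves this more efficiently by encoding the prompt once rather than re-prepending it at every step.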