Text-to-Text Pre-Training for Data-to-Text Tasks.

INLG (2020)

Abstract
We study the pre-train + fine-tune strategy for data-to-text tasks. Fine-tuning T5 achieves state-of-the-art results on the WebNLG, MultiWoz and ToTTo benchmarks. Moreover, the models are fully end-to-end and do not rely on any intermediate planning steps, delexicalization or copy mechanisms. T5 pre-training also enables stronger generalization, as evidenced by large improvements on out-of-domain test sets. We hope our work serves as a useful baseline for future research, as pre-training becomes ever more prevalent for data-to-text tasks.
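A minimal sketch of the recipe the abstract describes, not the authors' code: a structured input is linearized into a flat string and T5 is fine-tuned end-to-end on (data, text) pairs with the standard sequence-to-sequence loss, with no planning, delexicalization, or copy mechanism. The model size, the WebNLG-style linearization format, the task prefix, and the hyperparameters below are illustrative assumptions, using the Hugging Face Transformers API rather than the original T5 codebase.

```python
# Illustrative fine-tuning step for data-to-text with T5 (assumptions noted above).
import torch
from transformers import T5ForConditionalGeneration, T5TokenizerFast

tokenizer = T5TokenizerFast.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Hypothetical linearization of one WebNLG-style example (format is an assumption).
source = "translate Graph to English: <H> Alan Bean <R> occupation <T> Test pilot"
target = "Alan Bean worked as a test pilot."

inputs = tokenizer(source, return_tensors="pt")
labels = tokenizer(target, return_tensors="pt").input_ids

# One training step: plain cross-entropy on the target text, fully end-to-end.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss = model(input_ids=inputs.input_ids,
             attention_mask=inputs.attention_mask,
             labels=labels).loss
loss.backward()
optimizer.step()

# At inference time, text is generated directly from the linearized data.
generated = model.generate(inputs.input_ids, max_length=64)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```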
Keywords
tasks, text-to-text, pre-training, data-to-text