SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer

Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022), Volume 1: Long Papers (2022)

Citations: 206 | Views: 482
Abstract
There has been growing interest in parameter-efficient methods to apply pre-trained language models to downstream tasks. Building on the PROMPTTUNING approach of Lester et al. (2021), which learns task-specific soft prompts to condition a frozen pre-trained model to perform different tasks, we propose a novel prompt-based transfer learning approach called SPOT: Soft Prompt Transfer. SPOT first learns a prompt on one or more source tasks and then uses it to initialize the prompt for a target task. We show that SPOT significantly boosts the performance of PROMPTTUNING across many tasks. More remarkably, across all model sizes, SPOT matches or outperforms standard MODELTUNING (which fine-tunes all model parameters) on the SUPERGLUE benchmark, while using up to 27,000x fewer task-specific parameters. To understand where SPOT is most effective, we conduct a large-scale study on task transferability with 26 NLP tasks in 160 combinations, and demonstrate that many tasks can benefit each other via prompt transfer. Finally, we propose an efficient retrieval approach that interprets task prompts as task embeddings to identify similar tasks and predict the most transferable source tasks for a novel target task.
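The following is a minimal sketch of the two mechanisms the abstract describes: warm-starting a target task's soft prompt from a prompt learned on a source task, and ranking candidate source tasks by cosine similarity between prompt-derived task embeddings. It is not the authors' released code; the prompt length, embedding size, task names, and the stand-in training routine are illustrative assumptions.

```python
import numpy as np

PROMPT_LEN, EMBED_DIM = 100, 768  # assumed sizes, not the paper's exact configuration


def train_soft_prompt(task_name, rng, init=None):
    """Stand-in for prompt tuning: only the (PROMPT_LEN x EMBED_DIM) prompt matrix
    is learned while the pre-trained model stays frozen. Here a small random update
    replaces actual gradient descent so the sketch stays runnable."""
    prompt = init.copy() if init is not None else rng.normal(size=(PROMPT_LEN, EMBED_DIM))
    return prompt + 0.01 * rng.normal(size=prompt.shape)


def task_embedding(prompt):
    """Interpret a trained prompt as a task embedding by averaging its token vectors."""
    return prompt.mean(axis=0)


def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))


rng = np.random.default_rng(0)

# 1) Learn prompts on a few source tasks (task names are illustrative).
source_prompts = {name: train_soft_prompt(name, rng) for name in ["mnli", "squad", "record"]}

# 2) Retrieval: rank source tasks by similarity of their task embeddings
#    to an embedding obtained from a cheap prompt for the new target task.
target_probe = train_soft_prompt("boolq", rng)
target_emb = task_embedding(target_probe)
ranked = sorted(
    source_prompts,
    key=lambda name: cosine(task_embedding(source_prompts[name]), target_emb),
    reverse=True,
)
best_source = ranked[0]

# 3) Soft prompt transfer: initialize the target prompt from the most similar source prompt.
target_prompt = train_soft_prompt("boolq", rng, init=source_prompts[best_source])
print("best source task:", best_source)
```

In this sketch only the prompt matrix is ever updated, which is what keeps the per-task parameter count small relative to fine-tuning the full model.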
Keywords
better frozen model adaptation, transfer