WARP: Word-level Adversarial ReProgramming

Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021), Vol. 1, 2021

Abstract
Transfer learning from pretrained language models has recently become the dominant approach for solving many NLP tasks. A common approach to transfer learning for multiple tasks that maximizes parameter sharing trains one or more task-specific layers on top of the language model. In this paper, we present an alternative approach based on adversarial reprogramming, which extends earlier work on automatic prompt generation. Adversarial reprogramming attempts to learn task-specific word embeddings that, when concatenated to the input text, instruct the language model to solve the specified task. Using up to 25K trainable parameters per task, this approach outperforms all existing methods with up to 25M trainable parameters on the public leaderboard of the GLUE benchmark. Our method, initialized with task-specific human-readable prompts, also works in a few-shot setting, outperforming GPT-3 on two SuperGLUE tasks with just 32 training samples.
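To make the core idea concrete, the sketch below illustrates learning trainable prompt embeddings prepended to a frozen masked language model, with a [MASK] prediction serving as the classifier. It is a minimal illustration of the abstract's description, not the paper's exact implementation: the prompt length N_PROMPT, the choice of roberta-base, the verbalizer tokens, and the forward helper are all assumptions made for this example.

```python
# Minimal sketch: train only a few prompt embeddings; the language model
# itself stays frozen. All names below (N_PROMPT, class token choices,
# forward) are illustrative assumptions, not the paper's exact setup.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")
for p in model.parameters():          # freeze the pretrained LM entirely
    p.requires_grad = False

embed = model.get_input_embeddings() # the LM's token-embedding matrix
dim = embed.embedding_dim
N_PROMPT = 8                          # number of trainable prompt tokens (assumption)

# The only trainable parameters: N_PROMPT * dim values (8 * 768 ~ 6K here).
prompt = torch.nn.Parameter(torch.randn(N_PROMPT, dim) * 0.02)

# Hypothetical verbalizer: read class scores off two vocabulary tokens.
class_ids = tokenizer.convert_tokens_to_ids(["Ġgreat", "Ġterrible"])

def forward(text: str) -> torch.Tensor:
    # Append a mask token; its prediction acts as the task "head".
    enc = tokenizer(text + " " + tokenizer.mask_token, return_tensors="pt")
    tok_emb = embed(enc.input_ids)                          # (1, T, dim)
    # Concatenate the learned prompt embeddings with the input embeddings.
    inputs = torch.cat([prompt.unsqueeze(0), tok_emb], dim=1)
    attn = torch.ones(inputs.shape[:2], dtype=torch.long)
    logits = model(inputs_embeds=inputs, attention_mask=attn).logits
    mask_pos = N_PROMPT + (enc.input_ids[0] == tokenizer.mask_token_id).nonzero()[0, 0]
    return logits[0, mask_pos, class_ids]                   # per-class scores

# One illustrative optimization step on a single example.
optimizer = torch.optim.Adam([prompt], lr=1e-3)
scores = forward("The movie was surprisingly good.")
loss = torch.nn.functional.cross_entropy(scores.unsqueeze(0), torch.tensor([0]))
loss.backward()
optimizer.step()
```

Because gradients flow only into `prompt`, the per-task parameter count stays tiny, which is the source of the 25K-vs-25M comparison in the abstract; initializing `prompt` from the embeddings of a human-readable prompt (rather than random noise, as above) is what the abstract refers to in the few-shot setting.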
Keywords
word-level