Bayesian Multi-Task Transfer Learning for Soft Prompt Tuning
EMNLP 2023 (2024)
Abstract
Prompt tuning, in which prompts are optimized to adapt large-scale
pre-trained language models to downstream tasks instead of fine-tuning the full
model parameters, has been shown to be particularly effective when the prompts
are trained in a multi-task transfer learning setting. These methods generally
involve individually training prompts for each source task and then aggregating
them to provide the initialization of the prompt for the target task. However,
this approach critically ignores the fact that source tasks may interfere
with each other, positively or negatively. We argue that when we
extract knowledge from source tasks via training source prompts, we need to
consider this correlation among source tasks for better transfer to target
tasks. To this end, we propose a Bayesian approach where we work with the
posterior distribution of prompts across source tasks. We obtain representative
source prompts as samples from this posterior using Stein Variational Gradient
Descent and aggregate them to form the initial target prompt. We report
extensive experimental results on standard NLP benchmarks, where our Bayesian
multi-task transfer learning approach outperforms state-of-the-art methods in
many settings. Furthermore, our
approach requires no auxiliary models other than the prompt itself, achieving a
high degree of parameter efficiency.
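The abstract describes drawing posterior samples of source prompts with Stein Variational Gradient Descent (SVGD) and aggregating them into the initial target prompt. The following is a minimal NumPy sketch of the generic SVGD particle update, not the authors' implementation: the grad_log_post callable, the prompt shape, the particle count, and the mean aggregation are illustrative assumptions; in the paper, the posterior gradient would come from source-task losses under a frozen pretrained language model.

import numpy as np

def rbf_kernel(particles, h=-1.0):
    """RBF kernel matrix K and the repulsive term sum_j grad_{x_j} k(x_j, x_i).

    particles: (n, d) array, one flattened soft prompt per row.
    """
    diffs = particles[:, None, :] - particles[None, :, :]   # diffs[i, j] = x_i - x_j
    sq_dists = np.sum(diffs ** 2, axis=-1)
    if h <= 0:                                               # median heuristic for the bandwidth
        med = np.median(sq_dists)
        h = np.sqrt(0.5 * med / np.log(particles.shape[0] + 1))
    K = np.exp(-sq_dists / (2.0 * h ** 2))
    # sum_j grad_{x_j} k(x_j, x_i) = sum_j K[i, j] * (x_i - x_j) / h^2   (K is symmetric)
    repulsion = np.einsum("ij,ijd->id", K, diffs) / h ** 2
    return K, repulsion

def svgd_step(particles, grad_log_post, step_size=1e-2):
    """One SVGD update: particles move toward high posterior density while repelling each other."""
    n = particles.shape[0]
    K, repulsion = rbf_kernel(particles)
    # phi(x_i) = (1/n) sum_j [ k(x_j, x_i) * grad log p(x_j) + grad_{x_j} k(x_j, x_i) ]
    phi = (K @ grad_log_post(particles) + repulsion) / n
    return particles + step_size * phi

# Toy usage with a stand-in posterior N(0, I); in the paper the (unnormalized) posterior
# would instead come from source-task losses under a frozen pretrained LM.
rng = np.random.default_rng(0)
prompts = rng.normal(size=(8, 20 * 64))   # 8 particles; 20 prompt tokens x 64 dims (illustrative shape)
grad_log_post = lambda x: -x              # gradient of log N(0, I)
for _ in range(200):
    prompts = svgd_step(prompts, grad_log_post, step_size=1e-1)
target_init = prompts.mean(axis=0)        # aggregate the particles into an initial target prompt

The kernel term pulls particles toward regions of high posterior density, while the repulsion term keeps them spread out, so the final particle set covers diverse, representative source prompts rather than collapsing to a single mode.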