SMoP: Towards Efficient and Effective Prompt Tuning with Sparse Mixture-of-Prompts.

Joon-Young Choi, Junho Kim, Jun-Hyung Park, Wing-Lam Mok, SangKeun Lee

EMNLP 2023

Abstract
Prompt tuning has emerged as a successful parameter-efficient alternative to the full fine-tuning of language models. However, prior works on prompt tuning often utilize long soft prompts of up to 100 tokens to improve performance, overlooking the inefficiency associated with extended inputs. In this paper, we propose a novel prompt tuning method SMoP (Sparse Mixture-of-Prompts) that utilizes short soft prompts for efficient training and inference while maintaining performance gains typically induced from longer soft prompts. To achieve this, SMoP employs a gating mechanism to train multiple short soft prompts specialized in handling different subsets of the data, providing an alternative to relying on a single long soft prompt to cover the entire data. Experimental results demonstrate that SMoP outperforms baseline methods while reducing training and inference costs. We release our code at https://github.com/jyjohnchoi/SMoP.
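To make the gating idea concrete, below is a minimal PyTorch sketch of a sparse mixture-of-prompts layer: a small router scores several short soft prompts from the pooled input embeddings, and the top-1 prompt is prepended to the input. This is an illustrative assumption of how such routing could look, not the authors' exact implementation (class and parameter names such as SparseMixtureOfPrompts, num_prompts, and the mean-pooled router input are hypothetical; the paper's official code is at the repository linked above).

```python
import torch
import torch.nn as nn


class SparseMixtureOfPrompts(nn.Module):
    """Illustrative sketch of a sparse mixture-of-prompts layer.

    Holds several short soft prompts and a linear router. For each input,
    the router scores the prompts from the mean input embedding, and the
    top-1 prompt is prepended to the input embeddings.
    """

    def __init__(self, num_prompts: int = 4, prompt_length: int = 5,
                 hidden_size: int = 768):
        super().__init__()
        # k short soft prompts, each of shape (prompt_length, hidden_size)
        self.prompts = nn.Parameter(
            torch.randn(num_prompts, prompt_length, hidden_size) * 0.02
        )
        # Linear router producing one logit per prompt
        self.router = nn.Linear(hidden_size, num_prompts)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, hidden_size)
        pooled = input_embeds.mean(dim=1)            # (batch, hidden)
        logits = self.router(pooled)                 # (batch, num_prompts)
        probs = torch.softmax(logits, dim=-1)
        top1 = probs.argmax(dim=-1)                  # (batch,)
        # Scale the selected prompt by its routing probability so the
        # router still receives gradients despite the hard selection.
        selected = self.prompts[top1]                # (batch, prompt_len, hidden)
        gate = probs.gather(1, top1.unsqueeze(1)).unsqueeze(-1)  # (batch, 1, 1)
        soft_prompt = gate * selected
        # Prepend the routed short prompt to the input embeddings.
        return torch.cat([soft_prompt, input_embeds], dim=1)


# Usage: prepend the routed short prompt before feeding a frozen LM.
smop = SparseMixtureOfPrompts(num_prompts=4, prompt_length=5, hidden_size=768)
dummy = torch.randn(2, 16, 768)   # (batch, seq_len, hidden)
extended = smop(dummy)            # (2, 21, 768)
print(extended.shape)
```

Only the prompt embeddings and the router are trainable here, so the number of tuned parameters stays small while each input pays the cost of a short prompt rather than a long one.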