Learning to Route Among Specialized Experts for Zero-Shot Generalization
CoRR (2024)
Abstract
Recently, there has been a widespread proliferation of "expert" language
models that are specialized to a specific task or domain through
parameter-efficient fine-tuning. How can we recycle large collections of expert
language models to improve zero-shot generalization to unseen tasks? In this
work, we propose Post-Hoc Adaptive Tokenwise Gating Over an Ocean of
Specialized Experts (PHATGOOSE), which learns to route among specialized
modules that were produced through parameter-efficient fine-tuning. Unlike past
methods that learn to route among specialized models, PHATGOOSE explores the
possibility that zero-shot generalization will be improved if different experts
can be adaptively chosen for each token and at each layer in the model.
Crucially, our method is post-hoc: it does not require simultaneous access to
the datasets used to create the specialized models and only requires a modest
amount of additional compute after each expert model is trained. In experiments
covering a range of specialized model collections and zero-shot generalization
benchmarks, we find that PHATGOOSE outperforms past methods for post-hoc
routing and, in some cases, outperforms explicit multitask training (which
requires simultaneous data access). To better understand the routing strategy
learned by PHATGOOSE, we perform qualitative experiments to validate that
PHATGOOSE's performance stems from its ability to make adaptive per-token and
per-module expert choices. We release all of our code to support future work on
improving zero-shot generalization by recycling specialized experts.
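The abstract gives only a high-level description of the mechanism, but the core idea (a learned per-token, per-module gate that mixes LoRA experts) can be sketched in a few lines. The following is a minimal illustrative sketch, not the released PHATGOOSE implementation: the class name `TokenwiseLoRARouter`, the top-k value, and the softmax-over-top-k mixing are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

class TokenwiseLoRARouter(torch.nn.Module):
    """Sketch of per-token routing among LoRA experts at one layer.

    Each expert i contributes a low-rank update B_i @ A_i to the frozen base
    weight. A learned gate vector per expert scores each token's hidden
    state; the top-k experts (softmax-weighted) are mixed per token.
    Assumed hyperparameters (rank, top_k) are illustrative only.
    """

    def __init__(self, d_model, rank, num_experts, top_k=2):
        super().__init__()
        self.top_k = top_k
        # One LoRA (A, B) pair per expert; frozen after expert training.
        self.A = torch.nn.Parameter(torch.randn(num_experts, rank, d_model) * 0.02)
        self.B = torch.nn.Parameter(torch.zeros(num_experts, d_model, rank))
        # One gate vector per expert, trained post-hoc on that expert's data.
        self.gates = torch.nn.Parameter(torch.randn(num_experts, d_model) * 0.02)

    def forward(self, h):
        # h: (batch, seq, d_model) hidden states entering this module.
        scores = h @ self.gates.T                          # (b, s, E)
        topk_scores, topk_idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(topk_scores, dim=-1)           # mix only top-k

        # Compute every expert's low-rank update, then gather the chosen
        # ones per token (dense for clarity; a real system would be sparse).
        updates = torch.einsum('bsd,erd->bser', h, self.A)          # (b, s, E, r)
        updates = torch.einsum('bser,edr->bsed', updates, self.B)   # (b, s, E, d)

        chosen = torch.gather(
            updates, 2,
            topk_idx.unsqueeze(-1).expand(-1, -1, -1, h.size(-1)))  # (b, s, k, d)
        return h + (weights.unsqueeze(-1) * chosen).sum(dim=2)
```

A module like this would replace each LoRA insertion point in the base model, so routing decisions are made independently for every token and every layer, which is the adaptivity the abstract credits for the zero-shot gains.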