Mixtures of Experts Unlock Parameter Scaling for Deep RL
CoRR (2024)
Abstract
The recent rapid progress in (self) supervised learning models is in large
part predicted by empirical scaling laws: a model's performance scales
proportionally to its size. Analogous scaling laws remain elusive for
reinforcement learning domains, however, where increasing the parameter count
of a model often hurts its final performance. In this paper, we demonstrate
that incorporating Mixture-of-Expert (MoE) modules, and in particular Soft MoEs
(Puigcerver et al., 2023), into value-based networks results in more
parameter-scalable models, evidenced by substantial performance increases
across a variety of training regimes and model sizes. This work thus provides
strong empirical evidence towards developing scaling laws for reinforcement
learning.
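To make the architectural idea concrete, below is a minimal sketch of the Soft MoE mechanism (Puigcerver et al., 2023) that the abstract refers to, written in plain NumPy with toy linear experts. All names (`soft_moe_layer`, `phi`, `expert_weights`) are illustrative assumptions, not the authors' code; in the paper the module is placed inside a value-based deep RL network, which is not reproduced here.

```python
import numpy as np

def softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def soft_moe_layer(x, phi, expert_weights):
    """Soft MoE forward pass (sketch, after Puigcerver et al., 2023).

    x:              (n, d)      input tokens
    phi:            (d, e * s)  learnable slot parameters
    expert_weights: (e, d, d)   one toy linear expert, each handling s slots
    """
    e = expert_weights.shape[0]
    s = phi.shape[1] // e

    logits = x @ phi                       # (n, e*s) token-to-slot affinities
    dispatch = softmax(logits, axis=0)     # normalize over tokens: how each slot is filled
    combine = softmax(logits, axis=1)      # normalize over slots: how each token reads back

    slots = dispatch.T @ x                 # (e*s, d) each slot is a weighted average of tokens
    slots = slots.reshape(e, s, -1)
    outs = np.einsum('esd,edf->esf', slots, expert_weights)  # every expert processes its slots
    outs = outs.reshape(e * s, -1)

    return combine @ outs                  # (n, d) each token mixes the slot outputs


# Usage sketch with made-up sizes: 16 tokens of dimension 32, 4 experts, 2 slots each.
rng = np.random.default_rng(0)
x = rng.normal(size=(16, 32))
phi = rng.normal(size=(32, 4 * 2)) * 0.1
experts = rng.normal(size=(4, 32, 32)) * 0.1
y = soft_moe_layer(x, phi, experts)        # (16, 32)
```

Because the dispatch and combine weights are soft (dense softmaxes rather than hard routing), the layer is fully differentiable and its parameter count grows with the number of experts without changing the token-level input/output shapes, which is the property the paper exploits for parameter scaling.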