Scaling Up Models and Data with $\texttt{t5x}$ and $\texttt{seqio}$

Adam Roberts, Hyung Won Chung, Anselm Levskaya, Gaurav Mishra, James Bradbury, Daniel Andor, Sharan Narang, Brian Lester, Colin Gaffney, Afroz Mohiuddin, Curtis Hawthorne, Aitor Lewkowycz, Alex Salcianu, Marc van Zee, Jacob Austin, Sebastian Goodman, Livio Baldini Soares, Haitang Hu, Sasha Tsvyashchenko, Aakanksha Chowdhery, Jasmijn Bastings, Jannis Bulian, Xavier Garcia, Jianmo Ni, Andrew Chen, Kathleen Kenealy, Jonathan H. Clark, Stephan Lee, Dan Garrette, James Lee-Thorp, Colin Raffel, Noam Shazeer, Marvin Ritter, Maarten Bosma, Alexandre Passos, Jeremy Maitin-Shepard, Noah Fiedel, Mark Omernick, Brennan Saeta, Ryan Sepassi, Alexander Spiridonov, Joshua Newlan, Andrea Gesmundo

Recent neural network-based language models have benefited greatly from scaling up the size of training datasets and the number of parameters in the models themselves. Scaling can be complicated by various factors, including the need to distribute computation on supercomputer clusters (e.g., TPUs), prevent bottlenecks when infeeding data, and ensure reproducible results. In this work, we present two software libraries that alleviate these issues: $\texttt{t5x}$ simplifies the process of building and training large language models at scale while maintaining ease of use, and $\texttt{seqio}$ provides a task-based API for simple creation of fast and reproducible training data and evaluation pipelines. These open-source libraries have been used to train models with hundreds of billions of parameters on datasets with multiple terabytes of training data. Along with the libraries, we release configurations and instructions for T5-like encoder-decoder models as well as GPT-like decoder-only architectures. $\texttt{t5x}$ and $\texttt{seqio}$ are open source and publicly available.
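To make the "task-based API" idea concrete, the following is a minimal, self-contained sketch of the pattern in plain Python: tasks are registered by name, each pairing a data source with a chain of deterministic preprocessors. This is a toy illustration of the design, not the actual $\texttt{seqio}$ API; all class and function names here (`TaskRegistry`, `Task`, `lowercase`, etc.) are hypothetical simplifications.

```python
"""Toy sketch of a task-based data pipeline registry, in the spirit of
seqio's task registration pattern. Hypothetical simplified API, not seqio."""
from dataclasses import dataclass, field
from typing import Callable, Dict, Iterable, List

Example = Dict[str, str]
Preprocessor = Callable[[Iterable[Example]], Iterable[Example]]


@dataclass
class Task:
    """A named dataset: a raw source plus an ordered preprocessor chain."""
    name: str
    source: Callable[[], Iterable[Example]]
    preprocessors: List[Preprocessor] = field(default_factory=list)

    def get_dataset(self) -> List[Example]:
        # Applying the same preprocessors in the same order every time is
        # what makes the resulting pipeline reproducible.
        ds: Iterable[Example] = self.source()
        for fn in self.preprocessors:
            ds = fn(ds)
        return list(ds)


class TaskRegistry:
    """Global name -> Task mapping, so configs can refer to tasks by string."""
    _tasks: Dict[str, Task] = {}

    @classmethod
    def add(cls, name: str, source, preprocessors=()) -> None:
        cls._tasks[name] = Task(name, source, list(preprocessors))

    @classmethod
    def get(cls, name: str) -> Task:
        return cls._tasks[name]


def lowercase(ds: Iterable[Example]) -> Iterable[Example]:
    """Example preprocessor: normalize the 'inputs' field."""
    for ex in ds:
        yield {**ex, "inputs": ex["inputs"].lower()}


# Register a tiny in-memory task and materialize it.
TaskRegistry.add(
    "toy_task",
    source=lambda: [{"inputs": "Hello World", "targets": "greeting"}],
    preprocessors=[lowercase],
)
print(TaskRegistry.get("toy_task").get_dataset()[0]["inputs"])  # hello world
```

In the real library the source would typically be a TFDS dataset and the preprocessors would operate on `tf.data` pipelines, but the registry-of-named-tasks structure is the key idea: training and evaluation configs can select data by task name alone.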