Scaling Language Models: Methods, Analysis & Insights from Training Gopher

Jack W. Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah B. Henderson, Roman Ring, Susannah Young, Eliza Rutherford, Tom Hennigan, Jacob Menick, Albin Cassirer, Richard J. Powell, George van den Driessche, Lisa Anne Hendricks, Maribeth Rauh, Po-Sen Huang, Amelia Glaese, Johannes Welbl, Sumanth Dathathri, Saffron Huang, Jonathan Uesato, John W. Mellor, Irina Higgins, Antonia Creswell, Nat McAleese, Amy Wu, Erich Elsen, Siddhant M. Jayakumar, Elena Buchatskaya, David Budden, Esme Sutherland, Karen Simonyan, M. Paganini, Laurent Sifre, Lena Martens, Xiang Lorraine Li, Adhiguna Kuncoro, Aida Nematzadeh, Elena Gribovskaya, Domenic Donato, Angeliki Lazaridou, Michel Arthur, Jean-Baptiste Lespiau, Maria Tsimpoukelli, Nikolai Grigorev, Doug Fritz, Thibault Sottiaux, Mantas Pajarskas, Toby Pohlen, Zhongying Gong, Daniel Toyama, Cyprien de Masson d’Autume, Yujia Li, Tayfun Terzi, Vladimir Mikulik, I. Babuschkin, Aidan Clark, Diego de Las Casas, Aurelia Guy, Chris Jones, James T. Bradbury, Matthew S. Johnson, Blake A. Hechtman, Laura Weidinger, Iason Gabriel, William M. Isaac, Ed Lockhart, Simon Osindero, Laura Rimell, Chris Dyer, Oriol Vinyals, Kareem Ayoub, Jeff Stanway, Lorrayne Bennett, Demis Hassabis, Koray Kavukcuoglu, Geoffrey Irving

arXiv (Cornell University) (2021)

Abstract
Language modelling provides a step towards intelligent communication systems by harnessing large repositories of written human knowledge to better predict and understand the world. In this paper, we present an analysis of Transformer-based language model performance across a wide range of model scales -- from models with tens of millions of parameters up to a 280 billion parameter model called Gopher. These models are evaluated on 152 diverse tasks, achieving state-of-the-art performance across the majority. Gains from scale are largest in areas such as reading comprehension, fact-checking, and the identification of toxic language, but logical and mathematical reasoning see less benefit. We provide a holistic analysis of the training dataset and the model's behaviour, covering the intersection of model scale with bias and toxicity. Finally, we discuss the application of language models to AI safety and the mitigation of downstream harms.
Keywords
language models, training