BERTIN: Efficient Pre-Training of a Spanish Language Model using Perplexity Sampling

Javier de la Rosa, Eduardo G. Ponferrada, Manu Romero, Paulo Villegas, Pablo González de Prado Salas, María Grandury

Procesamiento del Lenguaje Natural (2022)

Abstract
The pre-training of large language models usually requires massive amounts of resources, both in terms of computation and data. Frequently used web sources such as Common Crawl might contain enough noise to make this pre-training sub-optimal. In this work, we experiment with different sampling methods from the Spanish version of mC4, and present a novel data-centric technique which we name perplexity sampling that enables the pre-training of language models in roughly half the number of steps and using one-fifth of the data. The resulting models are comparable to the current state-of-the-art, and even achieve better results for certain tasks. Our work is proof of the versatility of Transformers, and paves the way for small teams to train their models on a limited budget.
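The abstract does not spell out how perplexity sampling operates; the underlying idea is to score each candidate web document with a small language model and retain it with a probability that depends on its perplexity, so that both trivially repetitive and extremely noisy text are down-weighted. Below is a minimal Python sketch of one such scheme using a Gaussian weighting over document perplexity; the KenLM model path (es_wikipedia.arpa.bin) and the center/width parameters are illustrative assumptions, not the paper's actual configuration.

```python
import math
import random

import kenlm  # n-gram LM toolkit; assumes a pretrained model file is available

# Hypothetical model path: a small Spanish LM used only to score documents.
lm = kenlm.Model("es_wikipedia.arpa.bin")

def perplexity(text: str) -> float:
    """Word-level perplexity of `text` under the KenLM model.

    kenlm.Model.score returns a total log10 probability; the +1 accounts
    for the end-of-sentence token that KenLM appends.
    """
    n_words = len(text.split())
    return 10.0 ** (-lm.score(text) / (n_words + 1))

def gaussian_weight(ppl: float, center: float, width: float) -> float:
    # Highest weight for documents near the target perplexity; very fluent
    # (low-ppl) and very noisy (high-ppl) documents are down-weighted.
    return math.exp(-((ppl - center) ** 2) / (2.0 * width ** 2))

def perplexity_sample(docs, center: float, width: float, rng=random):
    # Keep each document with probability equal to its Gaussian weight.
    for doc in docs:
        if rng.random() < gaussian_weight(perplexity(doc), center, width):
            yield doc

# Example usage (center/width values are made up for illustration):
corpus = ["el gato duerme sobre la alfombra", "zzz qq 1234 !!! spam spam"]
kept = list(perplexity_sample(corpus, center=500.0, width=250.0))
```

Sampling a subset this way, rather than training on all of mC4-es, is what allows the reported reduction to one-fifth of the data while keeping downstream performance competitive.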
Keywords
Pre-trained Language Models, Sampling Methods, Data-centric AI