MegaScale: Scaling Large Language Model Training to More Than 10,000 GPUs
arXiv (2024)
Abstract
We present the design, implementation and engineering experience in building
and deploying MegaScale, a production system for training large language models
(LLMs) at the scale of more than 10,000 GPUs. Training LLMs at this scale
brings unprecedented challenges to training efficiency and stability. We take a
full-stack approach that co-designs the algorithmic and system components
across model block and optimizer design, computation and communication
overlapping, operator optimization, data pipeline, and network performance
tuning. Maintaining high efficiency throughout the training process (i.e.,
stability) is an important consideration in production given the long extent of
LLM training jobs. Many hard stability issues only emerge at large scale, and
in-depth observability is the key to address them. We develop a set of
diagnosis tools to monitor system components and events deep in the stack,
identify root causes, and derive effective techniques to achieve fault
tolerance and mitigate stragglers. MegaScale achieves 55.2% Model FLOPs
Utilization (MFU) when training a 175B LLM model on 12,288 GPUs, improving the
MFU by 1.34x compared to Megatron-LM. We share our operational experience in
identifying and fixing failures and stragglers. We hope by articulating the
problems and sharing our experience from a systems perspective, this work can
inspire future LLM systems research.