Optimizing Distributed Training on Frontier for Large Language Models
ISC High Performance (2024)
Keywords
Large Language Models, Model Parameters, Throughput, Computational Efficiency, Hyperparameter Tuning, Parallel Data, Scaling Efficiency, Foundation Model, Communication Latency, Strong Scaling, Smaller Counterparts, Weak Scaling Efficiency, Parallel Training, Parallelization, Model Size, Feed-forward Network, Training Performance, Frequent Communication, Distribution Strategy, Forward Pass, SHapley Additive exPlanations, Pipeline Stages, Training Framework, Bubble Size, Just-in-time, Single GPU, Backward Propagation, Oak Ridge National Laboratory, NVIDIA GPU, Attention Block