TAP: Accelerating Large-Scale DNN Training Through Tensor Automatic Parallelisation

Ziji Shi, Le Jiang, Ang Wang, Jie Zhang, Xianyan Jia, Yong Li, Chencan Wu, Jialin Li, Wei Lin

arXiv (2023)

Abstract
Model parallelism has become necessary to train large neural networks. However, finding a suitable model parallel schedule for an arbitrary neural network is a non-trivial task due to the exploding search space. In this work, we present TAP, a model parallelism framework that automatically searches for the best data and tensor parallel schedules. Leveraging the key insight that a neural network can be represented as a directed acyclic graph, within which only a limited set of frequent subgraphs may exist, we design a graph pruning algorithm to fold the search space efficiently. TAP runs with sub-linear complexity with respect to the neural network size. Experiments show that TAP is $20\times$-$160\times$ faster than the state-of-the-art automatic parallelism framework, and the performance of its discovered schedules is competitive with expert-engineered ones.
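
The core idea in the abstract, folding the schedule search space by exploiting repeated subgraphs in the model DAG, can be illustrated with a minimal sketch. The code below is hypothetical: the Node, subgraph_signature, fold_search_space, and search_schedule names are illustrative assumptions and do not reflect TAP's actual implementation or API. It only shows why grouping structurally identical blocks (e.g. repeated transformer layers) lets a per-pattern parallel schedule be searched once and reused, which is where the sub-linear scaling in model size comes from.

# Hypothetical sketch of search-space folding over a model DAG.
# Not TAP's real algorithm: names, data structures, and the trivial
# "schedule" below are placeholders for illustration only.

from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    name: str
    op: str        # operator type, e.g. "matmul", "layernorm"
    shape: tuple   # output tensor shape

def subgraph_signature(nodes):
    """Canonical signature of a subgraph: op types and shapes, ignoring node names."""
    return tuple((n.op, n.shape) for n in nodes)

def fold_search_space(blocks):
    """Group repeated blocks so each unique pattern is scheduled only once."""
    groups = defaultdict(list)
    for block in blocks:
        groups[subgraph_signature(block)].append(block)
    return groups

def search_schedule(signature):
    """Placeholder for the per-pattern schedule search; a real system would
    evaluate sharding choices against a cost model here."""
    return ["shard" if op == "matmul" else "replicate" for op, _ in signature]

# Toy example: four identical blocks fold into a single search problem.
template = [Node("q_proj", "matmul", (1024, 1024)), Node("ln", "layernorm", (1024,))]
blocks = [[Node(f"{n.name}_{i}", n.op, n.shape) for n in template] for i in range(4)]

for sig, members in fold_search_space(blocks).items():
    print(f"pattern repeated {len(members)}x -> schedule {search_schedule(sig)}")

In this sketch the signature deliberately ignores node names and keeps only operator types and tensor shapes, so the schedule search cost grows with the number of distinct block patterns rather than with the total number of layers.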
Keywords
tensor automatic parallelisation