TAP: Accelerating Large-Scale DNN Training Through Tensor Automatic Parallelisation

Ziji Shi, Le Jiang, Ang Wang, Jie Zhang, Xianyan Jia, Yong Li, Chencan Wu, Jialin Li, Wei Lin

CoRR (2023)

Abstract
Model parallelism has become necessary to train large neural networks. However, finding a suitable model parallel schedule for an arbitrary neural network is a non-trivial task due to the exploding search space. In this work, we present TAP, a model parallelism framework that automatically searches for the best data and tensor parallel schedules. Leveraging the key insight that a neural network can be represented as a directed acyclic graph, within which only a limited set of frequent subgraphs may exist, we design a graph pruning algorithm to fold the search space efficiently. TAP runs at sub-linear complexity with respect to the neural network size. Experiments show that TAP is $20\times$-$160\times$ faster than the state-of-the-art automatic parallelism framework, and the performance of its discovered schedules is competitive with expert-engineered ones.
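The abstract's key idea, that a network's DAG typically contains only a few distinct frequent subgraphs, can be illustrated with a small sketch. This is not TAP's actual implementation; the signature scheme, block representation, and function names below are illustrative assumptions showing how deduplicating structurally identical subgraphs folds the schedule search space.

```python
# Illustrative sketch (not TAP's real code): fold the parallel-schedule
# search space by grouping structurally identical subgraphs of a model DAG,
# so each unique structure only needs to be searched once.
from collections import defaultdict

def signature(block):
    # Canonical signature of a subgraph: op types and tensor shapes,
    # ignoring node identity. Hypothetical scheme for this sketch.
    return tuple((op, shape) for op, shape in block)

def fold_search_space(blocks):
    """Group repeated subgraphs; keys are unique structures to search."""
    groups = defaultdict(list)
    for idx, block in enumerate(blocks):
        groups[signature(block)].append(idx)
    return groups

# A toy transformer-like model: four identical attention blocks plus a head.
attn = [("matmul", (1024, 1024)), ("softmax", (1024,))]
head = [("matmul", (1024, 50257))]
model = [attn, attn, attn, attn, head]

groups = fold_search_space(model)
print(len(model), "blocks ->", len(groups), "unique subgraphs to search")
# Five blocks collapse to two unique structures, so the schedule search
# cost grows with the number of distinct subgraphs, not total depth.
```

A discovered schedule for one unique subgraph can then be replicated across all of its occurrences, which is what gives the sub-linear scaling in model size.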
Key words
tensor automatic parallelisation