Optimizing DNNs With Partially Equivalent Transformations and Automated Corrections

IEEE Transactions on Computers (2023)

Abstract
Deep neural network (DNN) applications are typically represented as tensor programs. To boost the performance of DNN computations, existing works optimize tensor programs with fully equivalent transformations, which guarantee equivalence for every element of a tensor. However, because a tensor contains thousands of elements, such optimization misses opportunities that arise when a minority of elements are allowed to be inequivalent. In this work, we propose Pet, the first system that introduces partially equivalent transformations to optimize tensor programs. To preserve the functional equivalence of tensor programs, Pet automatically finds and corrects the inequivalent positions by leveraging the multi-linearity of DNN computations. Pet further uses a mutation manager to improve search efficiency. Evaluation results show that, by introducing the new optimization opportunities of partially equivalent transformations, Pet achieves up to 1.98× and 2.20× speedups on an NVIDIA Tesla A100 and V100, respectively, compared with existing DNN frameworks.
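To make the idea concrete, below is a minimal NumPy sketch of a partially equivalent transformation with automated correction, using a 1D same-padded convolution as a stand-in for a DNN operator. The split transformation, the helper names (conv1d_same, conv1d_split_corrected), and the split point m are illustrative assumptions for exposition only; this is not Pet's implementation, which operates on multi-dimensional tensor programs and derives the inequivalent region symbolically.

```python
import numpy as np

def conv1d_same(x, w):
    """Reference operator: 1D convolution with zero ("same") padding."""
    r = len(w) // 2
    xp = np.pad(x, r)
    return np.array([xp[i:i + len(w)] @ w for i in range(len(x))])

def conv1d_split_corrected(x, w, m):
    """Partially equivalent transform: convolve the halves x[:m] and x[m:]
    independently (e.g., to expose more parallelism), then correct.

    Splitting is inequivalent only near the split point: positions within
    the kernel radius r of index m saw zero padding instead of their true
    neighbors. Only those few positions are recomputed with the reference
    operator; every other element is left untouched.
    """
    r = len(w) // 2
    # Transformed program: two independent same-padded convolutions.
    y = np.concatenate([conv1d_same(x[:m], w), conv1d_same(x[m:], w)])
    # Automated correction: recompute only the inequivalent positions.
    xp = np.pad(x, r)
    for i in range(max(m - r, 0), min(m + r, len(x))):
        y[i] = xp[i:i + len(w)] @ w
    return y

x = np.random.rand(1024)
w = np.array([0.25, 0.5, 0.25])
assert np.allclose(conv1d_split_corrected(x, w, m=512), conv1d_same(x, w))
```

In this toy setting only 2r of the 1024 output elements need correction, which illustrates why tolerating inequivalence on a minority of elements can pay off: the bulk of the computation runs in the transformed form, and a cheap correction pass restores full functional equivalence.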
Keywords
Tensors, Optimization, Shape, Kernel, Artificial neural networks, Generators, Task analysis, AI compiler, DNN optimization, tensor programs, partially equivalent transformation