SMaLL: Software for rapidly instantiating Machine Learning Libraries

ACM Transactions on Embedded Computing Systems (2023)

Abstract
Interest in deploying deep neural network (DNN) inference on edge devices has resulted in an explosion in the number and types of hardware platforms that machine learning (ML) libraries must support. High-level programming interfaces, such as TensorFlow, can be readily ported across different devices; however, maintaining performance when porting the low-level implementation is more nuanced. High-performance inference implementations require an effective mapping of the high-level interface to the target hardware platform. Commonly, this mapping uses either optimizing compilers to generate code at compile time or high-performance vendor libraries that have been specialized to the target platform. Both approaches rely on expert knowledge across levels to produce an efficient mapping, which makes supporting new architectures difficult and time-consuming. In this work, we present a DNN library framework, SMaLL, that is easily extensible to new architectures. The framework uses a unified loop structure and a shared, cache-friendly data format across all intermediate layers, eliminating the time and memory overheads incurred by data transformation between layers. Each layer is implemented by specifying its dimensions and a kernel, the key computing operation of that layer. The unified loop structure and kernel abstraction allow code to be reused across layers and computing platforms; supporting a new architecture requires redesigning only a few hundred lines of kernel code. To show the benefits of our approach, we have developed software that supports a range of layer types and computing platforms; this software is easily extensible for rapidly instantiating high-performance DNN libraries. We evaluate the portability of our framework by instantiating end-to-end networks from the MLPerf:tiny benchmark suite on five ARM platforms and one x86 platform (an AMD Zen 2). We also show that end-to-end performance is comparable to or better than that of ML frameworks such as TensorFlow, TVM, and LibTorch.
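For intuition, the following is a minimal sketch of the design the abstract describes: one shared loop nest over output coordinates, parameterized by a per-layer kernel. This is not SMaLL's actual API; the Kernel signature, the HWC tiling, and all names here are assumptions made purely for illustration.

```cpp
// Hypothetical sketch (not SMaLL's real interface): a unified loop structure
// that every layer type reuses, differing only in its dimensions and kernel.
#include <cstddef>

// A kernel computes one output point/tile from an input window and weights.
// This signature is an assumption for illustration, not SMaLL's definition.
using Kernel = void (*)(const float* window, const float* weights,
                        float* out, std::size_t channels);

// Shared loop nest: the same code drives convolution, pooling, etc.;
// only `kernel`, the layer dimensions, and the stride change per layer.
void unified_layer(const float* input, const float* weights, float* output,
                   std::size_t out_h, std::size_t out_w, std::size_t in_w,
                   std::size_t channels, std::size_t stride, Kernel kernel) {
    for (std::size_t i = 0; i < out_h; ++i) {       // output rows
        for (std::size_t j = 0; j < out_w; ++j) {   // output columns
            const float* window =
                input + (i * stride * in_w + j * stride) * channels;
            kernel(window, weights,
                   output + (i * out_w + j) * channels, channels);
        }
    }
}
```

Under this reading, porting to a new platform amounts to supplying kernels tuned with that platform's SIMD intrinsics, while the loop nest and the shared data layout carry over unchanged across layers.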
Keywords
machine learning, software