Prototyping of Low-Cost Configurable Sparse Neural Processing Unit with Buffer and Mixed-Precision Reshapeable MAC Array

2022 IEEE 28th International Conference on Parallel and Distributed Systems (ICPADS)(2023)

Abstract
Recently, it has become possible to run deep learning algorithms on edge devices such as microcontrollers, thanks to continuous improvements in neural network optimization techniques such as quantization and neural architecture search. Nonetheless, most embedded hardware available today still falls short of the requirements of deep neural network inference. As a result, specialized processors have emerged to improve the inference efficiency of deep learning algorithms; however, most are not designed for edge applications that demand efficient, low-cost hardware. We therefore design and prototype a low-cost configurable sparse Neural Processing Unit (NPU) with a built-in buffer and a reshapeable mixed-precision multiply-accumulate (MAC) array. The computing and memory resources of the NPU are parameterized, so different NPU variants can be derived; users can also configure the NPU at runtime to fully utilize its resources. In our experiments, a 200 MHz NPU with only 32 MACs is more than 32 times faster than a 400 MHz STM32H7 when inferring MobileNet-V1. Moreover, the derived NPUs can reach or even exceed roofline performance: the buffer and reshapeable MAC array push the NPU's attainable performance to the roofline, while support for sparsity allows the NPU to perform beyond it.
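The roofline claim above can be made concrete with the standard roofline model, where attainable throughput is the minimum of the compute roof and the memory roof. The sketch below is illustrative only; the function names and all numbers (peak bandwidth, arithmetic intensity, sparsity ratio) are assumptions for demonstration and are not taken from the paper.

```python
def roofline_gops(peak_gops: float, bandwidth_gbs: float, intensity: float) -> float:
    """Classic roofline: attainable GOPS = min(compute roof, bandwidth * intensity).

    intensity is arithmetic intensity in ops/byte.
    """
    return min(peak_gops, bandwidth_gbs * intensity)

def effective_gops(attainable_gops: float, sparsity: float) -> float:
    """Illustrative effect of zero-skipping: if a fraction `sparsity` of
    multiplications is skipped, the same dense workload finishes faster,
    so effective throughput exceeds the dense roofline."""
    return attainable_gops / (1.0 - sparsity)

# Assumed peak: 32 MACs * 2 ops/MAC * 0.2 GHz = 12.8 GOPS (clock from the abstract).
peak = 32 * 2 * 0.2

low = roofline_gops(peak, bandwidth_gbs=3.2, intensity=1.0)   # memory-bound region
high = roofline_gops(peak, bandwidth_gbs=3.2, intensity=8.0)  # compute-bound region
boosted = effective_gops(high, sparsity=0.5)                  # beyond the dense roofline
print(low, high, boosted)
```

With these assumed numbers, a low-intensity layer is capped by bandwidth (3.2 GOPS), a high-intensity layer hits the compute roof (12.8 GOPS), and 50% zero-skipping doubles effective throughput past it, which is the sense in which a sparse NPU can land "beyond the roofline".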
Keywords
low-cost, configurable, neural processing unit, mixed-precision, sparsity