AINNS: All-Inclusive Neural Network Scheduling Via Accelerator Formalization

IEEE Transactions on Computers (2022)

Abstract
Driven by the rapid development of accelerators and the diverse efficiency requirements of naturally heterogeneous neural network computation, recent years have seen increasing heterogeneity in neural network accelerator systems in terms of network structures, accelerator dataflows, and implementations. However, existing research fails to schedule and map heterogeneous neural networks onto heterogeneous accelerators efficiently, relying instead on clumsy exhaustive search or complicated ad hoc mapping approaches due to the semantic gap between the networks and the accelerators. This paper proposes a systematic method that transforms various accelerators into standard parameterized containers of the neural network loops, building a direct connection between the computation and the underlying hardware resources. This enables us to match neural networks with accelerators based on their essential characteristics (e.g., reuse opportunities and bandwidth requirements) without diving into the detailed architectures. To this end, we propose AINNS, an all-inclusive neural network scheduler that automatically schedules and maps NN computation onto heterogeneous accelerators with a single universal algorithm. Our experimental results show that AINNS not only performs well in traditional neural network acceleration but also improves system throughput and energy efficiency by 1.8x and 1.7x, respectively, in the most challenging heterogeneous acceleration system.
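To make the abstract's central idea more concrete, the following is a minimal, hypothetical sketch of what "an accelerator as a parameterized container of neural network loops" and characteristic-based matching could look like. The class names, fields, and scoring heuristic are illustrative assumptions, not the paper's actual formalization or algorithm.

# Hypothetical sketch: model each accelerator as a parameterized container of
# NN loops and match layers to accelerators by essential characteristics
# (reuse opportunities, bandwidth requirements) instead of detailed architecture.
# All names and fields below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class AcceleratorContainer:
    name: str
    loop_capacity: dict        # loop tiles held on chip, e.g. {"N": 4, "C": 64, "K": 64, "HW": 16}
    reuse_dims: set            # loop dimensions the dataflow reuses locally
    peak_bandwidth_gbps: float # off-chip bandwidth available to this accelerator

@dataclass
class ConvLayer:
    name: str
    loop_bounds: dict          # full loop bounds of the layer, e.g. {"N": 1, "C": 256, "K": 512, "HW": 14}
    bytes_per_mac: float       # off-chip traffic per MAC if no reuse is exploited

def match_score(layer: ConvLayer, acc: AcceleratorContainer) -> float:
    """Score how well a layer's reuse opportunities fit an accelerator container.

    Reward loop dimensions that are both large in the layer and reused by the
    accelerator's dataflow; penalize any bandwidth shortfall after reuse.
    """
    reuse = sum(min(layer.loop_bounds.get(d, 1), acc.loop_capacity.get(d, 1))
                for d in acc.reuse_dims)
    demand = layer.bytes_per_mac / max(reuse, 1)        # rough traffic after reuse
    bandwidth_penalty = max(0.0, demand - acc.peak_bandwidth_gbps)
    return reuse - bandwidth_penalty

def schedule(layers, accelerators):
    """Greedy one-pass assignment: each layer goes to its best-scoring container."""
    return {layer.name: max(accelerators, key=lambda a: match_score(layer, a)).name
            for layer in layers}

In this simplified view, scheduling reduces to evaluating each layer against each accelerator's container parameters, which avoids per-architecture exhaustive search; the paper's actual scheduler is more general than this greedy sketch.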
Keywords
Neural network, accelerator formalization, heterogeneity, scheduling