Data streaming and traffic gathering in mesh-based NoC for deep neural network acceleration

Journal of Systems Architecture (2022)

Abstract
The increasing popularity of deep neural network (DNN) applications demands high computing power and efficient hardware accelerator architectures. DNN accelerators use a large number of processing elements (PEs) and on-chip memory for storing weights and other parameters. As the communication backbone of a DNN accelerator, the network-on-chip (NoC) plays an important role in supporting various dataflow patterns and enabling parallelism between processing and communication. However, widely used mesh-based NoC architectures inherently cannot support the one-to-many and many-to-one traffic that is prevalent in DNN workloads efficiently. In this paper, we propose a modified mesh architecture with a one-way/two-way streaming bus to speed up one-to-many (multicast) traffic, and the use of gather packets to support many-to-one (gather) traffic. The analysis of the runtime latency of a convolutional layer shows that the two-way streaming architecture achieves a larger improvement than the one-way streaming architecture for an Output Stationary (OS) dataflow architecture. The simulation results demonstrate that gather packets can reduce the runtime latency by up to 1.8 times and the network power consumption by up to 1.7 times, compared to the repetitive unicast method on modified mesh architectures supporting two-way streaming. Furthermore, the comparison with a state-of-the-art mesh-based accelerator shows that the proposed gather supporting scheme offers advantages in both area efficiency and power efficiency.
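To make the gather-traffic argument concrete, the following is a minimal back-of-the-envelope hop-count sketch comparing repetitive unicast with an aggregating gather packet on a k x k mesh whose memory node sits at (0, 0). The aggregation route (one packet per row sweeping the row, then descending column 0) and the sink location are assumptions for illustration only, not the exact scheme or simulation setup of the paper.

```python
# Toy hop-count model for many-to-one (gather) traffic in a k x k mesh NoC.
# Assumptions (not from the paper): XY routing, sink at (0, 0), and a gather
# packet that absorbs one payload per PE as it sweeps its row before turning.

def unicast_hops(k: int) -> int:
    """Total hops when every PE at (x, y) sends its own packet to (0, 0)."""
    return sum(x + y for x in range(k) for y in range(k))

def gather_hops(k: int) -> int:
    """Total hops under the assumed gather model: one packet per row crosses
    the row (k - 1 hops) collecting payloads, then descends column 0."""
    return k * (k - 1) + sum(y for y in range(k))

if __name__ == "__main__":
    for k in (4, 8, 16):
        u, g = unicast_hops(k), gather_hops(k)
        print(f"{k}x{k} mesh: unicast hops = {u}, gather hops = {g}, "
              f"ratio = {u / g:.2f}")
```

Even this simplified model shows the packet-count (and hence hop-count) reduction that motivates gather packets over repetitive unicast; the paper's reported 1.8x latency and 1.7x power improvements come from its own cycle-accurate simulations, not from this sketch.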
Keywords
NoC, DNN, Accelerators, Collective communication, Neural networks