Optimizing Weight Mapping and Data Flow for Convolutional Neural Networks on Processing-In-Memory Architectures

IEEE Transactions on Circuits and Systems I: Regular Papers (2020)

Abstract
Recent state-of-the-art deep convolutional neural networks (CNNs) have shown remarkable success in current intelligent systems for various tasks, such as image/speech recognition and classification. A number of recent efforts have attempted to design custom inference engines based on the processing-in-memory (PIM) architecture, where the memory array is used for weighted-sum computation, thereby avoiding frequent data transfer between buffers and computation units. Prior PIM designs typically unroll each 3D kernel of the convolutional layers into a vertical column of a large weight matrix, so the input data must be accessed multiple times. In this paper, in order to maximize both weight and input data reuse for the PIM architecture, we propose a novel weight mapping method and the corresponding data flow, which divides the kernels and assigns the input data to different processing elements (PEs) according to their spatial locations. As a case study, a resistive random access memory (RRAM) based 8-bit PIM design at 32 nm is benchmarked. The proposed mapping method and data flow yield $\sim 2.03\times $ speedup and $\sim 1.4\times $ improvement in throughput and energy efficiency for ResNet-34, compared with the prior design based on the conventional mapping method. To further optimize hardware performance and throughput, we propose an optimal pipeline architecture that, at a cost of ~50% area overhead, achieves overall $913\times $ and $1.96\times $ improvements in throughput and energy efficiency, reaching 132476 FPS and 20.1 TOPS/W, respectively.
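To make the contrast between the two mapping schemes concrete, the sketch below illustrates the idea in NumPy: the conventional scheme unrolls each 3D kernel into one column of a large weight matrix, while a spatial-split mapping in the spirit of the abstract stores, in each PE, the sub-matrix of weights belonging to one kernel position (r, s), so input pixels at that offset can be reused across kernels. The array shapes, PE layout, and variable names here are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

# Assumed toy dimensions for illustration only (not the paper's setup).
K, C, R, S = 8, 4, 3, 3          # kernels, input channels, kernel height/width

kernels = np.random.randn(K, C, R, S)

# Conventional mapping: unroll each 3D kernel (C x R x S) into one vertical
# column of a single large weight matrix. Every input patch must be streamed
# to this one array, so input data is re-fetched many times.
W_conventional = kernels.reshape(K, C * R * S).T    # shape (C*R*S, K)

# Spatial-split mapping (sketch of the abstract's idea): slice each kernel by
# the spatial position (r, s) of its weights and place each slice in a
# different hypothetical PE, so PE (r, s) holds a (C x K) sub-matrix and only
# receives the input pixels at its own spatial offset.
pe_weights = {
    (r, s): kernels[:, :, r, s].T   # (C, K) sub-matrix stored in PE (r, s)
    for r in range(R)
    for s in range(S)
}

# One output pixel is then the sum of R*S small PE products instead of one
# large matrix-vector product; both mappings compute the same weighted sum.
x_patch = np.random.randn(C, R, S)                  # one receptive field
out_conv = W_conventional.T @ x_patch.reshape(-1)   # conventional result
out_pe = sum(pe_weights[(r, s)].T @ x_patch[:, r, s]
             for r in range(R) for s in range(S))   # PE-partitioned result
assert np.allclose(out_conv, out_pe)
```

Because consecutive output positions share most of their receptive fields, each PE in the split mapping can keep reusing the same input rows for neighboring outputs, which is the data-reuse benefit the abstract attributes to the proposed mapping.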
Keywords
Kernel, Random access memory, Three-dimensional displays, Arrays, Throughput, System-on-chip