11.4 IBM NorthPole: An Architecture for Neural Network Inference with a 12nm Chip

Andrew S. Cassidy, John V. Arthur, Filipp Akopyan, Alexander Andreopoulos, Rathinakumar Appuswamy, Pallab Datta, Michael V. Debole, Steven K. Esser, Carlos Ortega Otero, Jun Sawada, Brian Taba, Arnon Amir, Deepika Bablani, Peter J. Carlson, Myron D. Flickner, Rajamohan Gandhasri, Guillaume J. Garreau, Megumi Ito, Jennifer L. Klamo, Jeffrey A. Kusnitz, Nathaniel J. McClatchey, Jeffrey L. McKinstry, Yutaka Nakamura, Tapan K. Nayak, William P. Risk, Kai Schleupen, Ben Shaw, Jay Sivagnaname, Daniel F. Smith, Ignacio Terrizzano, Takanori Ueda, Dharmendra Modha

2024 IEEE International Solid-State Circuits Conference (ISSCC), 2024

Abstract
The Deep Neural Network (DNN) era was ushered in by the triad of algorithms, big data, and increasingly powerful hardware processors for training large-scale neural networks. Now, the ubiquitous deployment of DNNs for neural inference in edge, embedded, and data-center applications demands hardware processors that are more power-efficient while attaining ever-higher computational performance. To address this Inference Challenge, we developed the NorthPole Architecture and implemented a NorthPole Chip instantiation [1, 2].
Keywords
Neural Network, NorthPole, Deep Neural Network, Big Data, Weight Matrix, Functional Unit, Heat Sink, Computing Units, Cardinal Directions, Partial Sums, Precise Selection, Input Tensor, ResNet Model, Regeneration Buffer, Software Development Kit, Output Tensor, Voltage Scaling, Off-chip Memory