TALIPOT: Energy-Efficient DNN Booster Employing Hybrid Bit Parallel-Serial Processing in MSB-First Fashion

IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (2022)

Abstract
We propose TALIPOT, a novel hybrid bit parallel-serial processing technique that reduces the energy consumption of deep neural networks (DNNs). TALIPOT acts as a computation booster and has an inherent ability to adjust the trade-off between accuracy and energy consumption of DNNs. The core principle of TALIPOT is to keep the number of serial bits constant throughout the computing process, so that energy is spent effectively, with no extra waiting time between the inputs and outputs of the computing blocks (adders, multipliers, and activation blocks). To achieve this, we implement activation rounding, which scales down the accumulation of parallel bits at the outputs of the hidden layers of the DNN. TALIPOT operates in a most-significant-bit-first (MSB-first) fashion, so the most valuable bit information is obtained first between the layers of the DNN, which ensures that activation rounding is performed accurately and efficiently. Thanks to this method, we can tune the operating accuracy/energy point by cutting off output bits as soon as the desired accuracy is reached. Simulations on the MNIST and CIFAR-10 datasets show that TALIPOT outperforms state-of-the-art computation techniques in terms of energy consumption. TALIPOT performs MNIST classification with an energy efficiency of 25.3 TOPS/W and an accuracy of 98.2% in an ASIC implementation using a 40 nm CMOS process.
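To make the MSB-first idea concrete, the following is a minimal software sketch (not the paper's hardware design) of a bit-serial dot product that streams activation bits from the most significant bit downward and can be truncated early, mimicking the accuracy/energy knob the abstract describes. The function name msb_first_dot and the parameters n_bits and cutoff_bits are hypothetical, chosen for illustration only.

import numpy as np

def msb_first_dot(weights, activations, n_bits=8, cutoff_bits=None):
    """Compute sum_i w_i * a_i by streaming activation bits MSB-first.

    activations are unsigned integers in [0, 2**n_bits). Stopping after
    `cutoff_bits` bit planes yields a truncated approximation: the fewer
    bits processed, the less work done, at the cost of accuracy.
    (Illustrative sketch only; not TALIPOT's actual circuit behavior.)
    """
    if cutoff_bits is None:
        cutoff_bits = n_bits
    acc = 0
    for k in range(cutoff_bits):                      # MSB first: bit n_bits-1 downward
        bit_pos = n_bits - 1 - k
        bits = (activations >> bit_pos) & 1           # current bit plane of all activations
        acc += int(np.dot(weights, bits)) << bit_pos  # weight the plane's sum by 2**bit_pos
    return acc

rng = np.random.default_rng(0)
w = rng.integers(-8, 8, size=16)
a = rng.integers(0, 256, size=16)                     # 8-bit unsigned activations
exact = int(np.dot(w, a))
approx = msb_first_dot(w, a, n_bits=8, cutoff_bits=4) # stop after the 4 most significant bits
print(exact, approx)                                  # approx tracks exact with bounded error

Because the most significant bit planes carry the largest share of each activation's value, the partial result converges toward the exact dot product as more planes are processed, which is why an MSB-first order makes early cutoff a usable accuracy/energy trade-off.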
Keywords
Activation rounding, ASIC, deep neural network (DNN), hardware accelerator, hybrid number representation (HNR)