A method to estimate the energy consumption of deep neural networks

2017 Fifty-First Asilomar Conference on Signals, Systems, and Computers (2017)

Citations 94 | Views 65
Abstract
Deep Neural Networks (DNNs) have enabled state-of-the-art accuracy on many challenging artificial intelligence tasks. While most of the computation currently resides in the cloud, it is desirable to embed DNN processing locally near the sensor due to privacy, security, and latency concerns or limitations in communication bandwidth. Accordingly, there has been increasing interest in the research community to design energy-efficient DNNs. However, estimating energy consumption from the DNN model is much more difficult than other metrics such as storage cost (model size) and throughput (number of operations). This is due to the fact that a significant portion of the energy is consumed by data movement, which is difficult to extract directly from the DNN model. This work proposes an energy estimation methodology that can estimate the energy consumption of a DNN based on its architecture, sparsity, and bitwidth. This methodology can be used to evaluate the various DNN architectures and energy-efficient techniques that are currently being proposed in the field and guide the design of energy-efficient DNNs. We have released an online version of the energy estimation tool at energyestimation.mit.edu. We believe that this method will play a critical role in bridging the gap between algorithm and hardware design and provide useful insights for the development of energy-efficient DNNs.
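The abstract's central point is that energy cannot be read off from operation counts alone, because data movement dominates. A minimal sketch of such an estimation model is shown below; the memory-hierarchy levels, the per-access energy ratios, and the function names are illustrative assumptions for exposition, not the paper's actual numbers (the released tool at energyestimation.mit.edu implements the real methodology, which also accounts for sparsity and bitwidth).

```python
# Illustrative energy model: total energy = computation energy + data-movement
# energy. All costs are normalized to one MAC operation; the ratios below are
# hypothetical placeholders, not values from the paper.
E_MAC = 1.0  # normalized energy per multiply-accumulate
E_ACCESS = {  # normalized energy per access at each (assumed) memory level
    "DRAM": 200.0,
    "global_buffer": 6.0,
    "register_file": 1.0,
}

def estimate_layer_energy(num_macs: int, accesses: dict) -> float:
    """Estimate one layer's energy.

    num_macs: number of effective MACs (a sparsity-aware estimate would
              count only nonzero operations).
    accesses: mapping from memory level to number of data accesses at
              that level (weights, activations, and partial sums combined).
    """
    e_compute = num_macs * E_MAC
    e_data = sum(E_ACCESS[level] * n for level, n in accesses.items())
    return e_compute + e_data
```

The model makes the abstract's claim concrete: a layer with few MACs but heavy DRAM traffic can cost far more energy than a compute-heavy layer whose data stays in low-cost local storage.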
Keywords
Deep learning, deep neural network, energy estimation, energy metric, machine learning