Deep Neural Network Acceleration Framework Under Hardware Uncertainty

2018 19th International Symposium on Quality Electronic Design (ISQED)

Abstract
Deep Neural Networks (DNNs) are known to be effective models for performing cognitive tasks. However, DNNs are computationally expensive in both training and inference, as they require the precision of floating-point operations. Although several prior works have proposed approximate hardware to accelerate DNN inference, they have not considered the impact of training on accuracy. In this paper, we propose a general framework called FramNN, which adjusts the DNN training model to make it suitable for the underlying hardware. To accelerate training, FramNN applies adaptive approximation, which dynamically changes the level of hardware approximation depending on the DNN error rate. We test the efficiency of the proposed design on six popular DNN applications. Our evaluation shows that, in inference, our design achieves a 1.9x improvement in energy efficiency and a 1.7x speedup while ensuring less than 1% quality loss. Similarly, in training mode, FramNN achieves a 5.0x energy-delay product improvement compared to a baseline AMD GPU.
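The abstract does not specify how adaptive approximation is implemented; the following is a minimal sketch of one plausible control loop, assuming the approximation level is modeled as a fixed-point bit-width and the error rate comes from a per-epoch validation pass. All names (ApproxController, the thresholds, quantize) are hypothetical illustrations, not the paper's actual API.

```python
# Sketch of an adaptive-approximation control loop in the spirit of the
# FramNN description above: raise hardware precision when the DNN error
# rate is too high, and allow more approximation when the error is low.
# All identifiers and threshold values here are illustrative assumptions.

class ApproxController:
    """Adjusts the hardware approximation level, modeled here as a
    quantization bit-width, based on the observed DNN error rate."""

    def __init__(self, min_bits=4, max_bits=16,
                 raise_threshold=0.05, lower_threshold=0.01):
        self.bits = max_bits                     # start in the most precise mode
        self.min_bits = min_bits
        self.max_bits = max_bits
        self.raise_threshold = raise_threshold   # error too high -> more precision
        self.lower_threshold = lower_threshold   # error low -> more approximation

    def update(self, error_rate):
        """Called once per epoch with the current validation error rate;
        returns the bit-width to use for the next epoch."""
        if error_rate > self.raise_threshold:
            self.bits = min(self.bits + 2, self.max_bits)
        elif error_rate < self.lower_threshold:
            self.bits = max(self.bits - 2, self.min_bits)
        return self.bits


def quantize(x, bits):
    """Uniformly quantize a value in [-1, 1] to the given bit-width,
    emulating an approximate fixed-point hardware datapath."""
    levels = (1 << (bits - 1)) - 1
    return round(x * levels) / levels


if __name__ == "__main__":
    controller = ApproxController()
    # Hypothetical error rates from successive training epochs.
    for epoch, err in enumerate([0.20, 0.08, 0.03, 0.008, 0.005]):
        bits = controller.update(err)
        print(f"epoch {epoch}: error={err:.3f} -> {bits}-bit approximation")
```

The design choice the sketch illustrates is the feedback loop itself: approximation is tightened only when the training error signals accuracy loss, so cheaper low-precision modes are used whenever the network tolerates them.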
Keywords
approximate hardware,DNNs inference,DNN training model,training FramNN,adaptive approximation,hardware approximation,DNN error rate,training mode FramNN,neural network acceleration framework,hardware uncertainty,Deep Neural Networks,floating point operations,DNN applications