The Hardware and Algorithm Co-Design for Energy-Efficient DNN Processor on Edge/Mobile Devices

IEEE Transactions on Circuits and Systems I: Regular Papers (2020)

Abstract
Deep neural networks (DNNs) have been widely studied owing to their high performance and broad applicability to tasks such as image classification, detection, segmentation, translation, and action recognition. Because of this versatility and performance, DNNs are deployed on a wide range of AI platforms, from cloud servers to edge/mobile devices. However, high-performance DNNs require large amounts of computation and memory access, which makes running them on edge/mobile devices challenging. Both algorithmic and hardware approaches have been proposed to address these problems; algorithms designed with hardware acceleration in mind enable far more efficient execution of high-performance AI. This article provides an overview of recent hardware/algorithm co-design schemes for efficient DNN processing. Specifically, it covers algorithm-level optimization methods for DNN structure, neurons, synapses, and data types, as well as hardware-level optimization methods for the overall architecture, the PE array, data-path control, and the microarchitecture of the PE. Examples of ASICs co-designed at both the algorithm and hardware levels are also presented.
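To make the compute and memory pressure mentioned above concrete, the following is a minimal back-of-the-envelope sketch, not taken from the paper: the layer shape is an assumed, loosely VGG-like 3x3 convolution, and the byte widths illustrate how a reduced-precision data type (one of the data-type optimizations the abstract refers to) shrinks memory traffic.

```python
# Back-of-the-envelope cost model for a single convolutional layer.
# Illustrative only: the layer shape and byte widths are assumptions,
# not figures from the paper.

def conv_cost(h_out, w_out, c_in, c_out, k, bytes_per_value):
    """Return (MAC count, weight bytes, output-activation bytes)."""
    macs = h_out * w_out * c_out * (k * k * c_in)  # one MAC per weight tap per output pixel
    weight_bytes = k * k * c_in * c_out * bytes_per_value
    act_bytes = h_out * w_out * c_out * bytes_per_value
    return macs, weight_bytes, act_bytes

# Assumed VGG-like 3x3 layer: 224x224 feature map, 64 -> 64 channels.
for label, nbytes in [("FP32", 4), ("INT8", 1)]:
    macs, wb, ab = conv_cost(224, 224, 64, 64, 3, nbytes)
    print(f"{label}: {macs / 1e9:.2f} GMACs, "
          f"weights {wb / 1e3:.0f} kB, activations {ab / 1e6:.1f} MB")
```

Even this single layer needs roughly 1.85 GMACs and, at FP32, about 12.8 MB of output activations; quantizing to INT8 cuts weight and activation traffic by 4x (the MAC count itself is unchanged, though each MAC also becomes cheaper in hardware), which is one reason data-type optimization matters so much on edge/mobile devices.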
Keywords
Hardware, Optimization, Computer architecture, Neurons, Artificial intelligence, Performance evaluation, Synapses