Searching Toward Pareto-Optimal Device-Aware Neural Architectures

2018 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), 2018

Abstract
Recent breakthroughs in Neural Architecture Search (NAS) have achieved state-of-the-art performance in many tasks such as image classification and language understanding. However, most existing works optimize only for model accuracy and largely ignore other important factors imposed by the underlying hardware and devices at inference time, such as latency and energy. In this paper, we first introduce the NAS problem and survey recent works. We then examine in depth two recent advances that extend NAS into multiple-objective frameworks: MONAS and DPP-Net. Both MONAS and DPP-Net can optimize accuracy together with other device-imposed objectives, searching for neural architectures that can best be deployed on a wide spectrum of devices, from embedded systems and mobile devices to workstations. Experimental results show that architectures found by MONAS and DPP-Net achieve Pareto optimality w.r.t. the given objectives on various devices.
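For readers unfamiliar with the notion of Pareto optimality used in the abstract, the following minimal Python sketch shows how a Pareto front can be extracted from candidate architectures scored on multiple device-aware objectives. It illustrates the general concept only, not the actual search procedure of MONAS or DPP-Net; the objective names and example values are hypothetical.

# Minimal sketch: selecting the Pareto front among candidate architectures.
# Each candidate is scored on objectives to be minimized, e.g.
# (classification error, inference latency, energy per inference).
# Objective names and example values are illustrative assumptions.

def dominates(a, b):
    """True if candidate a dominates b: no worse in every objective
    and strictly better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Keep only candidates that no other candidate dominates."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other is not c)]

# Hypothetical (error, latency_ms, energy_mJ) tuples for searched architectures.
archs = [(0.08, 12.0, 35.0), (0.10, 6.0, 20.0), (0.09, 15.0, 50.0), (0.12, 5.0, 18.0)]
print(pareto_front(archs))  # (0.09, 15.0, 50.0) is dominated and dropped

An architecture on this front cannot improve one objective (say, latency) without giving up another (say, accuracy), which is why the search returns a set of trade-off points rather than a single "best" model per device.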
Keywords
model accuracy, latency, energy, NAS, multiple-objective frameworks, neural architectures, mobile devices, Pareto-optimal device-aware, state-of-the-art performance, image classification, language understanding, DPP-Net, neural architecture search, Pareto optimality, MONAS