Search-Time Efficient Device Constraints-Aware Neural Architecture Search

PATTERN RECOGNITION AND MACHINE INTELLIGENCE, PREMI 2023(2023)

Abstract
Edge computing aims to enable edge devices, such as IoT devices, to process data locally instead of relying on the cloud. However, deep learning techniques like computer vision and natural language processing can be computationally expensive and memory-intensive. Creating manual architectures specialized for each device is infeasible due to their varying memory and computational constraints. To address these concerns, we automate the construction of task-specific deep learning architectures optimized for device constraints through Neural Architecture Search (NAS). We present DCA-NAS, a principled method for fast neural architecture search that incorporates edge-device constraints such as model size and floating-point operations. It uses weight sharing and channel bottleneck techniques to speed up the search. Our experiments show that DCA-NAS outperforms manual architectures of similar size and is comparable to popular mobile architectures on image classification datasets including CIFAR-10, CIFAR-100, and ImageNet-1k. Experiments on the DARTS and NAS-Bench-201 search spaces demonstrate the generalization capabilities of DCA-NAS. On further evaluation with HardwareNAS-Bench, DCA-NAS discovered device-specific architectures with low inference latency and state-of-the-art performance.
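The core idea of folding device constraints into differentiable NAS can be sketched as a penalized search objective: the expected resource cost of a mixed operation (weighted by the softmax over architecture parameters) is added to the task loss whenever it exceeds a device budget. The op set, FLOP costs, and penalty form below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

# Hypothetical per-operation FLOP costs for one DARTS-style mixed edge
# (values and op set are illustrative, not taken from the paper).
OP_FLOPS = np.array([0.0, 1.2, 3.5, 7.8])  # e.g. skip, pool, conv3x3, conv5x5

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def expected_flops(alpha):
    """Expected FLOPs of the mixed edge under architecture weights alpha."""
    return float(softmax(alpha) @ OP_FLOPS)

def constrained_loss(task_loss, alpha, budget=2.0, lam=0.1):
    """Task loss plus a hinge penalty when expected FLOPs exceed the budget.

    In a differentiable NAS framework this penalty is minimized jointly
    with the task loss, steering alpha toward cheaper operations.
    """
    return task_loss + lam * max(expected_flops(alpha) - budget, 0.0)

alpha = np.zeros(4)                    # uniform architecture weights
print(expected_flops(alpha))           # 3.125 (mean of OP_FLOPS)
print(constrained_loss(1.0, alpha))    # 1.0 + 0.1 * (3.125 - 2.0) = 1.1125
```

In an actual search, `alpha` would be a tensor in an autodiff framework so the penalty's gradient pushes probability mass away from expensive operations; the budget can be set per target device (model size or FLOPs).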
Keywords
Neural Architecture Search, DARTS, Meta-Learning, Edge Inference, Constrained Optimization