Accel-NASBench: Sustainable Benchmarking for Accelerator-Aware NAS
arXiv (2024)
Abstract
One of the primary challenges impeding the progress of Neural Architecture
Search (NAS) is its extensive reliance on exorbitant computational resources.
NAS benchmarks aim to simulate runs of NAS experiments at zero cost,
obviating the need for extensive compute. However, existing NAS benchmarks
use synthetic datasets and model proxies that make simplified assumptions about
the characteristics of these datasets and models, leading to unrealistic
evaluations. We present a technique that allows searching for training proxies
that reduce the cost of benchmark construction by significant margins, making
it possible to construct realistic NAS benchmarks for large-scale datasets.
Using this technique, we construct an open-source bi-objective NAS benchmark
for the ImageNet2012 dataset combined with the on-device performance of
accelerators, including GPUs, TPUs, and FPGAs. Through extensive
experimentation with various NAS optimizers and hardware platforms, we show
that the benchmark is accurate and allows searching for state-of-the-art
hardware-aware models at zero cost.
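The core idea behind a tabular NAS benchmark, as described above, is that accuracy and on-device latency are precomputed once, so a NAS optimizer evaluates candidates by table lookup instead of training. The following is a minimal illustrative sketch of such a zero-cost bi-objective query; all names, the toy data, and the `query`/`pareto_front` helpers are hypothetical and not the paper's actual API.

```python
# Toy benchmark table: architecture id -> (top-1 accuracy %, latency in ms).
# In a real benchmark these values come from actual training runs and
# on-device measurements (GPU/TPU/FPGA); here they are made up.
BENCHMARK = {
    "arch_a": (76.1, 12.0),
    "arch_b": (75.5, 14.0),
    "arch_c": (75.0, 8.2),
}

def query(arch):
    """Zero-cost evaluation: a table lookup replaces a full training run."""
    return BENCHMARK[arch]

def pareto_front(table):
    """Return architectures not dominated in (accuracy up, latency down)."""
    front = []
    for a, (acc_a, lat_a) in table.items():
        dominated = any(
            acc_b >= acc_a and lat_b <= lat_a
            for b, (acc_b, lat_b) in table.items()
            if b != a and (acc_b, lat_b) != (acc_a, lat_a)
        )
        if not dominated:
            front.append(a)
    return sorted(front)

# arch_b is dominated by arch_a (lower accuracy, higher latency),
# so only arch_a and arch_c lie on the accuracy/latency Pareto front.
print(pareto_front(BENCHMARK))
```

A bi-objective optimizer would iterate this query over candidate architectures, trading off accuracy against device latency, without ever training a model.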