Multi-device, Multi-tenant Model Selection with GP-EI.

arXiv: Learning (2018)

Abstract
Bayesian optimization is the core technique behind the emergence of AutoML, which holds the promise of automatically searching for models and hyperparameters to make machine learning techniques more accessible. As such services are moving towards the cloud, we ask -- \emph{When multiple AutoML users share the same computational infrastructure, how should we allocate resources to maximize the global happiness of all users?} We focus on GP-EI, one of the most popular algorithms for automatic model selection and hyperparameter tuning, and develop a novel multi-device, multi-tenant extension that is aware of \emph{multiple} computation devices and multiple users sharing the same set of computation devices. Theoretically, given $N$ users and $M$ devices, we obtain a regret bound of $O((\textbf{MIU}(T,K) + M)\frac{N^2}{M})$, where $\textbf{MIU}(T,K)$ refers to the maximal incremental uncertainty up to time $T$ for the covariance matrix $K$. Empirically, we evaluate our algorithm on two applications of automatic model selection, and show that our algorithm significantly outperforms the strategy of serving users independently. Moreover, when multiple computation devices are available, we achieve near-linear speedup when the number of users is much larger than the number of devices.
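
The following is a minimal sketch, not the paper's implementation, of the two ingredients the abstract names: the GP-EI acquisition function and a scheme for allocating $M$ shared devices across $N$ tenants. The randomly generated GP posteriors, the greedy per-round allocation, and all names (`n_users`, `n_devices`, `expected_improvement`) are illustrative assumptions; the paper's actual scheduler and its regret analysis are more involved.

```python
# A hypothetical sketch of multi-tenant GP-EI scheduling (assumed, not the
# authors' code). Each user's GP posterior over candidate models is faked
# with random (mean, std) values; a real system would fit a GP to observed
# model accuracies.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best_so_far):
    """Standard EI for maximization: E[max(f - best, 0)] under N(mu, sigma^2)."""
    sigma = np.maximum(sigma, 1e-9)          # guard against zero variance
    z = (mu - best_so_far) / sigma
    return (mu - best_so_far) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
n_users, n_devices, n_models = 4, 2, 10      # N users, M devices (illustrative)

# Assumed per-user posterior (mean, std) over candidate models, plus each
# user's best observed accuracy so far.
mu = rng.uniform(0.5, 0.9, size=(n_users, n_models))
sigma = rng.uniform(0.01, 0.1, size=(n_users, n_models))
best = rng.uniform(0.6, 0.8, size=n_users)

# One scheduling round: score every (user, model) pair by EI, then greedily
# hand each free device to the highest-scoring remaining pair, so the shared
# devices chase the globally most promising evaluations.
ei = expected_improvement(mu, sigma, best[:, None])   # shape (n_users, n_models)
for device in range(n_devices):
    user, model = np.unravel_index(np.argmax(ei), ei.shape)
    print(f"device {device}: run model {model} for user {user} "
          f"(EI={ei[user, model]:.4f})")
    ei[user, :] = -np.inf    # at most one pending job per user this round
```

Masking a user's row after assignment is one simple way to spread devices across tenants within a round; the paper's notion of "global happiness" and its $O((\textbf{MIU}(T,K) + M)\frac{N^2}{M})$ regret bound rest on a more careful allocation rule.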