Online Continuous Hyperparameter Optimization for Generalized Linear Contextual Bandits
arXiv (2023)

Abstract
In stochastic contextual bandits, an agent sequentially takes actions from a
time-dependent action set based on past experience to minimize the cumulative
regret. Like many other machine learning algorithms, the performance of bandits
heavily depends on the values of hyperparameters, and theoretically derived
parameter values may lead to unsatisfactory results in practice. Moreover, it
is infeasible to use offline tuning methods like cross-validation to choose
hyperparameters under the bandit environment, as the decisions should be made
in real-time. To address this challenge, we propose the first online continuous
hyperparameter tuning framework for contextual bandits to learn the optimal
parameter configuration in practice within a search space on the fly.
Specifically, we use a double-layer bandit framework named CDT (Continuous
Dynamic Tuning) and formulate the hyperparameter optimization as a
non-stationary continuum-armed bandit, where each arm represents a combination
of hyperparameters, and the corresponding reward is the algorithmic result. For
the top layer, we propose the Zooming TS algorithm, which uses Thompson
Sampling (TS) for exploration and a restart technique to handle the
switching environment. The proposed CDT framework can be easily
utilized to tune contextual bandit algorithms without any pre-specified
candidate set for multiple hyperparameters. We further show that it achieves
sublinear regret in theory and performs consistently better than all
existing methods on both synthetic and real datasets.
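The double-layer idea described above can be illustrated with a minimal sketch: a top-layer bandit selects a continuous hyperparameter (here, the exploration coefficient of a LinUCB-style bottom layer) and updates its belief using the reward the bottom layer earns in that round. This is only a toy illustration under assumptions not stated in the abstract — it uses a fixed grid with Gaussian Thompson Sampling in place of the paper's adaptive zooming and restart machinery, and a synthetic linear-reward environment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Bottom layer: a LinUCB-style contextual bandit whose exploration
# coefficient `alpha` is the hyperparameter being tuned online.
class LinUCB:
    def __init__(self, dim):
        self.A = np.eye(dim)    # regularized Gram matrix of seen contexts
        self.b = np.zeros(dim)  # reward-weighted sum of seen contexts

    def act(self, contexts, alpha):
        theta = np.linalg.solve(self.A, self.b)
        A_inv = np.linalg.inv(self.A)
        # UCB score: estimated reward plus alpha-scaled uncertainty bonus.
        bonus = np.sqrt(np.einsum("ij,jk,ik->i", contexts, A_inv, contexts))
        return int(np.argmax(contexts @ theta + alpha * bonus))

    def update(self, x, r):
        self.A += np.outer(x, x)
        self.b += r * x

# Top layer: Thompson Sampling over a fixed grid on the search space
# [0, 1] for alpha -- a crude stand-in for the adaptive discretization
# (zooming) used by the actual Zooming TS algorithm.
grid = np.linspace(0.0, 1.0, 11)
counts = np.ones(len(grid))
means = np.zeros(len(grid))

dim, n_arms = 5, 10
theta_star = rng.normal(size=dim)
theta_star /= np.linalg.norm(theta_star)  # unknown true parameter
bandit = LinUCB(dim)

total = 0.0
for t in range(2000):
    # Top layer: sample a plausible mean reward per candidate alpha.
    samples = rng.normal(means, 1.0 / np.sqrt(counts))
    k = int(np.argmax(samples))
    alpha = grid[k]
    # Bottom layer: one contextual-bandit round with the chosen alpha.
    contexts = rng.normal(size=(n_arms, dim))
    a = bandit.act(contexts, alpha)
    r = contexts[a] @ theta_star + 0.1 * rng.normal()
    bandit.update(contexts[a], r)
    # Top layer: posterior update with the observed round reward.
    counts[k] += 1
    means[k] += (r - means[k]) / counts[k]
    total += r

print(round(total, 1))
```

The key point the sketch captures is that the top layer treats each hyperparameter value as a continuum arm whose "reward" is the algorithmic result of the bottom-layer bandit, so tuning happens on the fly with no offline cross-validation.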