Stability-penalty-adaptive follow-the-regularized-leader: Sparsity, game-dependency, and best-of-both-worlds
NeurIPS 2023
Abstract
Adaptivity to the difficulty of a problem is a key property in sequential
decision-making, as it broadens the applicability of algorithms.
Follow-the-regularized-leader (FTRL) has recently emerged as one of the most
promising approaches for obtaining various types of adaptivity in bandit
problems. Aiming to further generalize this adaptivity, we develop a generic
adaptive learning rate, called the stability-penalty-adaptive (SPA) learning
rate, for FTRL. This learning rate yields a regret bound that depends jointly
on the stability and penalty of the algorithm, the two terms into which the
regret of FTRL is typically decomposed. With this result, we establish several algorithms with
three types of adaptivity: sparsity, game-dependency, and best-of-both-worlds
(BOBW). Although sparsity appears frequently in real problems, existing
sparse multi-armed bandit algorithms with k arms assume that the
sparsity level s ≤ k is known in advance, which is often not the case in
real-world scenarios. To address this issue, we first establish s-agnostic
algorithms with regret bounds of Õ(√(sT)) in the adversarial
regime for T rounds, which match the existing lower bound up to a
logarithmic factor. Meanwhile, BOBW algorithms aim to achieve a near-optimal
regret in both the stochastic and adversarial regimes. Leveraging the SPA
learning rate and the technique for s-agnostic algorithms combined with a new
analysis to bound the variation in FTRL output in response to changes in a
regularizer, we establish the first BOBW algorithm with a sparsity-dependent
bound. Additionally, we explore partial monitoring and demonstrate that the
proposed SPA learning rate framework allows us to achieve a game-dependent
bound and BOBW simultaneously.
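To make the stability-penalty idea concrete, below is a minimal, hypothetical sketch of FTRL with a negative-entropy regularizer on a k-armed bandit, where the learning rate shrinks as an accumulated stability proxy grows. The schedule eta_t = sqrt(c / (1 + sum of past proxies)), the proxy itself, and the helper `run_bandit` are illustrative assumptions only; the paper's SPA learning rate additionally incorporates a penalty component and is defined precisely in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def ftrl_neg_entropy(cum_loss, eta):
    """FTRL with a negative-entropy regularizer on the simplex reduces to
    exponential weights: p_i proportional to exp(-eta * cum_loss_i)."""
    shifted = cum_loss - cum_loss.min()  # shift for numerical stability
    w = np.exp(-eta * shifted)
    return w / w.sum()

def run_bandit(losses, c=1.0):
    """Play a k-armed adversarial bandit with an illustrative,
    stability-driven learning-rate schedule (NOT the paper's exact SPA rate)."""
    T, k = losses.shape
    cum_est = np.zeros(k)  # cumulative importance-weighted loss estimates
    stab_sum = 0.0         # accumulated per-round stability proxy
    total = 0.0
    for t in range(T):
        # hypothetical schedule: eta_t = sqrt(c / (1 + sum of past proxies))
        eta = np.sqrt(c / (1.0 + stab_sum))
        p = ftrl_neg_entropy(cum_est, eta)
        arm = rng.choice(k, p=p)
        loss = losses[t, arm]
        total += loss
        est = np.zeros(k)
        est[arm] = loss / p[arm]   # unbiased importance-weighted estimate
        cum_est += est
        stab_sum += p @ est**2     # proxy: expected squared estimate under p
    return total

# toy run: arm 0 has the smallest mean loss
T, k = 5000, 10
losses = rng.random((T, k))
losses[:, 0] *= 0.3
print("learner's cumulative loss:", run_bandit(losses))
print("best arm's cumulative loss:", losses.sum(axis=0).min())
```

With the negative-entropy regularizer, the FTRL update has the closed form above; the paper's analysis covers more general regularizers and shows how to set the learning rate from both the stability and penalty components of the regret.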
Keywords
stability-penalty-adaptive, follow-the-regularized-leader, game-dependency, best-of-both-worlds