(ϵ, u)-Adaptive Regret Minimization in Heavy-Tailed Bandits

CoRR (2023)

Abstract
Heavy-tailed distributions naturally arise in several settings, from finance to telecommunications. While regret minimization under sub-Gaussian or bounded rewards has been widely studied, learning with heavy-tailed distributions has gained popularity only over the last decade. In this paper, we consider the setting in which the reward distributions have finite absolute raw moments of maximum order 1+ϵ, uniformly bounded by a constant u < +∞, for some ϵ ∈ (0, 1]. In this setting, we study the regret minimization problem when ϵ and u are unknown to the learner, who must therefore adapt. First, we show that adaptation comes at a cost, deriving two negative results which prove that the regret guarantees of the non-adaptive case cannot be achieved without further assumptions. Then, we devise and analyze a fully data-driven trimmed-mean estimator and propose a novel adaptive regret minimization algorithm, AdaR-UCB, that leverages this estimator. Finally, we show that AdaR-UCB is the first algorithm that, under a known distributional assumption, enjoys regret guarantees nearly matching those of the non-adaptive heavy-tailed case.
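The abstract refers to a trimmed-mean estimator plugged into a UCB-style index. As a rough illustration of the non-adaptive baseline that AdaR-UCB aims to nearly match, below is a minimal sketch of a truncated-mean Robust-UCB that assumes ϵ and u are known (following the standard truncated-mean recipe, not the paper's data-driven AdaR-UCB estimator); the truncation thresholds, the constant 4 in the confidence width, the per-round confidence level, and the toy Pareto arms are all illustrative assumptions.

import numpy as np

def truncated_mean(rewards, eps, u, delta):
    # Truncated empirical mean for samples with E[|X|^(1+eps)] <= u.
    # Sample i is kept only if |X_i| <= (u * i / log(1/delta))^(1/(1+eps));
    # discarded samples contribute 0, and we still divide by n.
    x = np.asarray(rewards, dtype=float)
    n = len(x)
    idx = np.arange(1, n + 1)
    thresholds = (u * idx / np.log(1.0 / delta)) ** (1.0 / (1.0 + eps))
    return np.where(np.abs(x) <= thresholds, x, 0.0).mean()

def robust_ucb_index(rewards, eps, u, t):
    # Optimistic index: truncated mean plus a heavy-tailed confidence
    # width of order u^(1/(1+eps)) * (log(1/delta) / n)^(eps/(1+eps)).
    n = len(rewards)
    delta = t ** -2.0  # per-round confidence level (illustrative choice)
    width = 4.0 * u ** (1.0 / (1.0 + eps)) \
        * (np.log(1.0 / delta) / n) ** (eps / (1.0 + eps))
    return truncated_mean(rewards, eps, u, delta) + width

# Toy run on two heavy-tailed (Pareto/Lomax) arms; eps and u are
# illustrative values under which the moment condition can hold.
rng = np.random.default_rng(0)
arms = [lambda: rng.pareto(1.5), lambda: 0.5 * rng.pareto(1.5)]
eps, u, T = 0.2, 5.0, 2000
pulls = [[f()] for f in arms]  # initialize: pull each arm once
for t in range(len(arms) + 1, T + 1):
    a = max(range(len(arms)),
            key=lambda k: robust_ucb_index(pulls[k], eps, u, t))
    pulls[a].append(arms[a]())

AdaR-UCB replaces the known-(ϵ, u) truncation above with a fully data-driven trimmed mean, which is precisely the adaptation the paper's negative results show cannot come for free.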
Keywords
adaptive regret minimization, bandits, heavy-tailed