Small-loss bounds for online learning with partial information.

Conference on Learning Theory (2018)

Cited by 43 | Views 178
Abstract
We consider the problem of adversarial (non-stochastic) online learning with partial information feedback, where at each round a decision maker selects an action from a finite set of alternatives. We develop a black-box approach for such problems, in which the learner observes as feedback only the losses of a subset of the actions that includes the selected action. When the losses of actions are non-negative, under the graph-based feedback model introduced by Mannor and Shamir, we offer algorithms that attain the so-called "small-loss" \(o(\alpha L^{\star})\) regret bounds with high probability, where \(\alpha\) is the independence number of the feedback graph and \(L^{\star}\) is the loss of the best action. Prior to our work, there was no data-dependent guarantee for general feedback graphs even for pseudo-regret (without dependence on the number of actions, i.e., utilizing the increased information feedback). Taking advantage of the black-box nature of our technique, we extend our results to many other applications, such as semi-bandits (including routing in networks), contextual bandits (even with an infinite comparator class), and learning with slowly changing (shifting) comparators.

In the special case of classical bandit and semi-bandit problems, we provide optimal small-loss, high-probability guarantees of \(\widetilde{O}(\sqrt{d L^{\star}})\) for actual regret, where \(d\) is the number of actions, answering open questions of Neu. Previous bounds for bandits and semi-bandits were known only for pseudo-regret and only in expectation. We also offer an optimal \(\widetilde{O}(\sqrt{\kappa L^{\star}})\) regret guarantee for fixed feedback graphs with clique-partition number at most \(\kappa\).
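To make the quantities in the abstract concrete, the sketch below runs the classical EXP3 algorithm (Auer et al.) on an adversarial loss sequence with bandit feedback; this is a standard baseline, not the paper's algorithm, and the loss sequence and learning rate `eta` are illustrative choices. It shows how \(L^{\star}\) (the loss of the best fixed action in hindsight) and the actual regret, the quantities the paper's \(\widetilde{O}(\sqrt{d L^{\star}})\) bound controls, are measured.

```python
import math
import random


def exp3(losses, eta):
    """Classical EXP3 with importance-weighted loss estimates.

    `losses` is a list of per-round loss vectors in [0, 1]; only the loss
    of the sampled action is observed (bandit feedback). Returns the
    learner's cumulative loss.
    """
    d = len(losses[0])
    weights = [1.0] * d
    total_loss = 0.0
    for round_losses in losses:
        z = sum(weights)
        probs = [w / z for w in weights]
        # Sample an action; the adversary's other losses stay hidden.
        a = random.choices(range(d), weights=probs)[0]
        loss = round_losses[a]
        total_loss += loss
        # Unbiased importance-weighted estimate of the observed loss.
        estimate = loss / probs[a]
        weights[a] *= math.exp(-eta * estimate)
    return total_loss


# Toy adversarial sequence: at round t, action t % 3 has loss 0, the rest 1.
losses = [[0.0 if a == t % 3 else 1.0 for a in range(3)] for t in range(300)]

# L_star: loss of the best fixed action in hindsight.
L_star = min(sum(l[a] for l in losses) for a in range(3))

random.seed(0)
total = exp3(losses, eta=0.05)
regret = total - L_star
```

Small-loss (first-order) bounds replace the horizon \(T\) in the worst-case \(\widetilde{O}(\sqrt{dT})\) rate with \(L^{\star}\), so regret shrinks on "easy" data where some action incurs little loss.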
Keywords
online learning, feedback graphs, bandit algorithms, semi-bandits, contextual bandits, partial information, regret bounds, small-loss bounds, first-order bounds, high probability