Bandits with Movement Costs and Adaptive Pricing

COLT 2017

Abstract
We extend the model of Multi-armed Bandit with unit switching cost to incorporate a metric between the actions. We consider the case where the metric over the actions can be modeled by a complete binary tree, and the distance between two leaves is the size of the subtree of their least common ancestor; this abstracts the case where the actions are points on the continuous interval [0,1] and the switching cost is their distance. In this setting, we give a new algorithm that establishes a regret of O(√(kT) + T/k), where k is the number of actions and T is the time horizon. When the set of actions corresponds to the whole interval [0,1], our method can be exploited for bandit learning with Lipschitz loss functions, where our algorithm achieves an optimal regret rate of Θ(T^2/3), the same rate one obtains when there is no penalty for movements. As our main application, we use the new algorithm to solve an adaptive pricing problem. Specifically, we consider a single seller faced with a stream of patient buyers. Each buyer has a private value and a window of time in which they are interested in buying, and they buy at the lowest price posted in the window, if it is below their value. We show that with an appropriate discretization of the prices, the seller can achieve a regret of O(T^2/3) compared to the best fixed price in hindsight, which improves on the previous regret bound of O(T^3/4) for this problem.
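The tree metric described in the abstract can be made concrete with a short sketch. The Python snippet below is not from the paper; the leaf indexing, the function name tree_distance, and the interpretation of "size of the subtree" as its number of leaves are assumptions chosen to match the [0,1] abstraction above.

```python
# Minimal sketch of the tree metric (assumed details: leaves indexed 0..k-1
# by their root-to-leaf bit paths, "size" = number of leaves in the LCA's
# subtree).  Not the authors' code.

def tree_distance(i: int, j: int, depth: int) -> int:
    """Distance between leaves i and j of a complete binary tree of the given
    depth: the number of leaves in the subtree rooted at their least common
    ancestor."""
    if i == j:
        return 0
    common_prefix = 0
    for bit in reversed(range(depth)):      # compare bit paths from the root down
        if (i >> bit) & 1 == (j >> bit) & 1:
            common_prefix += 1
        else:
            break
    return 2 ** (depth - common_prefix)     # leaves under the LCA

if __name__ == "__main__":
    d, k = 3, 8                              # complete binary tree with k = 2^d leaves
    print(tree_distance(0, 1, d))            # siblings: LCA subtree has 2 leaves
    print(tree_distance(0, 7, d))            # LCA is the root: distance k = 8
    # After normalizing by k, the tree distance upper-bounds |i - j| / k,
    # sometimes loosely:
    print(tree_distance(2, 3, d) / k, abs(2 - 3) / k)
    print(tree_distance(3, 4, d) / k, abs(3 - 4) / k)
```

The patient-buyer rule from the pricing application can likewise be stated directly in code; again this is only a sketch of the behavioral model described in the abstract, with a hypothetical function name and a tie-breaking convention (buying at a price equal to the value) that the abstract does not specify.

```python
def buyer_revenue(prices_in_window, value):
    """Revenue from one patient buyer: they buy at the lowest price posted
    during their window, provided it does not exceed their private value
    (tie convention assumed); otherwise they do not buy and revenue is 0."""
    lowest = min(prices_in_window)
    return lowest if lowest <= value else 0.0
```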