Distributed Stochastic Online Learning Policies for Opportunistic Spectrum Access

IEEE Transactions on Signal Processing (2014)

Cited by 62
Abstract
The fundamental problem of multiple secondary users contending for opportunistic spectrum access over multiple channels in cognitive radio networks has recently been formulated as a decentralized multi-armed bandit (D-MAB) problem. In a D-MAB problem there are M users and N arms (channels), each offering i.i.d. stochastic rewards with unknown means so long as it is accessed without collision. The goal is to design distributed online learning policies that incur minimal regret. We consider two related problem formulations in this paper. First, we consider the setting where the users have a prioritized ranking, such that the K-th-ranked user should learn to access the arm offering the K-th highest mean reward. For this problem, we present DLP, the first distributed policy that yields regret that is uniformly logarithmic over time without requiring any prior assumption about the mean rewards. Second, we consider the case when a fair access policy is required, i.e., all users should experience the same mean reward. For this problem, we present DLF, a distributed policy that yields order-optimal regret scaling with respect to the number of users and arms, better than previously proposed policies in the literature. Both of our distributed policies make use of an innovative modification of the well-known UCB1 policy for the classic multi-armed bandit problem that allows a single user to learn how to play the arm that yields the K-th largest mean reward.
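To make the single-user building block concrete, here is a minimal sketch of a "learn the K-th best arm" index policy in the spirit of the modified UCB1 described above. The function name `sl_k` and all implementation details are our own illustrative assumptions, not the authors' exact pseudocode: among the K arms with the largest upper confidence bounds, the user plays the one with the smallest lower confidence bound.

```python
import math
import random

def sl_k(means, K, horizon, seed=0):
    """Illustrative sketch (not the paper's exact algorithm): a single
    user tries to converge on the arm with the K-th largest mean reward.

    At each round, compute UCB1-style upper and lower confidence bounds
    for every arm, take the K arms with the largest upper bounds, and
    play the one among them with the smallest lower bound.
    """
    rng = random.Random(seed)  # Bernoulli rewards, simulated locally
    N = len(means)
    counts = [0] * N   # number of pulls per arm
    sums = [0.0] * N   # total reward per arm

    def pull(i):
        r = 1.0 if rng.random() < means[i] else 0.0
        counts[i] += 1
        sums[i] += r

    for i in range(N):  # initialization: play each arm once
        pull(i)

    for t in range(N, horizon):
        pad = [math.sqrt(2.0 * math.log(t + 1) / counts[i]) for i in range(N)]
        ucb = [sums[i] / counts[i] + pad[i] for i in range(N)]
        lcb = [sums[i] / counts[i] - pad[i] for i in range(N)]
        # candidate set: K arms with the largest upper confidence bounds
        top_k = sorted(range(N), key=lambda i: ucb[i], reverse=True)[:K]
        # within the candidates, play the arm with the smallest lower bound,
        # i.e., the one most plausibly ranked K-th
        pull(min(top_k, key=lambda i: lcb[i]))
    return counts
```

With arm means [0.9, 0.7, 0.5, 0.3] and K = 2, the pull counts concentrate on the second-best arm (mean 0.7) as the horizon grows. In the prioritized setting, user K would run such a policy targeting rank K; in the fair-access setting, users would rotate through the ranks.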
Keywords
UCB1 policy, decentralized multi-armed bandit problem, cognitive radio networks, decentralized multi-armed bandit, stochastic rewards, cognitive radio, dynamic spectrum access, multi-armed bandit problem, online learning, telecommunication channels, opportunistic spectrum access, order-optimal regret scaling, multiple channels, distributed stochastic online learning policies, D-MAB problem, innovative modification