Index-based sampling policies for tracking dynamic networks under sampling constraints

INFOCOM (2011)

Abstract
We consider the problem of tracking the topology of a large-scale dynamic network with limited monitoring resources. By modeling the dynamics of links as independent ON-OFF Markov chains, we formulate the problem as that of maximizing the overall accuracy of tracking link states when only a limited number of network elements can be monitored at each time step. We consider two forms of sampling policies: link sampling, where we directly observe the selected links, and node sampling, where we observe the states of the links adjacent to the selected nodes. We reduce the link sampling problem to a Restless Multi-armed Bandit (RMB) and prove its indexability under certain conditions. Applying Whittle's index policy, we develop an efficient link sampling policy, together with methods to compute the Whittle index explicitly. Under node sampling, we use a linear programming (LP) formulation to derive an extended policy that reduces to selecting the nodes with maximum coverage of the Whittle indices. We also derive performance upper bounds in both sampling scenarios. Simulations demonstrate the efficacy of the proposed policies: compared with the myopic policy, our solution achieves significantly better tracking performance for heterogeneous links.
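As a rough illustration of the tracking setup described above, the sketch below simulates independent ON-OFF Markov links, propagates a per-link belief (the probability that the link is ON), and probes a fixed number of links per step using a simple uncertainty index. The link parameters, the index min(b, 1-b) (the myopic error of a MAP state estimate for an unobserved link), and all names here are illustrative assumptions; the paper's policy uses Whittle's index, whose derivation is not reproduced in this sketch.

```python
# Illustrative sketch only: belief tracking for independent ON-OFF Markov links
# and a greedy index-based sampling rule. Parameters, the uncertainty index
# min(b, 1-b), and all names are assumptions for illustration; the paper's
# actual policy is based on Whittle's index, which is not derived here.
import numpy as np

rng = np.random.default_rng(0)

n_links, budget, horizon = 20, 3, 200        # assumed problem sizes
p01 = rng.uniform(0.05, 0.3, n_links)        # OFF -> ON transition probabilities
p10 = rng.uniform(0.05, 0.3, n_links)        # ON -> OFF transition probabilities

state = (rng.random(n_links) < 0.5).astype(int)   # true (hidden) link states
belief = np.full(n_links, 0.5)                    # tracker's P(link is ON)

errors = 0.0
for t in range(horizon):
    # Index of each link: expected MAP-estimate error if the link is left unobserved.
    index = np.minimum(belief, 1.0 - belief)
    probe = np.argsort(index)[-budget:]           # probe the 'budget' most uncertain links

    # Evolve the true state of every link through its ON-OFF chain.
    flip_up = (state == 0) & (rng.random(n_links) < p01)
    flip_dn = (state == 1) & (rng.random(n_links) < p10)
    state = np.where(flip_up, 1, np.where(flip_dn, 0, state))

    # Predict all beliefs one step forward, then correct the probed links exactly.
    belief = belief * (1.0 - p10) + (1.0 - belief) * p01
    belief[probe] = state[probe]

    # Tracking error of the MAP estimate (probed links are estimated exactly).
    estimate = (belief >= 0.5).astype(int)
    errors += np.mean(estimate != state)

print(f"average per-step tracking error: {errors / horizon:.3f}")
```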
Keywords
restless multi-armed bandit (RMB), Whittle's index policy, index-based sampling policies, link sampling, node sampling, network sampling, sampling constraints, sampling methods, dynamic network topology tracking, large-scale dynamic networks, ON-OFF Markov chains, Markov processes, linear programming (LP) formulation, myopic policy, heterogeneous links, network theory (graphs), network topology, upper bound, accuracy, indexability