Approximation algorithms for stochastic orienteering

SODA 2012

Cited by 57
Abstract
In the Stochastic Orienteering problem, we are given a metric in which each node holds a job with a deterministic reward and a random size. (Think of the jobs as chores one needs to run, and the sizes as the time each chore takes.) The goal is to adaptively decide which nodes to visit so as to maximize the total expected reward, subject to the constraint that the total distance traveled plus the total size of the jobs processed is at most a given budget B. (That is, we collect the reward for every chore we finish by the end of the day.) The (random) size of a job is not known until the job is completely processed. The problem therefore combines aspects of both the stochastic knapsack problem with uncertain item sizes and the deterministic orienteering problem of using a limited travel budget to maximize the reward gathered at nodes. In this paper, we present a constant-factor approximation algorithm for the best non-adaptive policy for the Stochastic Orienteering problem. We also show a small adaptivity gap, i.e., the existence of a non-adaptive policy whose reward is at least an Ω(1/log log B) fraction of the optimal expected reward, and hence we also obtain an O(log log B)-approximation algorithm for the adaptive problem. Finally, we address the case in which the node rewards are also random and may be correlated with the processing time, and give a non-adaptive policy that is an O(log n log B)-approximation to the best adaptive policy on n-node metrics with budget B.
Keywords
stochastic orienteering,total expected reward,deterministic reward,stochastic knapsack problem,non-adaptive policy,node reward,adaptive problem,Stochastic Orienteering problem,approximation algorithm,deterministic orienteering problem,adaptive policy,log log B