Sampling-based search for a semi-cooperative target.

IROS 2020

Abstract
Searching for a lost teammate is an important task for multirobot systems. We present a variant of rapidly-expanding random trees (RRT) for generating search paths based on a probabilistic belief of the target teammate’s position. The belief is updated using a hidden Markov model built from knowledge of the target’s planned or historic behavior. For any candidate search path, this belief is used to compute a discounted reward which is a weighted sum of the connection probability at each time step. The RRT search algorithm uses randomly sampled locations to generate candidate vertices and adds candidate vertices to a planning tree based on bounds on the discounted reward. Candidate vertices are along the shortest path from an existing vertex to the sampled location, biasing the search based on the topology of the environment. This method produces high quality search paths which are not constrained to a grid and can be computed fast enough to be used in real time. Compared with two other strategies, it found the target significantly faster in the most difficult 60% of situations and was similar in the easier 40% of situations.
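The abstract sketches two pieces that can be illustrated in code: an HMM forward step that updates the belief over the target's position, and the discounted reward for a candidate search path, computed as a weighted sum of connection probabilities at each time step. The sketch below is a minimal, hypothetical rendering under assumed conventions (a discrete cell grid, a discount factor `gamma`, and a `connect_prob` matrix giving the probability of connecting with the target in one cell from another); the paper's exact formulation, symbols, and parameter values are not given in this abstract.

```python
import numpy as np

def hmm_belief_update(belief, transition, observation_likelihood):
    """One HMM forward step over discrete cells (hypothetical discretization).
    Predict with the target's motion model (from planned/historic behavior),
    then weight by the likelihood of the searcher's observation."""
    predicted = transition.T @ belief
    updated = predicted * observation_likelihood
    return updated / updated.sum()

def discounted_reward(path_cells, beliefs, connect_prob, gamma=0.95):
    """Discounted reward of a candidate search path: a weighted sum of the
    connection probability at each time step. beliefs[t] is the belief over
    cells at time t; connect_prob[i][j] is the (assumed) probability of
    connecting with a target in cell j when the searcher is in cell i."""
    reward = 0.0
    for t, cell in enumerate(path_cells):
        # Expected probability of connecting at time t, under the belief.
        p_connect_t = sum(beliefs[t][c] * connect_prob[cell][c]
                          for c in range(len(beliefs[t])))
        reward += gamma**t * p_connect_t
    return reward
```

In the RRT variant described, a bound on this reward would decide whether a candidate vertex (placed along the shortest path from an existing vertex to a sampled location) is added to the planning tree; that bounding logic is not reproduced here.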
Keywords
historic behavior, candidate search path, discounted reward, weighted sum, connection probability, RRT search algorithm, candidate vertices, planning tree, shortest path, sampled location, high quality search paths, lost teammate, multirobot systems, random trees, probabilistic belief, target teammate, hidden Markov model, planned behavior