Non-Markovian Rewards Expressed in LTL: Guiding Search Via Reward Shaping.
SOCS (2017)
Abstract
We propose an approach to solving Markov Decision Processes with non-Markovian rewards specified in Linear Temporal Logic interpreted over finite traces (LTL-f). Our approach integrates automata representations of LTL-f formulae into compiled MDPs that can be solved by off-the-shelf MDP planners, exploiting reward shaping to help guide search. Experiments with state-of-the-art UCT-based MDP planner PROST show automata-based reward shaping to be an effective method to guide search, producing solutions of superior quality, while maintaining policy optimality guarantees.
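The core idea the abstract describes can be illustrated with a small sketch: track an LTL-f goal with a deterministic automaton alongside the MDP, and use potential-based reward shaping over automaton states to reward progress toward acceptance. The DFA below (for a hypothetical goal "eventually a, then eventually b"), the distance-based potential, and the discount factor are illustrative assumptions, not the paper's exact construction.

```python
# Hedged sketch: potential-based reward shaping driven by a DFA that
# tracks an LTL-f goal (here: "eventually a, then eventually b").
# All concrete values below are assumptions for illustration.
GAMMA = 0.9

# DFA states: 0 = start, 1 = seen 'a', 2 = accepting (seen 'a' then 'b').
DFA_DELTA = {
    (0, 'a'): 1, (0, 'b'): 0,
    (1, 'a'): 1, (1, 'b'): 2,
    (2, 'a'): 2, (2, 'b'): 2,
}
ACCEPTING = {2}


def dfa_step(q, label):
    """Advance the automaton on the label observed in the current MDP state."""
    return DFA_DELTA[(q, label)]


def potential(q, dist={0: 2, 1: 1, 2: 0}):
    """Potential = negative graph distance from q to the accepting state."""
    return -dist[q]


def shaped_reward(q, q_next, base_reward=0.0):
    """Potential-based shaping term; this form preserves optimal policies."""
    return base_reward + GAMMA * potential(q_next) - potential(q)


def run_trace(labels, q0=0):
    """Run a labelled trace through the automaton, accumulating shaped reward."""
    q, total = q0, 0.0
    for lab in labels:
        q_next = dfa_step(q, lab)
        total += shaped_reward(q, q_next)
        q = q_next
    return q, total
```

In a compiled product MDP, the planner would see the automaton state as part of the state and the shaped reward as part of the reward function, so an off-the-shelf solver like PROST receives dense progress signals without the shaping changing which policies are optimal.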
Keywords
LTL, search, non-Markovian