Adaptive Multi-Goal Exploration

International Conference on Artificial Intelligence and Statistics (AISTATS), Vol. 151, 2022

Abstract
We introduce a generic strategy for provably efficient multi-goal exploration. It relies on AdaGoal, a novel goal selection scheme that leverages a measure of uncertainty in reaching states to adaptively target goals that are neither too difficult nor too easy. We show how AdaGoal can be used to tackle the objective of learning an ε-optimal goal-conditioned policy for the (initially unknown) set of goal states that are reachable within L steps in expectation from a reference state s_0 in a reward-free Markov decision process. In the tabular case with S states and A actions, our algorithm requires Õ(L^3 S A ε^{-2}) exploration steps, which is nearly minimax optimal. We also readily instantiate AdaGoal in linear mixture Markov decision processes, yielding the first goal-oriented PAC guarantee with linear function approximation. Beyond its strong theoretical guarantees, we anchor AdaGoal in goal-conditioned deep reinforcement learning, both conceptually and empirically, by connecting its idea of selecting "uncertain" goals to maximizing value ensemble disagreement.
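As a rough illustration of the last point, the "select goals that maximize value ensemble disagreement" idea can be sketched as follows. This is a minimal sketch with invented names and shapes (`select_goal`, a `(members, goals)` value array), not the paper's actual implementation:

```python
import numpy as np

def select_goal(value_ensemble: np.ndarray) -> int:
    """Pick the goal with maximal ensemble disagreement.

    value_ensemble: array of shape (n_members, n_goals); each row holds one
    ensemble member's value estimates V(s_0, g) for every candidate goal g.
    The per-goal standard deviation across members serves as an uncertainty
    proxy: goals all members agree on (too easy or clearly unreachable) score
    low, while goals the ensemble disagrees about score high.
    """
    disagreement = value_ensemble.std(axis=0)  # shape (n_goals,)
    return int(np.argmax(disagreement))

# Toy example: 2 ensemble members, 3 candidate goals.
# Members agree on goals 0 and 2 but disagree on goal 1,
# so goal 1 is selected as the exploration target.
values = np.array([[1.0, 0.2, 0.5],
                   [1.0, 0.8, 0.5]])
print(select_goal(values))
```

In a deep RL instantiation, `value_ensemble` would come from independently initialized goal-conditioned value networks; here a fixed array stands in for their outputs.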
Keywords
exploration, multi-goal