Multi-Task Reinforcement Learning in Reproducing Kernel Hilbert Spaces via Cross-Learning

IEEE TRANSACTIONS ON SIGNAL PROCESSING (2021)

Abstract
Reinforcement learning is a framework for optimizing an agent's policy using rewards that the system reveals in response to actions. In its standard form, reinforcement learning involves a single agent that uses its policy to accomplish a specific task. These methods require many reward samples to achieve good performance and may generalize poorly when the task is modified, even if the new task is related. In this paper we are interested in a collaborative scheme in which multiple policies are optimized jointly. To this end, we introduce cross-learning, in which policies are trained for related tasks in separate environments while being constrained to remain close to one another. Two properties make our new approach attractive: (i) it produces a multi-task central policy that can be used as a starting point to adapt quickly to any of the tasks trained for, and (ii) as in meta-learning, it adapts to environments that are related to but different from those seen during training. We focus on policies belonging to reproducing kernel Hilbert spaces, for which we bound the distance between the task-specific policies and the cross-learned policy. To solve the resulting optimization problem, we resort to a projected policy gradient algorithm and prove that it converges to a near-optimal solution with high probability. We evaluate our methodology on a navigation example in which an agent moves through environments containing obstacles of multiple shapes and avoids obstacle shapes it was not trained on.
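To make the joint optimization concrete, the following is a minimal sketch of the cross-learning idea, not the paper's algorithm: it uses plain finite-dimensional parameter vectors instead of RKHS policies, toy quadratic surrogate objectives instead of stochastic policy-gradient estimates, and a heuristic projection onto an epsilon-ball around the mean (central) policy. All function and variable names are illustrative placeholders.

```python
# Simplified cross-learning sketch: each task-specific policy takes a
# gradient step on its own objective, then is projected back so it stays
# within an eps-ball of the central (mean) policy. This is a hedged
# illustration, not the paper's RKHS-based projected policy gradient.

import numpy as np

def cross_learning_step(policies, task_gradients, step_size=0.1, eps=0.5):
    """One projected-gradient iteration over all task-specific policies."""
    # 1) Independent gradient-ascent step per task.
    updated = [theta + step_size * grad(theta)
               for theta, grad in zip(policies, task_gradients)]

    # 2) Central policy: average of the task-specific policies.
    central = np.mean(updated, axis=0)

    # 3) Heuristic projection enforcing the cross-learning proximity
    #    constraint ||theta_i - central|| <= eps.
    projected = []
    for theta in updated:
        diff = theta - central
        norm = np.linalg.norm(diff)
        if norm > eps:
            theta = central + eps * diff / norm
        projected.append(theta)
    return projected, central

# Toy usage: two related quadratic "tasks" with different optima.
targets = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
grads = [lambda th, t=t: -(th - t) for t in targets]  # gradient of -0.5*||th - t||^2
policies = [np.zeros(2), np.zeros(2)]
for _ in range(100):
    policies, central = cross_learning_step(policies, grads)
print("central policy:", central)
```

In this toy setting the central policy settles between the two task optima, while each task-specific policy stays within the eps-ball around it, mirroring the trade-off between per-task performance and cross-task proximity described in the abstract.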
Keywords
Task analysis, Kernel, Training, Navigation, Convergence, Reinforcement learning, Optimization, Multi-task learning, Meta-learning