Improving Environment Robustness of Deep Reinforcement Learning Approaches for Autonomous Racing Using Bayesian Optimization-based Curriculum Learning
CoRR (2023)
Abstract
Deep reinforcement learning (RL) approaches have been broadly applied to a
large number of robotics tasks, such as robot manipulation and autonomous
driving. However, an open problem in deep RL is learning policies that are
robust to variations in the environment, which is an important condition for
such systems to be deployed into real-world, unstructured settings. Curriculum
learning is one approach that has been applied to improve generalization
performance in both supervised and reinforcement learning domains, but
selecting the appropriate curriculum to achieve robustness can be a
user-intensive process. In our work, we show that performing probabilistic
inference of the underlying curriculum-reward function using Bayesian
Optimization can be a promising technique for finding a robust curriculum. We
demonstrate that a curriculum found with Bayesian optimization can outperform a
vanilla deep RL agent and a hand-engineered curriculum in the domain of
autonomous racing with obstacle avoidance. Our code is available at
https://github.com/PRISHIta123/Curriculum_RL_for_Driving.
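The curriculum search described above can be sketched as a generic Bayesian-optimization loop: a Gaussian process models the mapping from curriculum parameters to agent reward, and an acquisition function picks the next curriculum to evaluate. Everything below is an illustrative assumption, not the paper's actual setup: the 1-D `difficulty` parameter, the toy `agent_reward` objective (a stand-in for training an RL agent under a curriculum), and the upper-confidence-bound acquisition.

```python
import numpy as np

def agent_reward(difficulty):
    # Toy stand-in for the expensive objective; in the paper's setting this
    # would be the final reward of an RL agent trained under the curriculum.
    return -(difficulty - 0.6) ** 2

def rbf_kernel(a, b, length_scale=0.2):
    # Squared-exponential kernel over 1-D curriculum parameters.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length_scale ** 2)

def gp_posterior(X, y, X_star, noise=1e-6):
    # Gaussian-process posterior mean and std at the candidate points.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    K_s = rbf_kernel(X, X_star)
    alpha = np.linalg.solve(K, y)
    mu = K_s.T @ alpha
    v = np.linalg.solve(K, K_s)
    # diag of K_ss - K_s^T K^-1 K_s; the kernel's diagonal is 1.
    var = 1.0 - np.sum(K_s * v, axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=3)          # initial random curricula
y = np.array([agent_reward(x) for x in X])
candidates = np.linspace(0.0, 1.0, 201)

for _ in range(15):
    mu, sigma = gp_posterior(X, y, candidates)
    ucb = mu + 2.0 * sigma                 # upper-confidence-bound acquisition
    x_next = candidates[np.argmax(ucb)]
    X = np.append(X, x_next)
    y = np.append(y, agent_reward(x_next))

best_curriculum = X[np.argmax(y)]
print(best_curriculum)
```

The loop trades off exploring uncertain curricula (high `sigma`) against exploiting promising ones (high `mu`); after a handful of evaluations the selected curriculum concentrates near the true optimum of the toy objective.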